openfree committed on
Commit 09d782b · verified · 1 Parent(s): e6fa448

Update app-backup.py

Files changed (1)
  1. app-backup.py +1760 -129
app-backup.py CHANGED
@@ -1363,151 +1363,1782 @@ if LLAMA_CPP_AVAILABLE:
      print(f"Failed to download model at startup: {e}")


- # Gradio Interface
- with gr.Blocks(theme='soft', title="AI Podcast Generator") as demo:
-     gr.Markdown("# 🎙️ AI Podcast Generator - Professional Edition")
-     gr.Markdown("Convert any article, blog, PDF document, or topic into an engaging professional podcast conversation with in-depth analysis!")
- # ์ƒ๋‹จ์— ๋กœ์ปฌ LLM ์ƒํƒœ ํ‘œ์‹œ
1372
- with gr.Row():
1373
- gr.Markdown(f"""
1374
- ### ๐Ÿค– Enhanced Professional Configuration:
1375
- - **Primary**: Local LLM ({converter.config.local_model_name}) - Runs on your device
1376
- - **Fallback**: API LLM ({converter.config.api_model_name}) - Used when local fails
1377
- - **Status**: {"โœ… Llama CPP Available" if LLAMA_CPP_AVAILABLE else "โŒ Llama CPP Not Available - Install llama-cpp-python"}
1378
- - **Conversation Style**: Professional podcast with 2-4 sentence detailed answers
1379
- - **Conversation Length**: {converter.config.min_conversation_turns}-{converter.config.max_conversation_turns} exchanges (professional depth)
1380
- - **Search**: {"โœ… Brave Search Enabled" if BRAVE_KEY else "โŒ Brave Search Not Available - Set BSEARCH_API"}
1381
- - **Features**: ๐ŸŽฏ Keyword input | ๐Ÿ“Š Data-driven insights | ๐Ÿ”ฌ Expert analysis
1382
- """)
1383
 
1384
- with gr.Row():
1385
- with gr.Column(scale=3):
1386
- # Input type selector - ํ‚ค์›Œ๋“œ ์˜ต์…˜ ์ถ”๊ฐ€
1387
- input_type_selector = gr.Radio(
1388
- choices=["URL", "PDF", "Keyword"],
1389
- value="URL",
1390
- label="Input Type",
1391
- info="Choose between URL, PDF file upload, or keyword/topic"
              )

-             # URL input
-             url_input = gr.Textbox(
-                 label="Article URL",
-                 placeholder="Enter the article URL here...",
-                 value="",
-                 visible=True
              )

-             # PDF upload
-             pdf_input = gr.File(
-                 label="Upload PDF",
-                 file_types=[".pdf"],
-                 visible=False
-             )

-             # Keyword input (newly added)
-             keyword_input = gr.Textbox(
-                 label="Topic/Keyword",
-                 placeholder="Enter a topic or keyword (e.g., 'AI trends', '인공지능 최신 동향')",
-                 value="",
-                 visible=False,
-                 info="The system will search for latest information about this topic"
-             )
-         with gr.Column(scale=1):
-             # Language selection
-             language_selector = gr.Radio(
-                 choices=["English", "Korean"],
-                 value="English",
-                 label="Language / 언어",
-                 info="Select output language / 출력 언어를 선택하세요"
              )

-             mode_selector = gr.Radio(
-                 choices=["Local", "API"],
-                 value="Local",
-                 label="Processing Mode",
-                 info="Local: Runs on device (Primary) | API: Cloud-based (Fallback)"
              )

-             # TTS engine selection
-             with gr.Group():
-                 gr.Markdown("### TTS Engine Selection")
-                 tts_selector = gr.Radio(
-                     choices=["Edge-TTS", "Spark-TTS", "MeloTTS"],
-                     value="Edge-TTS",
-                     label="TTS Engine",
-                     info="Edge-TTS: Cloud-based, natural voices | Spark-TTS: Local AI model | MeloTTS: Local, requires GPU"
-                 )
-
      gr.Markdown("""
-     **📻 Professional Podcast Style:**
-     - In-depth expert discussions
-     - Host asks insightful questions
-     - Expert provides detailed 2-4 sentence answers
-     - Includes data, research, and real examples
-     - 12-15 professional exchanges
-
-     **🔍 Keyword Feature:**
-     - Enter any topic to generate a podcast
-     - Automatically searches latest information
-     - Creates expert discussion from search results
-
-     **🇰🇷 한국어 전문 팟캐스트:**
-     - 심층적인 전문가 대담
-     - 진행자(준수)가 통찰력 있는 질문
-     - 전문가(민호)가 2-4문장으로 상세 답변
-     - 데이터와 사례를 포함한 전문적 내용
      """)
-
-     convert_btn = gr.Button("🎯 Generate Professional Conversation / 전문 대화 생성", variant="primary", size="lg")
-
-     with gr.Row():
-         with gr.Column():
-             conversation_output = gr.Textbox(
-                 label="Generated Professional Conversation (Editable) / 생성된 전문 대화 (편집 가능)",
-                 lines=35,  # increased for longer professional conversations
-                 max_lines=70,
-                 interactive=True,
-                 placeholder="Professional podcast conversation will appear here. You can edit it before generating audio.\n전문 팟캐스트 대화가 여기에 표시됩니다. 오디오 생성 전에 편집할 수 있습니다.\n\n심층적이고 전문적인 대담 형식으로 진행됩니다.",
-                 info="Edit the conversation as needed. Format: 'Speaker Name: Text' / 필요에 따라 대화를 편집하세요. 형식: '화자 이름: 텍스트'"
-             )
-
-             # Audio generation button added
      with gr.Row():
-         generate_audio_btn = gr.Button("🎙️ Generate Audio from Text / 텍스트에서 오디오 생성", variant="secondary", size="lg")
-         gr.Markdown("*Edit the conversation above, then click to generate audio / 위의 대화를 편집한 후 클릭하여 오디오를 생성하세요*")
-
-         with gr.Column():
-             audio_output = gr.Audio(
-                 label="Professional Podcast Audio / 전문 팟캐스트 오디오",
-                 type="filepath",
-                 interactive=False
-             )
- # ์ƒํƒœ ๋ฉ”์‹œ์ง€ ์ถ”๊ฐ€
1490
- status_output = gr.Textbox(
1491
- label="Status / ์ƒํƒœ",
1492
- interactive=False,
1493
- visible=True
      )

-
-     gr.Examples(
-         examples=[
-             ["https://huggingface.co/blog/openfree/cycle-navigator", "URL", "Local", "Edge-TTS", "English"],
-             ["quantum computing breakthroughs", "Keyword", "Local", "Edge-TTS", "English"],  # Professional keyword example
-             ["https://huggingface.co/papers/2505.14810", "URL", "Local", "Edge-TTS", "Korean"],
-             ["인공지능 윤리와 규제", "Keyword", "Local", "Edge-TTS", "Korean"],  # Korean professional keyword
-         ],
-         inputs=[url_input, input_type_selector, mode_selector, tts_selector, language_selector],
-         outputs=[conversation_output, status_output],
-         fn=synthesize_sync,
-         cache_examples=False,
-     )
-
-     # Input type change handler - modified
      input_type_selector.change(
          fn=toggle_input_visibility,
          inputs=[input_type_selector],
@@ -1521,7 +3152,7 @@ with gr.Blocks(theme='soft', title="AI Podcast Generator") as demo:
          outputs=[tts_selector]
      )

-     # Event wiring - modified section
      def get_article_input(input_type, url_input, pdf_input, keyword_input):
          """Get the appropriate input based on input type"""
          if input_type == "URL":
@@ -1553,4 +3184,4 @@ if __name__ == "__main__":
          share=False,
          server_name="0.0.0.0",
          server_port=7860
-     )
+ # Gradio Interface - improved layout
+ with gr.Blocks(theme='soft', title="AI Podcast Generator", css="""
+     .container {max-width: 1200px; margin: auto; padding: 20px;}
+     .header-text {text-align: center; margin-bottom: 30px;}
+     .input-group {background: #f7f7f7; padding: 20px; border-radius: 10px; margin-bottom: 20px;}
+     .output-group {background: #f0f0f0; padding: 20px; border-radius: 10px;}
+     .status-box {background: #e8f4f8; padding: 15px; border-radius: 8px; margin-top: 10px;}
+ """) as demo:
+     with gr.Column(elem_classes="container"):
+         # Header
+         with gr.Row(elem_classes="header-text"):
+             gr.Markdown("""
+             # 🎙️ AI Podcast Generator - Professional Edition
+             ### Convert any article, blog, PDF document, or topic into an engaging professional podcast conversation with in-depth analysis!
+             """)
+
+         with gr.Row(elem_classes="discord-badge"):
+             gr.HTML("""
+             <p style="text-align: center;">
+                 <a href="https://discord.gg/openfreeai" target="_blank">
+                     <img src="https://img.shields.io/static/v1?label=Discord&message=Openfree%20AI&color=%230000ff&labelColor=%23800080&logo=discord&logoColor=white&style=for-the-badge" alt="badge">
+                 </a>
+             </p>
+             """)
+
+         # Status display section
+         with gr.Row():
+             with gr.Column(scale=1):
+                 gr.Markdown(f"""
+                 #### 🤖 System Status
+                 - **LLM**: {converter.config.local_model_name.split('.')[0]}
+                 - **Fallback**: {converter.config.api_model_name.split('/')[-1]}
+                 - **Llama CPP**: {"✅ Ready" if LLAMA_CPP_AVAILABLE else "❌ Not Available"}
+                 - **Search**: {"✅ Brave API" if BRAVE_KEY else "❌ No API"}
+                 """)
+             with gr.Column(scale=1):
+                 gr.Markdown("""
+                 #### 📻 Podcast Features
+                 - **Length**: 12-15 professional exchanges
+                 - **Style**: Expert discussions with data & insights
+                 - **Languages**: English & Korean (한국어)
+                 - **Input**: URL, PDF, or Keywords
+                 """)
+ # ๋ฉ”์ธ ์ž…๋ ฅ ์„น์…˜
1413
+ with gr.Group(elem_classes="input-group"):
1414
+ with gr.Row():
1415
+ # ์™ผ์ชฝ: ์ž…๋ ฅ ์˜ต์…˜๋“ค
1416
+ with gr.Column(scale=2):
1417
+ # ์ž…๋ ฅ ํƒ€์ž… ์„ ํƒ
1418
+ input_type_selector = gr.Radio(
1419
+ choices=["URL", "PDF", "Keyword"],
1420
+ value="URL",
1421
+ label="๐Ÿ“ฅ Input Type",
1422
+ info="Choose your content source"
1423
+ )
1424
+
1425
+ # URL ์ž…๋ ฅ
1426
+ url_input = gr.Textbox(
1427
+ label="๐Ÿ”— Article URL",
1428
+ placeholder="Enter the article URL here...",
1429
+ value="",
1430
+ visible=True,
1431
+ lines=2
1432
+ )
1433
+
1434
+ # PDF ์—…๋กœ๋“œ
1435
+ pdf_input = gr.File(
1436
+ label="๐Ÿ“„ Upload PDF",
1437
+ file_types=[".pdf"],
1438
+ visible=False
1439
+ )
1440
+
1441
+ # ํ‚ค์›Œ๋“œ ์ž…๋ ฅ
1442
+ keyword_input = gr.Textbox(
1443
+ label="๐Ÿ” Topic/Keyword",
1444
+ placeholder="Enter a topic (e.g., 'AI trends 2024', '์ธ๊ณต์ง€๋Šฅ ์ตœ์‹  ๋™ํ–ฅ')",
1445
+ value="",
1446
+ visible=False,
1447
+ info="System will search and compile latest information",
1448
+ lines=2
1449
+ )
1450
+
1451
+ # ์˜ค๋ฅธ์ชฝ: ์„ค์ • ์˜ต์…˜๋“ค
1452
+ with gr.Column(scale=1):
1453
+ # ์–ธ์–ด ์„ ํƒ
1454
+ language_selector = gr.Radio(
1455
+ choices=["English", "Korean"],
1456
+ value="English",
1457
+ label="๐ŸŒ Language / ์–ธ์–ด",
1458
+ info="Output language"
1459
+ )
1460
+
1461
+ # ์ฒ˜๋ฆฌ ๋ชจ๋“œ
1462
+ mode_selector = gr.Radio(
1463
+ choices=["Local", "API"],
1464
+ value="Local",
1465
+ label="โš™๏ธ Processing Mode",
1466
+ info="Local: On-device | API: Cloud"
1467
+ )
1468
+
1469
+ # TTS ์—”์ง„
1470
+ tts_selector = gr.Radio(
1471
+ choices=["Edge-TTS", "Spark-TTS", "MeloTTS"],
1472
+ value="Edge-TTS",
1473
+ label="๐Ÿ”Š TTS Engine",
1474
+ info="Voice synthesis engine"
1475
+ )
1476
+
1477
+ # ์ƒ์„ฑ ๋ฒ„ํŠผ
1478
+ with gr.Row():
1479
+ convert_btn = gr.Button(
1480
+ "๐ŸŽฏ Generate Professional Conversation",
1481
+ variant="primary",
1482
+ size="lg",
1483
+ scale=1
1484
+ )
1485
+
1486
+         # Output section
+         with gr.Group(elem_classes="output-group"):
+             with gr.Row():
+                 # Left: conversation text
+                 with gr.Column(scale=3):
+                     conversation_output = gr.Textbox(
+                         label="💬 Generated Professional Conversation (Editable)",
+                         lines=25,
+                         max_lines=50,
+                         interactive=True,
+                         placeholder="Professional podcast conversation will appear here...\n전문 팟캐스트 대화가 여기에 표시됩니다...",
+                         info="Edit the conversation as needed. Format: 'Speaker Name: Text'"
+                     )
+
+                     # Audio generation button
+                     with gr.Row():
+                         generate_audio_btn = gr.Button(
+                             "🎙️ Generate Audio from Text",
+                             variant="secondary",
+                             size="lg"
+                         )
+
+                 # Right: audio output and status
+                 with gr.Column(scale=2):
+                     audio_output = gr.Audio(
+                         label="🎧 Professional Podcast Audio",
+                         type="filepath",
+                         interactive=False
+                     )
+
+                     status_output = gr.Textbox(
+                         label="📊 Status",
+                         interactive=False,
+                         lines=3,
+                         elem_classes="status-box"
+                     )
+
+         # Help
+         gr.Markdown("""
+         #### 💡 Quick Tips:
+         - **URL**: Paste any article link
+         - **PDF**: Upload documents directly
+         - **Keyword**: Enter topics for AI research
+         - Edit conversation before audio generation
+         - Korean (한국어) fully supported
+         """)
+
+         # Examples section
+         with gr.Accordion("📚 Examples", open=False):
+             gr.Examples(
+                 examples=[
+                     ["https://huggingface.co/blog/openfree/cycle-navigator", "URL", "Local", "Edge-TTS", "English"],
+                     ["quantum computing breakthroughs", "Keyword", "Local", "Edge-TTS", "English"],
+                     ["https://huggingface.co/papers/2505.14810", "URL", "Local", "Edge-TTS", "Korean"],
+                     ["인공지능 윤리와 규제", "Keyword", "Local", "Edge-TTS", "Korean"],
+                 ],
+                 inputs=[url_input, input_type_selector, mode_selector, tts_selector, language_selector],
+                 outputs=[conversation_output, status_output],
+                 fn=synthesize_sync,
+                 cache_examples=False,
+             )
+
+         # Input type change handler
+         input_type_selector.change(
+             fn=toggle_input_visibility,
+             inputs=[input_type_selector],
+             outputs=[url_input, pdf_input, keyword_input]
+         )
+
+         # Update TTS engine options when the language changes
+         language_selector.change(
+             fn=update_tts_engine_for_korean,
+             inputs=[language_selector],
+             outputs=[tts_selector]
+         )
+
+         # Event wiring
+         def get_article_input(input_type, url_input, pdf_input, keyword_input):
+             """Get the appropriate input based on input type"""
+             if input_type == "URL":
+                 return url_input
+             elif input_type == "PDF":
+                 return pdf_input
+             else:  # Keyword
+                 return keyword_input
+
+         convert_btn.click(
+             fn=lambda input_type, url_input, pdf_input, keyword_input, mode, tts, lang: synthesize_sync(
+                 get_article_input(input_type, url_input, pdf_input, keyword_input), input_type, mode, tts, lang
+             ),
+             inputs=[input_type_selector, url_input, pdf_input, keyword_input, mode_selector, tts_selector, language_selector],
+             outputs=[conversation_output, status_output]
+         )
+
+         generate_audio_btn.click(
+             fn=regenerate_audio_sync,
+             inputs=[conversation_output, tts_selector, language_selector],
+             outputs=[status_output, audio_output]
+         )
+ # Launch the app
+ if __name__ == "__main__":
+     demo.queue(api_open=True, default_concurrency_limit=10).launch(
+         show_api=True,
+         share=False,
+         server_name="0.0.0.0",
+         server_port=7860
+     )
+ import spaces  # added
+ import gradio as gr
+ import os
+ import asyncio
+ import torch
+ import io
+ import json
+ import re
+ import httpx
+ import tempfile
+ import wave
+ import base64
+ import numpy as np
+ import soundfile as sf
+ import subprocess
+ import shutil
+ import requests
+ import logging
+ from datetime import datetime, timedelta
+ from dataclasses import dataclass
+ from typing import List, Tuple, Dict, Optional
+ from pathlib import Path
+ from threading import Thread
+ from dotenv import load_dotenv
+
+ # PDF processing imports
+ from langchain_community.document_loaders import PyPDFLoader
+
+ # Edge TTS imports
+ import edge_tts
+ from pydub import AudioSegment
+
+ # OpenAI imports
+ from openai import OpenAI
+
+ # Transformers imports (for legacy local mode)
+ from transformers import (
+     AutoModelForCausalLM,
+     AutoTokenizer,
+     TextIteratorStreamer,
+     BitsAndBytesConfig,
+ )
+
+ # Llama CPP imports (for new local mode)
+ try:
+     from llama_cpp import Llama
+     from llama_cpp_agent import LlamaCppAgent, MessagesFormatterType
+     from llama_cpp_agent.providers import LlamaCppPythonProvider
+     from llama_cpp_agent.chat_history import BasicChatHistory
+     from llama_cpp_agent.chat_history.messages import Roles
+     from huggingface_hub import hf_hub_download
+     LLAMA_CPP_AVAILABLE = True
+ except ImportError:
+     LLAMA_CPP_AVAILABLE = False
+
+ # Spark TTS imports
+ try:
+     from huggingface_hub import snapshot_download
+     SPARK_AVAILABLE = True
+ except:
+     SPARK_AVAILABLE = False
+
+ # MeloTTS imports (for local mode)
+ try:
+     # Handle the unidic download conditionally
+     if not os.path.exists("/usr/local/lib/python3.10/site-packages/unidic"):
+         try:
+             os.system("python -m unidic download")
+         except:
+             pass
+     from melo.api import TTS as MeloTTS
+     MELO_AVAILABLE = True
+ except:
+     MELO_AVAILABLE = False
+
+ load_dotenv()
+ # Brave Search API settings
+ BRAVE_KEY = os.getenv("BSEARCH_API")
+ BRAVE_ENDPOINT = "https://api.search.brave.com/res/v1/web/search"
+
+ @dataclass
+ class ConversationConfig:
+     max_words: int = 8000  # increased from 4000 to 6000 (1.5x)
+     prefix_url: str = "https://r.jina.ai/"
+     api_model_name: str = "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"
+     legacy_local_model_name: str = "NousResearch/Hermes-2-Pro-Llama-3-8B"
+     # New local model settings
+     local_model_name: str = "Private-BitSix-Mistral-Small-3.1-24B-Instruct-2503.gguf"
+     local_model_repo: str = "ginigen/Private-BitSix-Mistral-Small-3.1-24B-Instruct-2503"
+     # Increased token counts
+     max_tokens: int = 6000  # increased from 3000 to 4500 (1.5x)
+     max_new_tokens: int = 12000  # increased from 6000 to 9000 (1.5x)
+     min_conversation_turns: int = 18  # minimum number of conversation turns
+     max_conversation_turns: int = 20  # maximum number of conversation turns
+
+
+ def brave_search(query: str, count: int = 8, freshness_days: int | None = None):
+     """Search for up-to-date information with the Brave Search API"""
+     if not BRAVE_KEY:
+         return []
+     params = {"q": query, "count": str(count)}
+     if freshness_days:
+         dt_from = (datetime.utcnow() - timedelta(days=freshness_days)).strftime("%Y-%m-%d")
+         params["freshness"] = dt_from
+     try:
+         r = requests.get(
+             BRAVE_ENDPOINT,
+             headers={"Accept": "application/json", "X-Subscription-Token": BRAVE_KEY},
+             params=params,
+             timeout=15
+         )
+         raw = r.json().get("web", {}).get("results") or []
+         return [{
+             "title": r.get("title", ""),
+             "url": r.get("url", r.get("link", "")),
+             "snippet": r.get("description", r.get("text", "")),
+             "host": re.sub(r"https?://(www\.)?", "", r.get("url", "")).split("/")[0]
+         } for r in raw[:count]]
+     except Exception as e:
+         logging.error(f"Brave search error: {e}")
+         return []
+ def format_search_results(query: str, for_keyword: bool = False) -> str:
+     """Format search results and return them as a string"""
+     # Use more results for keyword searches
+     count = 5 if for_keyword else 3
+     rows = brave_search(query, count, freshness_days=7 if not for_keyword else None)
+     if not rows:
+         return ""
+
+     results = []
+     # Include more detail for keyword searches
+     max_results = 4 if for_keyword else 2
+     for r in rows[:max_results]:
+         if for_keyword:
+             # Keyword searches use longer snippets
+             snippet = r['snippet'][:200] + "..." if len(r['snippet']) > 200 else r['snippet']
+             results.append(f"**{r['title']}**\n{snippet}\nSource: {r['host']}")
+         else:
+             # Regular searches use short snippets
+             snippet = r['snippet'][:100] + "..." if len(r['snippet']) > 100 else r['snippet']
+             results.append(f"- {r['title']}: {snippet}")
+
+     return "\n\n".join(results) + "\n"
+
+ def extract_keywords_for_search(text: str, language: str = "English") -> List[str]:
+     """Extract keywords to search for from the text (improved)"""
+     # Only use the beginning of the text (avoid processing too much text)
+     text_sample = text[:500]
+
+     if language == "Korean":
+         import re
+         # Extract Korean nouns (two or more characters)
+         keywords = re.findall(r'[가-힣]{2,}', text_sample)
+         # Deduplicate and keep only the single longest word
+         unique_keywords = list(dict.fromkeys(keywords))
+         # Sort by length and pick the word most likely to be meaningful
+         unique_keywords.sort(key=len, reverse=True)
+         return unique_keywords[:1]  # return only one keyword
+     else:
+         # For English, the single longest capitalized word
+         words = text_sample.split()
+         keywords = [word.strip('.,!?;:') for word in words
+                     if len(word) > 4 and word[0].isupper()]
+         if keywords:
+             return [max(keywords, key=len)]  # the single longest word
+         return []
+ def search_and_compile_content(keyword: str, language: str = "English") -> str:
+     """Compile enough content for a podcast by searching on the keyword"""
+     if not BRAVE_KEY:
+         # Generate default content even without the API
+         if language == "Korean":
+             return f"""
+             '{keyword}'에 대한 종합적인 정보:
+
+             {keyword}는 현대 사회에서 매우 중요한 주제입니다.
+             이 주제는 다양한 측면에서 우리의 삶에 영향을 미치고 있으며,
+             최근 들어 더욱 주목받고 있습니다.
+
+             주요 특징:
+             1. 기술적 발전과 혁신
+             2. 사회적 영향과 변화
+             3. 미래 전망과 가능성
+             4. 실용적 활용 방안
+             5. 글로벌 트렌드와 동향
+
+             전문가들은 {keyword}가 앞으로 더욱 중요해질 것으로 예상하고 있으며,
+             이에 대한 깊이 있는 이해가 필요한 시점입니다.
+             """
+         else:
+             return f"""
+             Comprehensive information about '{keyword}':
+
+             {keyword} is a significant topic in modern society.
+             This subject impacts our lives in various ways and has been
+             gaining increasing attention recently.
+
+             Key aspects:
+             1. Technological advancement and innovation
+             2. Social impact and changes
+             3. Future prospects and possibilities
+             4. Practical applications
+             5. Global trends and developments
+
+             Experts predict that {keyword} will become even more important,
+             and it's crucial to develop a deep understanding of this topic.
+             """
+
+     # A variety of search queries depending on the language
+     if language == "Korean":
+         queries = [
+             f"{keyword} 최신 뉴스 2024",
+             f"{keyword} 정보 설명",
+             f"{keyword} 트렌드 전망",
+             f"{keyword} 장점 단점",
+             f"{keyword} 활용 방법",
+             f"{keyword} 전문가 의견"
+         ]
+     else:
+         queries = [
+             f"{keyword} latest news 2024",
+             f"{keyword} explained comprehensive",
+             f"{keyword} trends forecast",
+             f"{keyword} advantages disadvantages",
+             f"{keyword} how to use",
+             f"{keyword} expert opinions"
+         ]
+
+     all_content = []
+     total_content_length = 0
+
+     for query in queries:
+         results = brave_search(query, count=5)  # fetch more results
+         for r in results[:3]:  # top 3 per query
+             content = f"**{r['title']}**\n{r['snippet']}\nSource: {r['host']}\n"
+             all_content.append(content)
+             total_content_length += len(r['snippet'])
+
+     # Generate extra content if there is not enough
+     if total_content_length < 1000:  # ensure at least 1000 characters
+         if language == "Korean":
+             additional_content = f"""
+             추가 정보:
+             {keyword}와 관련된 최근 동향을 살펴보면, 이 분야는 빠르게 발전하고 있습니다.
+             많은 전문가들이 이 주제에 대해 활발히 연구하고 있으며,
+             실생활에서의 응용 가능성도 계속 확대되고 있습니다.
+
+             특히 주목할 점은:
+             - 기술 혁신의 가속화
+             - 사용자 경험의 개선
+             - 접근성의 향상
+             - 비용 효율성 증대
+             - 글로벌 시장의 성장
+
+             이러한 요소들이 {keyword}의 미래를 더욱 밝게 만들고 있습니다.
+             """
+         else:
+             additional_content = f"""
+             Additional insights:
+             Recent developments in {keyword} show rapid advancement in this field.
+             Many experts are actively researching this topic, and its practical
+             applications continue to expand.
+
+             Key points to note:
+             - Accelerating technological innovation
+             - Improving user experience
+             - Enhanced accessibility
+             - Increased cost efficiency
+             - Growing global market
+
+             These factors are making the future of {keyword} increasingly promising.
+             """
+         all_content.append(additional_content)
+
+     # Return the compiled content
+     compiled = "\n\n".join(all_content)
+
+     # Keyword-based introduction
+     if language == "Korean":
+         intro = f"### '{keyword}'에 대한 종합적인 정보와 최신 동향:\n\n"
+     else:
+         intro = f"### Comprehensive information and latest trends about '{keyword}':\n\n"
+
+     return intro + compiled
+ def _build_prompt(self, text: str, language: str = "English", search_context: str = "") -> str:
+     """Build prompt for conversation generation with enhanced radio talk show style"""
+     # Limit the text length
+     max_text_length = 4500 if search_context else 6000
+     if len(text) > max_text_length:
+         text = text[:max_text_length] + "..."
+
+     if language == "Korean":
+         # Conversation template expanded to more turns (15-20)
+         template = """
+         {
+             "conversation": [
+                 {"speaker": "준수", "text": ""},
+                 {"speaker": "민호", "text": ""},
+                 {"speaker": "준수", "text": ""},
+                 {"speaker": "민호", "text": ""},
+                 {"speaker": "준수", "text": ""},
+                 {"speaker": "민호", "text": ""},
+                 {"speaker": "준수", "text": ""},
+                 {"speaker": "민호", "text": ""},
+                 {"speaker": "준수", "text": ""},
+                 {"speaker": "민호", "text": ""},
+                 {"speaker": "준수", "text": ""},
+                 {"speaker": "민호", "text": ""},
+                 {"speaker": "준수", "text": ""},
+                 {"speaker": "민호", "text": ""},
+                 {"speaker": "준수", "text": ""},
+                 {"speaker": "민호", "text": ""},
+                 {"speaker": "준수", "text": ""},
+                 {"speaker": "민호", "text": ""}
+             ]
+         }
+         """
+
+         context_part = ""
+         if search_context:
+             context_part = f"# 최신 관련 정보:\n{search_context}\n"
+
+         base_prompt = (
+             f"# 원본 콘텐츠:\n{text}\n\n"
+             f"{context_part}"
+             f"위 내용으로 전문적이고 심층적인 라디오 팟캐스트 대담 프로그램 대본을 작성해주세요.\n\n"
+             f"## 필수 요구사항:\n"
+             f"1. **최소 18회 이상의 대화 교환** (준수 9회, 민호 9회 이상)\n"
+             f"2. **대화 스타일**: 전문적이고 깊이 있는 팟캐스트 대담\n"
+             f"3. **화자 역할**:\n"
+             f"   - 준수: 진행자 (통찰력 있는 질문, 핵심 포인트 정리, 청취자 관점 대변)\n"
+             f"   - 민호: 전문가 (상세하고 전문적인 설명, 구체적 예시, 데이터 기반 분석)\n"
+             f"4. **답변 규칙**:\n"
+             f"   - 준수: 1-2문장의 명확한 질문이나 요약\n"
+             f"   - 민호: **반드시 2-4문장으로 충실하게 답변** (핵심 개념 설명 + 부연 설명 + 예시/근거)\n"
+             f"   - 전문 용어는 쉽게 풀어서 설명\n"
+             f"   - 구체적인 수치, 사례, 연구 결과 인용\n"
+             f"5. **내용 구성**:\n"
+             f"   - 도입부 (2-3회): 주제의 중요성과 배경 설명\n"
+             f"   - 전개부 (12-14회): 핵심 내용을 다각도로 심층 분석\n"
+             f"   - 마무리 (2-3회): 핵심 요약과 미래 전망\n"
+             f"6. **전문성**: 학술적 근거와 실무적 통찰을 균형있게 포함\n"
+             f"7. **필수**: 서로 존댓말 사용, 청취자가 전문 지식을 얻을 수 있도록 상세히 설명\n\n"
+             f"반드시 위 JSON 형식으로 18회 이상의 전문적인 대화를 작성하세요:\n{template}"
+         )
+
+         return base_prompt
+
+     else:
+         # Expanded English template as well
+         template = """
+         {
+             "conversation": [
+                 {"speaker": "Alex", "text": ""},
+                 {"speaker": "Jordan", "text": ""},
+                 {"speaker": "Alex", "text": ""},
+                 {"speaker": "Jordan", "text": ""},
+                 {"speaker": "Alex", "text": ""},
+                 {"speaker": "Jordan", "text": ""},
+                 {"speaker": "Alex", "text": ""},
+                 {"speaker": "Jordan", "text": ""},
+                 {"speaker": "Alex", "text": ""},
+                 {"speaker": "Jordan", "text": ""},
+                 {"speaker": "Alex", "text": ""},
+                 {"speaker": "Jordan", "text": ""},
+                 {"speaker": "Alex", "text": ""},
+                 {"speaker": "Jordan", "text": ""},
+                 {"speaker": "Alex", "text": ""},
+                 {"speaker": "Jordan", "text": ""},
+                 {"speaker": "Alex", "text": ""},
+                 {"speaker": "Jordan", "text": ""}
+             ]
+         }
+         """
+
+         context_part = ""
+         if search_context:
+             context_part = f"# Latest Information:\n{search_context}\n"
+
+         base_prompt = (
+             f"# Content:\n{text}\n\n"
+             f"{context_part}"
+             f"Create a professional and in-depth podcast conversation.\n\n"
+             f"## Requirements:\n"
+             f"1. **Minimum 18 conversation exchanges** (Alex 9+, Jordan 9+)\n"
+             f"2. **Style**: Professional, insightful podcast discussion\n"
+             f"3. **Roles**:\n"
+             f"   - Alex: Host (insightful questions, key point summaries, audience perspective)\n"
+             f"   - Jordan: Expert (detailed explanations, concrete examples, data-driven analysis)\n"
+             f"4. **Response Rules**:\n"
+             f"   - Alex: 1-2 sentence clear questions or summaries\n"
+             f"   - Jordan: **Must answer in 2-4 sentences** (core concept + elaboration + example/evidence)\n"
1990
+ f" - Explain technical terms clearly\n"
1991
+ f" - Include specific data, cases, research findings\n"
1992
+ f"5. **Structure**:\n"
1993
+ f" - Introduction (2-3 exchanges): Topic importance and context\n"
1994
+ f" - Main content (12-14 exchanges): Multi-angle deep analysis\n"
1995
+ f" - Conclusion (2-3 exchanges): Key takeaways and future outlook\n"
1996
+ f"6. **Expertise**: Balance academic rigor with practical insights\n\n"
1997
+ f"Create exactly 18+ professional exchanges in this JSON format:\n{template}"
1998
+ )
1999
+
2000
+ return base_prompt
2001
+
2002
class UnifiedAudioConverter:
    def __init__(self, config: ConversationConfig):
        self.config = config
        self.llm_client = None
        self.legacy_local_model = None
        self.legacy_tokenizer = None
        # State for the new local LLM (Llama CPP)
        self.local_llm = None
        self.local_llm_model = None
        self.melo_models = None
        self.spark_model_dir = None
        self.device = "cuda" if torch.cuda.is_available() else "cpu"

    def initialize_api_mode(self, api_key: str):
        """Initialize API mode with Together API (now fallback)"""
        self.llm_client = OpenAI(api_key=api_key, base_url="https://api.together.xyz/v1")

    @spaces.GPU(duration=120)
    def initialize_local_mode(self):
        """Initialize new local mode with Llama CPP"""
        if not LLAMA_CPP_AVAILABLE:
            raise RuntimeError("Llama CPP dependencies not available. Please install llama-cpp-python and llama-cpp-agent.")

        if self.local_llm is None or self.local_llm_model != self.config.local_model_name:
            try:
                # Download the GGUF model from the Hugging Face Hub
                model_path = hf_hub_download(
                    repo_id=self.config.local_model_repo,
                    filename=self.config.local_model_name,
                    local_dir="./models"
                )

                model_path_local = os.path.join("./models", self.config.local_model_name)

                if not os.path.exists(model_path_local):
                    raise RuntimeError(f"Model file not found at {model_path_local}")

                # Initialize the Llama model
                self.local_llm = Llama(
                    model_path=model_path_local,
                    flash_attn=True,
                    n_gpu_layers=81 if torch.cuda.is_available() else 0,
                    n_batch=1024,
                    n_ctx=16384,
                )
                self.local_llm_model = self.config.local_model_name
                print(f"Local LLM initialized: {model_path_local}")

            except Exception as e:
                print(f"Failed to initialize local LLM: {e}")
                raise RuntimeError(f"Failed to initialize local LLM: {e}")

    @spaces.GPU(duration=60)
    def initialize_legacy_local_mode(self):
        """Initialize legacy local mode with Hugging Face model (fallback)"""
        if self.legacy_local_model is None:
            quantization_config = BitsAndBytesConfig(
                load_in_4bit=True,
                bnb_4bit_compute_dtype=torch.float16
            )
            self.legacy_local_model = AutoModelForCausalLM.from_pretrained(
                self.config.legacy_local_model_name,
                quantization_config=quantization_config
            )
            self.legacy_tokenizer = AutoTokenizer.from_pretrained(
                self.config.legacy_local_model_name,
                revision='8ab73a6800796d84448bc936db9bac5ad9f984ae'
            )

    def initialize_spark_tts(self):
        """Initialize Spark TTS model by downloading if needed"""
        if not SPARK_AVAILABLE:
            raise RuntimeError("Spark TTS dependencies not available")

        model_dir = "pretrained_models/Spark-TTS-0.5B"

        # Check if model exists, if not download it
        if not os.path.exists(model_dir):
            print("Downloading Spark-TTS model...")
            try:
                os.makedirs("pretrained_models", exist_ok=True)
                snapshot_download(
                    "SparkAudio/Spark-TTS-0.5B",
                    local_dir=model_dir
                )
                print("Spark-TTS model downloaded successfully")
            except Exception as e:
                raise RuntimeError(f"Failed to download Spark-TTS model: {e}")

        self.spark_model_dir = model_dir

        # Check if we have the CLI inference script
        if not os.path.exists("cli/inference.py"):
            print("Warning: Spark-TTS CLI not found. Please clone the Spark-TTS repository.")

    @spaces.GPU(duration=60)
    def initialize_melo_tts(self):
        """Initialize MeloTTS models"""
        if MELO_AVAILABLE and self.melo_models is None:
            self.melo_models = {"EN": MeloTTS(language="EN", device=self.device)}

    def fetch_text(self, url: str) -> str:
        """Fetch text content from URL"""
        if not url:
            raise ValueError("URL cannot be empty")

        if not url.startswith("http://") and not url.startswith("https://"):
            raise ValueError("URL must start with 'http://' or 'https://'")

        full_url = f"{self.config.prefix_url}{url}"
        try:
            response = httpx.get(full_url, timeout=60.0)
            response.raise_for_status()
            return response.text
        except httpx.HTTPError as e:
            raise RuntimeError(f"Failed to fetch URL: {e}")

    def extract_text_from_pdf(self, pdf_file) -> str:
        """Extract text content from PDF file"""
        try:
            # Gradio returns file path, not file object
            if isinstance(pdf_file, str):
                pdf_path = pdf_file
            else:
                # If it's a file object (shouldn't happen with Gradio)
                with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as tmp_file:
                    tmp_file.write(pdf_file.read())
                    pdf_path = tmp_file.name

            # Load the PDF and extract its text
            loader = PyPDFLoader(pdf_path)
            pages = loader.load()

            # Combine the text from all pages
            text = "\n".join([page.page_content for page in pages])

            # Delete the temporary file if one was created
            if not isinstance(pdf_file, str) and os.path.exists(pdf_path):
                os.unlink(pdf_path)

            return text
        except Exception as e:
            raise RuntimeError(f"Failed to extract text from PDF: {e}")

    def _get_messages_formatter_type(self, model_name):
        """Get appropriate message formatter for the model"""
        if "Mistral" in model_name or "BitSix" in model_name:
            return MessagesFormatterType.CHATML
        else:
            return MessagesFormatterType.LLAMA_3

    def _build_prompt(self, text: str, language: str = "English", search_context: str = "") -> str:
        """Build prompt for conversation generation with enhanced professional podcast style"""
        # Limit the input text length
        max_text_length = 4500 if search_context else 6000
        if len(text) > max_text_length:
            text = text[:max_text_length] + "..."

        if language == "Korean":
            # Conversation template expanded to more turns
            template = """
{
    "conversation": [
        {"speaker": "์ค€์ˆ˜", "text": ""},
        {"speaker": "๋ฏผํ˜ธ", "text": ""},
        {"speaker": "์ค€์ˆ˜", "text": ""},
        {"speaker": "๋ฏผํ˜ธ", "text": ""},
        {"speaker": "์ค€์ˆ˜", "text": ""},
        {"speaker": "๋ฏผํ˜ธ", "text": ""},
        {"speaker": "์ค€์ˆ˜", "text": ""},
        {"speaker": "๋ฏผํ˜ธ", "text": ""},
        {"speaker": "์ค€์ˆ˜", "text": ""},
        {"speaker": "๋ฏผํ˜ธ", "text": ""},
        {"speaker": "์ค€์ˆ˜", "text": ""},
        {"speaker": "๋ฏผํ˜ธ", "text": ""}
    ]
}
"""

            context_part = ""
            if search_context:
                context_part = f"# ์ตœ์‹  ๊ด€๋ จ ์ •๋ณด:\n{search_context}\n"

            base_prompt = (
                f"# ์›๋ณธ ์ฝ˜ํ…์ธ :\n{text}\n\n"
                f"{context_part}"
                f"์œ„ ๋‚ด์šฉ์œผ๋กœ ์ „๋ฌธ์ ์ด๊ณ  ์‹ฌ์ธต์ ์ธ ํŒŸ์บ์ŠคํŠธ ๋Œ€๋‹ด ํ”„๋กœ๊ทธ๋žจ ๋Œ€๋ณธ์„ ์ž‘์„ฑํ•ด์ฃผ์„ธ์š”.\n\n"
                f"## ํ•ต์‹ฌ ์ง€์นจ:\n"
                f"1. **๋Œ€ํ™” ์Šคํƒ€์ผ**: ์ „๋ฌธ์ ์ด๋ฉด์„œ๋„ ์ดํ•ดํ•˜๊ธฐ ์‰ฌ์šด ํŒŸ์บ์ŠคํŠธ ๋Œ€๋‹ด\n"
                f"2. **ํ™”์ž ์—ญํ• **:\n"
                f"   - ์ค€์ˆ˜: ์ง„ํ–‰์ž/ํ˜ธ์ŠคํŠธ (ํ•ต์‹ฌ์„ ์งš๋Š” ์งˆ๋ฌธ, ์ฒญ์ทจ์ž ๊ด€์ ์—์„œ ๊ถ๊ธˆํ•œ ์  ์งˆ๋ฌธ)\n"
                f"   - ๋ฏผํ˜ธ: ์ „๋ฌธ๊ฐ€ (๊นŠ์ด ์žˆ๋Š” ์„ค๋ช…, ๊ตฌ์ฒด์  ์‚ฌ๋ก€์™€ ๋ฐ์ดํ„ฐ ์ œ์‹œ)\n"
                f"3. **์ค‘์š”ํ•œ ๋‹ต๋ณ€ ๊ทœ์น™**:\n"
                f"   - ์ค€์ˆ˜: 1-2๋ฌธ์žฅ์˜ ๋ช…ํ™•ํ•œ ์งˆ๋ฌธ (\"๊ทธ๋ ‡๋‹ค๋ฉด ๊ตฌ์ฒด์ ์œผ๋กœ ์–ด๋–ค ์˜๋ฏธ์ธ๊ฐ€์š”?\", \"์‹ค์ œ ์‚ฌ๋ก€๋ฅผ ๋“ค์–ด์ฃผ์‹œ๊ฒ ์–ด์š”?\")\n"
                f"   - ๋ฏผํ˜ธ: **๋ฐ˜๋“œ์‹œ 2-4๋ฌธ์žฅ์œผ๋กœ ์ถฉ์‹คํžˆ ๋‹ต๋ณ€** (๊ฐœ๋… ์„ค๋ช… + ๊ตฌ์ฒด์  ์„ค๋ช… + ์˜ˆ์‹œ๋‚˜ ํ•จ์˜)\n"
                f"   - ์˜ˆ: \"์ด๊ฒƒ์€ ~๋ฅผ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๊ตฌ์ฒด์ ์œผ๋กœ ~ํ•œ ์ธก๋ฉด์—์„œ ์ค‘์š”ํ•œ๋ฐ์š”. ์‹ค์ œ๋กœ ์ตœ๊ทผ ~ํ•œ ์‚ฌ๋ก€๊ฐ€ ์žˆ์—ˆ๊ณ , ์ด๋Š” ~๋ฅผ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค.\"\n"
                f"4. **์ „๋ฌธ์„ฑ ์š”์†Œ**:\n"
                f"   - ํ†ต๊ณ„๋‚˜ ์—ฐ๊ตฌ ๊ฒฐ๊ณผ ์ธ์šฉ\n"
                f"   - ์‹ค์ œ ์‚ฌ๋ก€์™€ ์ผ€์ด์Šค ์Šคํ„ฐ๋””\n"
                f"   - ์ „๋ฌธ ์šฉ์–ด๋ฅผ ์‰ฝ๊ฒŒ ํ’€์–ด์„œ ์„ค๋ช…\n"
                f"   - ๋‹ค์–‘ํ•œ ๊ด€์ ๊ณผ ์‹œ๊ฐ ์ œ์‹œ\n"
                f"5. **ํ•„์ˆ˜ ๊ทœ์น™**: ์„œ๋กœ ์กด๋Œ“๋ง ์‚ฌ์šฉ, 12-15ํšŒ ๋Œ€ํ™” ๊ตํ™˜\n\n"
                f"JSON ํ˜•์‹์œผ๋กœ๋งŒ ๋ฐ˜ํ™˜:\n{template}"
            )

            return base_prompt

        else:
            # English template expanded as well
            template = """
{
    "conversation": [
        {"speaker": "Alex", "text": ""},
        {"speaker": "Jordan", "text": ""},
        {"speaker": "Alex", "text": ""},
        {"speaker": "Jordan", "text": ""},
        {"speaker": "Alex", "text": ""},
        {"speaker": "Jordan", "text": ""},
        {"speaker": "Alex", "text": ""},
        {"speaker": "Jordan", "text": ""},
        {"speaker": "Alex", "text": ""},
        {"speaker": "Jordan", "text": ""},
        {"speaker": "Alex", "text": ""},
        {"speaker": "Jordan", "text": ""}
    ]
}
"""

            context_part = ""
            if search_context:
                context_part = f"# Latest Information:\n{search_context}\n"

            base_prompt = (
                f"# Content:\n{text}\n\n"
                f"{context_part}"
                f"Create a professional and insightful podcast conversation.\n\n"
                f"## Key Guidelines:\n"
                f"1. **Style**: Professional yet accessible podcast discussion\n"
                f"2. **Roles**:\n"
                f"   - Alex: Host (insightful questions, audience perspective)\n"
                f"   - Jordan: Expert (in-depth explanations, concrete examples and data)\n"
                f"3. **Critical Response Rules**:\n"
                f"   - Alex: 1-2 sentence clear questions (\"Could you elaborate on that?\", \"What's a real-world example?\")\n"
                f"   - Jordan: **Must answer in 2-4 sentences** (concept + detailed explanation + example/implication)\n"
                f"   - Example: \"This refers to... Specifically, it's important because... For instance, recent studies show... This demonstrates...\"\n"
                f"4. **Professional Elements**:\n"
                f"   - Cite statistics and research\n"
                f"   - Real cases and case studies\n"
                f"   - Explain technical terms clearly\n"
                f"   - Present multiple perspectives\n"
                f"5. **Length**: 12-15 exchanges total\n\n"
                f"Return JSON only:\n{template}"
            )

            return base_prompt

    def _build_messages_for_local(self, text: str, language: str = "English", search_context: str = "") -> List[Dict]:
        """Build messages for local LLM with enhanced professional podcast style"""
        if language == "Korean":
            system_message = (
                "๋‹น์‹ ์€ ํ•œ๊ตญ ์ตœ๊ณ ์˜ ์ „๋ฌธ ํŒŸ์บ์ŠคํŠธ ์ž‘๊ฐ€์ž…๋‹ˆ๋‹ค. "
                "์ฒญ์ทจ์ž๋“ค์ด ์ „๋ฌธ ์ง€์‹์„ ์‰ฝ๊ฒŒ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋Š” ๊ณ ํ’ˆ์งˆ ๋Œ€๋‹ด์„ ๋งŒ๋“ค์–ด๋ƒ…๋‹ˆ๋‹ค.\n\n"
                "ํ•ต์‹ฌ ์›์น™:\n"
                "1. ์ง„ํ–‰์ž(์ค€์ˆ˜)๋Š” ํ•ต์‹ฌ์„ ์งš๋Š” ํ†ต์ฐฐ๋ ฅ ์žˆ๋Š” ์งˆ๋ฌธ์œผ๋กœ ๋Œ€ํ™”๋ฅผ ์ด๋Œ์–ด๊ฐ‘๋‹ˆ๋‹ค\n"
                "2. ์ „๋ฌธ๊ฐ€(๋ฏผํ˜ธ)๋Š” ๋ฐ˜๋“œ์‹œ 2-4๋ฌธ์žฅ์œผ๋กœ ๊นŠ์ด ์žˆ๊ฒŒ ๋‹ต๋ณ€ํ•ฉ๋‹ˆ๋‹ค (๊ฐœ๋…+์„ค๋ช…+์˜ˆ์‹œ)\n"
                "3. ๊ตฌ์ฒด์ ์ธ ๋ฐ์ดํ„ฐ, ์—ฐ๊ตฌ ๊ฒฐ๊ณผ, ์‹ค์ œ ์‚ฌ๋ก€๋ฅผ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค\n"
                "4. ์ „๋ฌธ ์šฉ์–ด๋Š” ์‰ฝ๊ฒŒ ํ’€์–ด์„œ ์„ค๋ช…ํ•˜๋˜, ์ •ํ™•์„ฑ์„ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค\n"
                "5. ๋‹ค์–‘ํ•œ ๊ด€์ ์„ ์ œ์‹œํ•˜์—ฌ ๊ท ํ˜•์žกํžŒ ์‹œ๊ฐ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค\n"
                "6. ๋ฐ˜๋“œ์‹œ ์„œ๋กœ ์กด๋Œ“๋ง์„ ์‚ฌ์šฉํ•˜๋ฉฐ, ์ „๋ฌธ์ ์ด๋ฉด์„œ๋„ ์นœ๊ทผํ•œ ํ†ค์„ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค"
            )
        else:
            system_message = (
                "You are an expert podcast scriptwriter who creates high-quality, "
                "professional discussions that make complex topics accessible.\n\n"
                "Key principles:\n"
                "1. The host (Alex) asks insightful questions that drive the conversation\n"
                "2. The expert (Jordan) MUST answer in 2-4 sentences (concept+explanation+example)\n"
                "3. Include specific data, research findings, and real cases\n"
                "4. Explain technical terms clearly while maintaining accuracy\n"
                "5. Present multiple perspectives for balanced views\n"
                "6. Maintain a professional yet approachable tone"
            )

        return [
            {"role": "system", "content": system_message},
            {"role": "user", "content": self._build_prompt(text, language, search_context)}
        ]

    @spaces.GPU(duration=120)
    def extract_conversation_local(self, text: str, language: str = "English", progress=None) -> Dict:
        """Extract conversation using new local LLM with enhanced professional style"""
        try:
            # Build search context (skipped for keyword-based input)
            search_context = ""
            if BRAVE_KEY and not text.startswith("Keyword-based content:"):
                try:
                    keywords = extract_keywords_for_search(text, language)
                    if keywords:
                        search_query = keywords[0] if language == "Korean" else f"{keywords[0]} latest news"
                        search_context = format_search_results(search_query)
                        print(f"Search context added for: {search_query}")
                except Exception as e:
                    print(f"Search failed, continuing without context: {e}")

            # Try the new local LLM first
            self.initialize_local_mode()

            chat_template = self._get_messages_formatter_type(self.config.local_model_name)
            provider = LlamaCppPythonProvider(self.local_llm)

            # Enhanced professional podcast-style system message
            if language == "Korean":
                system_message = (
                    "๋‹น์‹ ์€ ํ•œ๊ตญ์˜ ์œ ๋ช… ํŒŸ์บ์ŠคํŠธ ์ „๋ฌธ ์ž‘๊ฐ€์ž…๋‹ˆ๋‹ค. "
                    "์ฒญ์ทจ์ž๋“ค์ด ๊นŠ์ด ์žˆ๋Š” ์ „๋ฌธ ์ง€์‹์„ ์–ป์„ ์ˆ˜ ์žˆ๋Š” ๊ณ ํ’ˆ์งˆ ๋Œ€๋‹ด์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค.\n\n"
                    "์ž‘์„ฑ ๊ทœ์น™:\n"
                    "1. ์ง„ํ–‰์ž(์ค€์ˆ˜)๋Š” ํ•ต์‹ฌ์„ ์งš๋Š” 1-2๋ฌธ์žฅ ์งˆ๋ฌธ์„ ํ•ฉ๋‹ˆ๋‹ค\n"
                    "2. ์ „๋ฌธ๊ฐ€(๋ฏผํ˜ธ)๋Š” ๋ฐ˜๋“œ์‹œ 2-4๋ฌธ์žฅ์œผ๋กœ ์ถฉ์‹คํžˆ ๋‹ต๋ณ€ํ•ฉ๋‹ˆ๋‹ค:\n"
                    "   - ์ฒซ ๋ฌธ์žฅ: ํ•ต์‹ฌ ๊ฐœ๋… ์„ค๋ช…\n"
                    "   - ๋‘˜์งธ ๋ฌธ์žฅ: ๊ตฌ์ฒด์ ์ธ ์„ค๋ช…์ด๋‚˜ ๋งฅ๋ฝ\n"
                    "   - ์…‹์งธ-๋„ท์งธ ๋ฌธ์žฅ: ์‹ค์ œ ์˜ˆ์‹œ, ๋ฐ์ดํ„ฐ, ํ•จ์˜\n"
                    "3. ํ†ต๊ณ„, ์—ฐ๊ตฌ ๊ฒฐ๊ณผ, ์‹ค์ œ ์‚ฌ๋ก€๋ฅผ ์ ๊ทน ํ™œ์šฉํ•˜์„ธ์š”\n"
                    "4. ์ „๋ฌธ์„ฑ์„ ์œ ์ง€ํ•˜๋ฉด์„œ๋„ ์ดํ•ดํ•˜๊ธฐ ์‰ฝ๊ฒŒ ์„ค๋ช…ํ•˜์„ธ์š”\n"
                    "5. 12-15ํšŒ์˜ ๋Œ€ํ™” ๊ตํ™˜์œผ๋กœ ๊ตฌ์„ฑํ•˜์„ธ์š”\n"
                    "6. JSON ํ˜•์‹์œผ๋กœ๋งŒ ์‘๋‹ตํ•˜์„ธ์š”"
                )
            else:
                system_message = (
                    "You are a professional podcast scriptwriter creating high-quality, "
                    "insightful discussions that provide deep expertise to listeners.\n\n"
                    "Writing rules:\n"
                    "1. Host (Alex) asks focused 1-2 sentence questions\n"
                    "2. Expert (Jordan) MUST answer in 2-4 substantial sentences:\n"
                    "   - First sentence: Core concept explanation\n"
                    "   - Second sentence: Specific details or context\n"
                    "   - Third-fourth sentences: Real examples, data, implications\n"
                    "3. Actively use statistics, research findings, real cases\n"
                    "4. Maintain expertise while keeping explanations accessible\n"
                    "5. Create 12-15 conversation exchanges\n"
                    "6. Respond only in JSON format"
                )

            agent = LlamaCppAgent(
                provider,
                system_prompt=system_message,
                predefined_messages_formatter_type=chat_template,
                debug_output=False
            )

            settings = provider.get_provider_default_settings()
            settings.temperature = 0.75  # Slightly lower for more consistent, professional answers
            settings.top_k = 40
            settings.top_p = 0.95
            settings.max_tokens = self.config.max_tokens  # Use the increased token budget
            settings.repeat_penalty = 1.1
            settings.stream = False

            messages = BasicChatHistory()

            prompt = self._build_prompt(text, language, search_context)
            response = agent.get_chat_response(
                prompt,
                llm_sampling_settings=settings,
                chat_history=messages,
                returns_streaming_generator=False,
                print_output=False
            )

            # Parse the JSON block out of the response
            pattern = r"\{(?:[^{}]|(?:\{[^{}]*\}))*\}"
            json_match = re.search(pattern, response)

            if json_match:
                conversation_data = json.loads(json_match.group())
                # Check the conversation length
                if len(conversation_data["conversation"]) < self.config.min_conversation_turns:
                    print(f"Conversation too short ({len(conversation_data['conversation'])} turns), regenerating...")
                    # Retry logic could be added here
                return conversation_data
            else:
                raise ValueError("No valid JSON found in local LLM response")

        except Exception as e:
            print(f"Local LLM failed: {e}, falling back to legacy local method")
            return self.extract_conversation_legacy_local(text, language, progress, search_context)

    @spaces.GPU(duration=120)
    def extract_conversation_legacy_local(self, text: str, language: str = "English", progress=None, search_context: str = "") -> Dict:
        """Extract conversation using legacy local model with enhanced professional style"""
        try:
            self.initialize_legacy_local_mode()

            # Enhanced professional podcast-style system message
            if language == "Korean":
                system_message = (
                    "๋‹น์‹ ์€ ์ „๋ฌธ ํŒŸ์บ์ŠคํŠธ ์ž‘๊ฐ€์ž…๋‹ˆ๋‹ค. "
                    "์ง„ํ–‰์ž(์ค€์ˆ˜)๋Š” ํ†ต์ฐฐ๋ ฅ ์žˆ๋Š” ์งˆ๋ฌธ์„, ์ „๋ฌธ๊ฐ€(๋ฏผํ˜ธ)๋Š” 2-4๋ฌธ์žฅ์˜ ์ƒ์„ธํ•œ ๋‹ต๋ณ€์„ ํ•ฉ๋‹ˆ๋‹ค. "
                    "๊ตฌ์ฒด์ ์ธ ๋ฐ์ดํ„ฐ์™€ ์‚ฌ๋ก€๋ฅผ ํฌํ•จํ•˜์—ฌ ์ „๋ฌธ์ ์ด๋ฉด์„œ๋„ ์ดํ•ดํ•˜๊ธฐ ์‰ฝ๊ฒŒ ์„ค๋ช…ํ•˜์„ธ์š”. "
                    "12-15ํšŒ ๋Œ€ํ™” ๊ตํ™˜์œผ๋กœ ๊ตฌ์„ฑํ•˜์„ธ์š”."
                )
            else:
                system_message = (
                    "You are a professional podcast scriptwriter. "
                    "Create insightful dialogue where the host (Alex) asks focused questions "
                    "and the expert (Jordan) gives detailed 2-4 sentence answers. "
                    "Include specific data and examples. Create 12-15 exchanges."
                )

            chat = [
                {"role": "system", "content": system_message},
                {"role": "user", "content": self._build_prompt(text, language, search_context)}
            ]

            terminators = [
                self.legacy_tokenizer.eos_token_id,
                self.legacy_tokenizer.convert_tokens_to_ids("<|eot_id|>")
            ]

            messages = self.legacy_tokenizer.apply_chat_template(
                chat, tokenize=False, add_generation_prompt=True
            )
            model_inputs = self.legacy_tokenizer([messages], return_tensors="pt").to(self.device)

            streamer = TextIteratorStreamer(
                self.legacy_tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True
            )

            generate_kwargs = dict(
                model_inputs,
                streamer=streamer,
                max_new_tokens=self.config.max_new_tokens,  # Use the increased token budget
                do_sample=True,
                temperature=0.75,
                eos_token_id=terminators,
            )

            t = Thread(target=self.legacy_local_model.generate, kwargs=generate_kwargs)
            t.start()

            partial_text = ""
            for new_text in streamer:
                partial_text += new_text

            pattern = r"\{(?:[^{}]|(?:\{[^{}]*\}))*\}"
            json_match = re.search(pattern, partial_text)

            if json_match:
                return json.loads(json_match.group())
            else:
                raise ValueError("No valid JSON found in legacy local response")

        except Exception as e:
            print(f"Legacy local model also failed: {e}")
            # Return enhanced default template
            if language == "Korean":
                return self._get_default_korean_conversation()
            else:
                return self._get_default_english_conversation()

    def _get_default_korean_conversation(self) -> Dict:
        """Return a more professional default Korean conversation template"""
        return {
            "conversation": [
                {"speaker": "์ค€์ˆ˜", "text": "์•ˆ๋…•ํ•˜์„ธ์š”, ์—ฌ๋Ÿฌ๋ถ„! ์˜ค๋Š˜์€ ์ •๋ง ์ค‘์š”ํ•˜๊ณ  ํฅ๋ฏธ๋กœ์šด ์ฃผ์ œ๋ฅผ ๋‹ค๋ค„๋ณด๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ๋ฏผํ˜ธ ๋ฐ•์‚ฌ๋‹˜, ๋จผ์ € ์ด ์ฃผ์ œ๊ฐ€ ์™œ ์ง€๊ธˆ ์ด๋ ‡๊ฒŒ ์ฃผ๋ชฉ๋ฐ›๊ณ  ์žˆ๋Š”์ง€ ์„ค๋ช…ํ•ด์ฃผ์‹œ๊ฒ ์–ด์š”?"},
                {"speaker": "๋ฏผํ˜ธ", "text": "๋„ค, ์•ˆ๋…•ํ•˜์„ธ์š”. ์ตœ๊ทผ ์ด ๋ถ„์•ผ์—์„œ ํš๊ธฐ์ ์ธ ๋ฐœ์ „์ด ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. ํŠนํžˆ ์ž‘๋…„ MIT ์—ฐ๊ตฌํŒ€์˜ ๋ฐœํ‘œ์— ๋”ฐ๋ฅด๋ฉด, ์ด ๊ธฐ์ˆ ์˜ ํšจ์œจ์„ฑ์ด ๊ธฐ์กด ๋Œ€๋น„ 300% ํ–ฅ์ƒ๋˜์—ˆ๋‹ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋‹จ์ˆœํ•œ ๊ธฐ์ˆ ์  ์ง„๋ณด๋ฅผ ๋„˜์–ด์„œ ์šฐ๋ฆฌ ์ผ์ƒ์ƒํ™œ์— ์ง์ ‘์ ์ธ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ๋Š” ๋ณ€ํ™”์ธ๋ฐ์š”. ์‹ค์ œ๋กœ ๊ตฌ๊ธ€๊ณผ ๋งˆ์ดํฌ๋กœ์†Œํ”„ํŠธ ๊ฐ™์€ ๋น…ํ…Œํฌ ๊ธฐ์—…๋“ค์ด ์ด๋ฏธ ์ˆ˜์‹ญ์–ต ๋‹ฌ๋Ÿฌ๋ฅผ ํˆฌ์žํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค."},
                {"speaker": "์ค€์ˆ˜", "text": "์™€, 300% ํ–ฅ์ƒ์ด๋ผ๋‹ˆ ์ •๋ง ๋†€๋ผ์šด๋ฐ์š”. ๊ทธ๋ ‡๋‹ค๋ฉด ์ด๋Ÿฐ ๊ธฐ์ˆ  ๋ฐœ์ „์ด ์ผ๋ฐ˜์ธ๋“ค์—๊ฒŒ๋Š” ๊ตฌ์ฒด์ ์œผ๋กœ ์–ด๋–ค ํ˜œํƒ์„ ๊ฐ€์ ธ๋‹ค์ค„ ์ˆ˜ ์žˆ์„๊นŒ์š”?"},
                {"speaker": "๋ฏผํ˜ธ", "text": "๊ฐ€์žฅ ์ง์ ‘์ ์ธ ํ˜œํƒ์€ ๋น„์šฉ ์ ˆ๊ฐ๊ณผ ์ ‘๊ทผ์„ฑ ํ–ฅ์ƒ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ด์ „์—๋Š” ์ „๋ฌธ๊ฐ€๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋˜ ๊ณ ๊ธ‰ ๊ธฐ๋Šฅ๋“ค์ด ์ด์ œ๋Š” ์Šค๋งˆํŠธํฐ ์•ฑ์œผ๋กœ๋„ ๊ตฌํ˜„ ๊ฐ€๋Šฅํ•ด์กŒ์Šต๋‹ˆ๋‹ค. ๋งฅํ‚จ์ง€ ๋ณด๊ณ ์„œ์— ๋”ฐ๋ฅด๋ฉด, 2025๋…„๊นŒ์ง€ ์ด ๊ธฐ์ˆ ๋กœ ์ธํ•ด ์ „ ์„ธ๊ณ„์ ์œผ๋กœ ์•ฝ 2์กฐ ๋‹ฌ๋Ÿฌ์˜ ๊ฒฝ์ œ์  ๊ฐ€์น˜๊ฐ€ ์ฐฝ์ถœ๋  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ํŠนํžˆ ์˜๋ฃŒ, ๊ต์œก, ๊ธˆ์œต ๋ถ„์•ผ์—์„œ ํ˜์‹ ์ ์ธ ๋ณ€ํ™”๊ฐ€ ์ผ์–ด๋‚  ๊ฒƒ์œผ๋กœ ๋ณด์ž…๋‹ˆ๋‹ค."},
                {"speaker": "์ค€์ˆ˜", "text": "2์กฐ ๋‹ฌ๋Ÿฌ๋ผ๋Š” ์—„์ฒญ๋‚œ ๊ทœ๋ชจ๋„ค์š”. ์˜๋ฃŒ ๋ถ„์•ผ์—์„œ๋Š” ์–ด๋–ค ๋ณ€ํ™”๊ฐ€ ์˜ˆ์ƒ๋˜๋‚˜์š”?"},
                {"speaker": "๋ฏผํ˜ธ", "text": "์˜๋ฃŒ ๋ถ„์•ผ์˜ ๋ณ€ํ™”๋Š” ์ •๋ง ํ˜๋ช…์ ์ผ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ ์Šคํƒ ํฌ๋“œ ๋Œ€ํ•™๋ณ‘์›์—์„œ๋Š” ์ด ๊ธฐ์ˆ ์„ ํ™œ์šฉํ•ด ์•” ์ง„๋‹จ ์ •ํ™•๋„๋ฅผ 95%๊นŒ์ง€ ๋†’์˜€์Šต๋‹ˆ๋‹ค. ๊ธฐ์กด์—๋Š” ์ˆ™๋ จ๋œ ์˜์‚ฌ๋„ ๋†“์น  ์ˆ˜ ์žˆ๋˜ ๋ฏธ์„ธํ•œ ๋ณ‘๋ณ€๋“ค์„ AI๊ฐ€ ๊ฐ์ง€ํ•ด๋‚ด๋Š” ๊ฒƒ์ด์ฃ . ๋” ๋†€๋ผ์šด ๊ฒƒ์€ ์ด๋Ÿฐ ์ง„๋‹จ์ด ๋‹จ ๋ช‡ ๋ถ„ ๋งŒ์— ์ด๋ค„์ง„๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. WHO ์ถ”์‚ฐ์œผ๋กœ๋Š” ์ด ๊ธฐ์ˆ ์ด ์ „ ์„ธ๊ณ„์ ์œผ๋กœ ๋ณด๊ธ‰๋˜๋ฉด ์—ฐ๊ฐ„ ์ˆ˜๋ฐฑ๋งŒ ๋ช…์˜ ์ƒ๋ช…์„ ๊ตฌํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ธกํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค."},
                {"speaker": "์ค€์ˆ˜", "text": "์ •๋ง ์ธ์ƒ์ ์ด๋„ค์š”. ํ•˜์ง€๋งŒ ์ด๋Ÿฐ ๊ธ‰๊ฒฉํ•œ ๊ธฐ์ˆ  ๋ฐœ์ „์— ๋Œ€ํ•œ ์šฐ๋ ค์˜ ๋ชฉ์†Œ๋ฆฌ๋„ ์žˆ์„ ๊ฒƒ ๊ฐ™์€๋ฐ์š”?"},
                {"speaker": "๋ฏผํ˜ธ", "text": "๋งž์Šต๋‹ˆ๋‹ค. ์ฃผ์š” ์šฐ๋ ค์‚ฌํ•ญ์€ ํฌ๊ฒŒ ์„ธ ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค. ์ฒซ์งธ๋Š” ์ผ์ž๋ฆฌ ๋Œ€์ฒด ๋ฌธ์ œ๋กœ, ์˜ฅ์Šคํฌ๋“œ ๋Œ€ํ•™ ์—ฐ๊ตฌ์— ๋”ฐ๋ฅด๋ฉด ํ–ฅํ›„ 20๋…„ ๋‚ด์— ํ˜„์žฌ ์ง์—…์˜ 47%๊ฐ€ ์ž๋™ํ™”๋  ์œ„ํ—˜์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋‘˜์งธ๋Š” ํ”„๋ผ์ด๋ฒ„์‹œ์™€ ๋ณด์•ˆ ๋ฌธ์ œ์ž…๋‹ˆ๋‹ค. ์…‹์งธ๋Š” ๊ธฐ์ˆ  ๊ฒฉ์ฐจ๋กœ ์ธํ•œ ๋ถˆํ‰๋“ฑ ์‹ฌํ™”์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์—ญ์‚ฌ์ ์œผ๋กœ ๋ณด๋ฉด ์ƒˆ๋กœ์šด ๊ธฐ์ˆ ์€ ํ•ญ์ƒ ์ƒˆ๋กœ์šด ๊ธฐํšŒ๋„ ํ•จ๊ป˜ ๋งŒ๋“ค์–ด์™”๊ธฐ ๋•Œ๋ฌธ์—, ์ ์ ˆํ•œ ์ •์ฑ…๊ณผ ๊ต์œก์œผ๋กœ ์ด๋Ÿฐ ๋ฌธ์ œ๋“ค์„ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์œผ๋กœ ๋ด…๋‹ˆ๋‹ค."},
                {"speaker": "์ค€์ˆ˜", "text": "๊ท ํ˜•์žกํžŒ ์‹œ๊ฐ์ด ์ค‘์š”ํ•˜๊ฒ ๋„ค์š”. ๊ทธ๋ ‡๋‹ค๋ฉด ์šฐ๋ฆฌ๊ฐ€ ์ด๋Ÿฐ ๋ณ€ํ™”์— ์–ด๋–ป๊ฒŒ ๋Œ€๋น„ํ•ด์•ผ ํ• ๊นŒ์š”?"},
                {"speaker": "๋ฏผํ˜ธ", "text": "๊ฐ€์žฅ ์ค‘์š”ํ•œ ๊ฒƒ์€ ์ง€์†์ ์ธ ํ•™์Šต๊ณผ ์ ์‘๋ ฅ์ž…๋‹ˆ๋‹ค. ์„ธ๊ณ„๊ฒฝ์ œํฌ๋Ÿผ์€ 2025๋…„๊นŒ์ง€ ์ „ ์„ธ๊ณ„ ๊ทผ๋กœ์ž์˜ 50%๊ฐ€ ์žฌ๊ต์œก์ด ํ•„์š”ํ•  ๊ฒƒ์œผ๋กœ ์˜ˆ์ธกํ–ˆ์Šต๋‹ˆ๋‹ค. ํŠนํžˆ ๋””์ง€ํ„ธ ๋ฆฌํ„ฐ๋Ÿฌ์‹œ, ๋น„ํŒ์  ์‚ฌ๊ณ ๋ ฅ, ์ฐฝ์˜์„ฑ ๊ฐ™์€ ๋Šฅ๋ ฅ์ด ์ค‘์š”ํ•ด์งˆ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ฐœ์ธ์ ์œผ๋กœ๋Š” ์˜จ๋ผ์ธ ๊ต์œก ํ”Œ๋žซํผ์„ ํ™œ์šฉํ•œ ์ž๊ธฐ๊ณ„๋ฐœ์„ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด Coursera๋‚˜ edX ๊ฐ™์€ ํ”Œ๋žซํผ์—์„œ๋Š” ์„ธ๊ณ„ ์ตœ๊ณ  ๋Œ€ํ•™์˜ ๊ฐ•์˜๋ฅผ ๋ฌด๋ฃŒ๋กœ ๋“ค์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค."},
                {"speaker": "์ค€์ˆ˜", "text": "์‹ค์šฉ์ ์ธ ์กฐ์–ธ ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ ์ด ๋ถ„์•ผ์˜ ๋ฏธ๋ž˜ ์ „๋ง์€ ์–ด๋–ป๊ฒŒ ๋ณด์‹œ๋‚˜์š”?"},
                {"speaker": "๋ฏผํ˜ธ", "text": "ํ–ฅํ›„ 10๋…„์€ ์ธ๋ฅ˜ ์—ญ์‚ฌ์ƒ ๊ฐ€์žฅ ๊ธ‰๊ฒฉํ•œ ๊ธฐ์ˆ  ๋ฐœ์ „์„ ๊ฒฝํ—˜ํ•˜๋Š” ์‹œ๊ธฐ๊ฐ€ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ฐ€ํŠธ๋„ˆ์˜ ํ•˜์ดํ”„ ์‚ฌ์ดํด ๋ถ„์„์— ๋”ฐ๋ฅด๋ฉด, ํ˜„์žฌ ์šฐ๋ฆฌ๋Š” ์ด ๊ธฐ์ˆ ์˜ ์ดˆ๊ธฐ ๋‹จ๊ณ„์— ๋ถˆ๊ณผํ•ฉ๋‹ˆ๋‹ค. 2030๋…„๊นŒ์ง€๋Š” ์ง€๊ธˆ์œผ๋กœ์„œ๋Š” ์ƒ์ƒํ•˜๊ธฐ ์–ด๋ ค์šด ์ˆ˜์ค€์˜ ํ˜์‹ ์ด ์ผ์–ด๋‚  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ์ค‘์š”ํ•œ ๊ฒƒ์€ ์ด๋Ÿฐ ๋ณ€ํ™”๋ฅผ ๋‘๋ ค์›Œํ•˜๊ธฐ๋ณด๋‹ค๋Š” ๊ธฐํšŒ๋กœ ์‚ผ์•„ ๋” ๋‚˜์€ ๋ฏธ๋ž˜๋ฅผ ๋งŒ๋“ค์–ด๊ฐ€๋Š” ๊ฒƒ์ด๋ผ๊ณ  ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค."},
                {"speaker": "์ค€์ˆ˜", "text": "์ •๋ง ํ†ต์ฐฐ๋ ฅ ์žˆ๋Š” ๋ง์”€์ด๋„ค์š”. ์˜ค๋Š˜ ๋„ˆ๋ฌด๋‚˜ ์œ ์ตํ•œ ์‹œ๊ฐ„์ด์—ˆ์Šต๋‹ˆ๋‹ค. ์ฒญ์ทจ์ž ์—ฌ๋Ÿฌ๋ถ„๋„ ์˜ค๋Š˜ ๋…ผ์˜๋œ ๋‚ด์šฉ์„ ๋ฐ”ํƒ•์œผ๋กœ ๋ฏธ๋ž˜๋ฅผ ์ค€๋น„ํ•˜์‹œ๊ธธ ๋ฐ”๋ž๋‹ˆ๋‹ค. ๋ฏผํ˜ธ ๋ฐ•์‚ฌ๋‹˜, ๊ท€์ค‘ํ•œ ์‹œ๊ฐ„ ๋‚ด์ฃผ์…”์„œ ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค!"},
                {"speaker": "๋ฏผํ˜ธ", "text": "๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ์ฒญ์ทจ์ž ์—ฌ๋Ÿฌ๋ถ„๋“ค์ด ์ด ๋ณ€ํ™”์˜ ์‹œ๋Œ€๋ฅผ ํ˜„๋ช…ํ•˜๊ฒŒ ํ—ค์ณ๋‚˜๊ฐ€์‹œ๊ธธ ๋ฐ”๋ž๋‹ˆ๋‹ค. ๊ธฐ์ˆ ์€ ๋„๊ตฌ์ผ ๋ฟ์ด๊ณ , ๊ทธ๊ฒƒ์„ ์–ด๋–ป๊ฒŒ ํ™œ์šฉํ•˜๋Š”์ง€๋Š” ์šฐ๋ฆฌ์—๊ฒŒ ๋‹ฌ๋ ค์žˆ๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•ด์ฃผ์„ธ์š”. ์˜ค๋Š˜ ๋ง์”€๋“œ๋ฆฐ ๋‚ด์šฉ์— ๋Œ€ํ•ด ๋” ๊ถ๊ธˆํ•˜์‹  ์ ์ด ์žˆ์œผ์‹œ๋ฉด ์ œ๊ฐ€ ์šด์˜ํ•˜๋Š” ๋ธ”๋กœ๊ทธ๋‚˜ ์ตœ๊ทผ ์ถœ๊ฐ„ํ•œ ์ฑ…์—์„œ ๋” ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ์ฐพ์œผ์‹ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค."}
            ]
        }

+ def _get_default_english_conversation(self) -> Dict:
2486
+ """Enhanced professional English conversation template"""
2487
+ return {
2488
+ "conversation": [
2489
+ {"speaker": "Alex", "text": "Welcome everyone to our podcast! Today we're diving into a topic that's reshaping our world. Dr. Jordan, could you start by explaining why this subject has become so critical right now?"},
2490
+ {"speaker": "Jordan", "text": "Thanks, Alex. We're witnessing an unprecedented convergence of technological breakthroughs. According to a recent Nature publication, advances in this field have accelerated by 400% in just the past two years. This isn't just incremental progress - it's a fundamental shift in how we approach problem-solving. Major institutions like Harvard and Stanford are completely restructuring their research programs to focus on this area, with combined investments exceeding $5 billion annually."},
2491
+ {"speaker": "Alex", "text": "400% acceleration is staggering! What does this mean for everyday people who might not be tech-savvy?"},
2492
+ {"speaker": "Jordan", "text": "The impact will be profound yet accessible. Think about how smartphones revolutionized communication - this will be similar but across every aspect of life. McKinsey's latest report projects that by 2026, these technologies will create $4.4 trillion in annual value globally. For individuals, this translates to personalized healthcare that can predict illnesses years in advance, educational systems that adapt to each student's learning style, and financial tools that democratize wealth-building strategies previously available only to the ultra-wealthy."},
2493
+ {"speaker": "Alex", "text": "Those applications sound transformative. Can you give us a concrete example of how this is already being implemented?"},
2494
+ {"speaker": "Jordan", "text": "Absolutely. Let me share a compelling case from Johns Hopkins Hospital. They've deployed an AI system that analyzes patient data in real-time, reducing diagnostic errors by 85% and cutting average diagnosis time from days to hours. In one documented case, the system identified a rare genetic disorder in a child that had been misdiagnosed for three years. The accuracy comes from analyzing patterns across millions of cases - something impossible for even the most experienced doctors to do manually. This technology is now being rolled out to rural hospitals, bringing world-class diagnostic capabilities to underserved communities."},
2495
+ {"speaker": "Alex", "text": "That's truly life-changing technology. But I imagine there are significant challenges and risks we need to consider?"},
2496
+ {"speaker": "Jordan", "text": "You're absolutely right to raise this. The challenges are as significant as the opportunities. The World Economic Forum identifies three critical risks: First, algorithmic bias could perpetuate or amplify existing inequalities if not carefully managed. Second, cybersecurity threats become exponentially more dangerous when AI systems control critical infrastructure. Third, there's the socioeconomic disruption - PwC estimates that 30% of jobs could be automated by 2030. However, history shows us that technological revolutions create new opportunities even as they displace old ones. The key is proactive adaptation and responsible development."},
+                 {"speaker": "Alex", "text": "How should individuals and organizations prepare for these changes?"},
+                 {"speaker": "Jordan", "text": "Preparation requires a multi-faceted approach. For individuals, I recommend focusing on skills that complement rather than compete with AI: critical thinking, emotional intelligence, and creative problem-solving. MIT's recent study shows that professionals who combine domain expertise with AI literacy see salary increases of 40% on average. Organizations need to invest in continuous learning programs - Amazon's $700 million worker retraining initiative is a good model. Most importantly, we need to cultivate an adaptive mindset. The half-life of specific technical skills is shrinking, but the ability to learn and unlearn quickly is becoming invaluable."},
+                 {"speaker": "Alex", "text": "That's practical advice. What about the ethical considerations? How do we ensure this technology benefits humanity as a whole?"},
+                 {"speaker": "Jordan", "text": "Ethics must be at the forefront of development. The EU's AI Act and similar regulations worldwide are establishing important guardrails. We need transparent AI systems where decisions can be explained and audited. Companies like IBM and Google have established AI ethics boards, but we need industry-wide standards. Additionally, we must address the digital divide - UNESCO reports that 37% of the global population still lacks internet access. Without inclusive development, these technologies could exacerbate global inequality rather than reduce it. The solution requires collaboration between technologists, ethicists, policymakers, and communities."},
+                 {"speaker": "Alex", "text": "Looking ahead, what's your vision for how this technology will shape the next decade?"},
+                 {"speaker": "Jordan", "text": "The next decade will be transformative beyond our current imagination. Ray Kurzweil's prediction of technological singularity by 2045 seems increasingly plausible. By 2035, I expect we'll see autonomous systems managing entire cities, personalized medicine extending human lifespan by 20-30 years, and educational AI that makes world-class education universally accessible. The convergence of AI with quantum computing, biotechnology, and nanotechnology will unlock possibilities we can barely conceive of today. However, the future isn't predetermined - it's shaped by the choices we make now about development priorities, ethical frameworks, and inclusive access."},
+                 {"speaker": "Alex", "text": "That's both exciting and sobering. Any final thoughts for our listeners?"},
+                 {"speaker": "Jordan", "text": "I'd encourage everyone to view this as humanity's next great adventure. Yes, there are risks and challenges, but we're also on the cusp of solving problems that have plagued us for millennia - disease, poverty, environmental degradation. The key is engaged participation rather than passive observation. Stay informed through reliable sources, experiment with new technologies, and most importantly, contribute to the conversation about what kind of future we want to build. The decisions we make in the next five years will reverberate for generations."},
+                 {"speaker": "Alex", "text": "Dr. Jordan, this has been an incredibly enlightening discussion. Thank you for sharing your expertise and insights with us today."},
+                 {"speaker": "Jordan", "text": "Thank you, Alex. It's been a pleasure discussing these crucial topics. For listeners wanting to dive deeper, I've compiled additional resources on my website, including links to the studies we discussed today. Remember, the future isn't something that happens to us - it's something we create together. I look forward to seeing how each of you contributes to shaping this exciting new era."}
+             ]
+         }
+
+     def extract_conversation_api(self, text: str, language: str = "English") -> Dict:
+         """Extract conversation using API with enhanced professional style"""
+         if not self.llm_client:
+             raise RuntimeError("API mode not initialized")
+
+         try:
+             # Build search context
+             search_context = ""
+             if BRAVE_KEY and not text.startswith("Keyword-based content:"):
+                 try:
+                     keywords = extract_keywords_for_search(text, language)
+                     if keywords:
+                         search_query = keywords[0] if language == "Korean" else f"{keywords[0]} latest news"
+                         search_context = format_search_results(search_query)
+                         print(f"Search context added for: {search_query}")
+                 except Exception as e:
+                     print(f"Search failed, continuing without context: {e}")
+
+             # Enhanced professional podcast style prompt
+             if language == "Korean":
+                 system_message = (
+                     "๋‹น์‹ ์€ ํ•œ๊ตญ์˜ ์ตœ๊ณ  ์ „๋ฌธ ํŒŸ์บ์ŠคํŠธ ์ž‘๊ฐ€์ž…๋‹ˆ๋‹ค. "
+                     "์ฒญ์ทจ์ž๋“ค์ด ๊นŠ์ด ์žˆ๋Š” ์ธ์‚ฌ์ดํŠธ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ๋Š” ๊ณ ํ’ˆ์งˆ ๋Œ€๋‹ด์„ ๋งŒ๋“œ์„ธ์š”.\n"
+                     "์ค€์ˆ˜(์ง„ํ–‰์ž)๋Š” ํ•ต์‹ฌ์„ ์งš๋Š” 1-2๋ฌธ์žฅ ์งˆ๋ฌธ์„ ํ•˜๊ณ , "
+                     "๋ฏผํ˜ธ(์ „๋ฌธ๊ฐ€)๋Š” ๋ฐ˜๋“œ์‹œ 2-4๋ฌธ์žฅ์œผ๋กœ ์ƒ์„ธํžˆ ๋‹ต๋ณ€ํ•ฉ๋‹ˆ๋‹ค. "
+                     "๊ตฌ์ฒด์ ์ธ ๋ฐ์ดํ„ฐ, ์—ฐ๊ตฌ ๊ฒฐ๊ณผ, ์‹ค์ œ ์‚ฌ๋ก€๋ฅผ ํฌํ•จํ•˜์„ธ์š”. "
+                     "์ „๋ฌธ ์šฉ์–ด๋Š” ์‰ฝ๊ฒŒ ์„ค๋ช…ํ•˜๊ณ , ๋ฐ˜๋“œ์‹œ ์„œ๋กœ ์กด๋Œ“๋ง์„ ์‚ฌ์šฉํ•˜์„ธ์š”. "
+                     "12-15ํšŒ์˜ ๊นŠ์ด ์žˆ๋Š” ๋Œ€ํ™” ๊ตํ™˜์œผ๋กœ ๊ตฌ์„ฑํ•˜์„ธ์š”."
+                 )
+             else:
+                 system_message = (
+                     "You are a top professional podcast scriptwriter. "
+                     "Create high-quality discussions that provide deep insights to listeners. "
+                     "Alex (host) asks focused 1-2 sentence questions, "
+                     "while Jordan (expert) MUST answer in 2-4 detailed sentences. "
+                     "Include specific data, research findings, and real cases. "
+                     "Explain technical terms clearly. "
+                     "Create 12-15 insightful conversation exchanges."
+                 )
+
+             chat_completion = self.llm_client.chat.completions.create(
+                 messages=[
+                     {"role": "system", "content": system_message},
+                     {"role": "user", "content": self._build_prompt(text, language, search_context)}
+                 ],
+                 model=self.config.api_model_name,
+                 temperature=0.75,
            )
+
+             pattern = r"\{(?:[^{}]|(?:\{[^{}]*\}))*\}"
+             json_match = re.search(pattern, chat_completion.choices[0].message.content)
+
+             if not json_match:
+                 raise ValueError("No valid JSON found in response")
+
+             return json.loads(json_match.group())
+         except Exception as e:
+             raise RuntimeError(f"Failed to extract conversation: {e}")
+
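The brace-matching regex used above tolerates one level of nested objects, which is enough for the `{"conversation": [...]}` shape where each turn is a flat dict. A self-contained sketch with an illustrative model response:

```python
import json
import re

# Same pattern as in extract_conversation_api: matches a JSON object
# whose nested braces go at most one level deep (each turn dict).
pattern = r"\{(?:[^{}]|(?:\{[^{}]*\}))*\}"

# Illustrative LLM response that wraps the JSON in prose
response = 'Here is the script:\n{"conversation": [{"speaker": "Alex", "text": "Hi"}]}'
json_match = re.search(pattern, response)
data = json.loads(json_match.group())
print(data["conversation"][0]["speaker"])  # -> Alex
```

Deeper nesting (an object inside a turn dict) would break the match, but the prompt only ever asks for the flat two-key turn format.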
+     def parse_conversation_text(self, conversation_text: str) -> Dict:
+         """Parse conversation text back to JSON format"""
+         lines = conversation_text.strip().split('\n')
+         conversation_data = {"conversation": []}
+
+         for line in lines:
+             if ':' in line:
+                 speaker, text = line.split(':', 1)
+                 conversation_data["conversation"].append({
+                     "speaker": speaker.strip(),
+                     "text": text.strip()
+                 })
+
+         return conversation_data
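The 'Speaker: text' round-trip the parser expects can be checked in isolation (sample lines are illustrative); note that `split(':', 1)` keeps any colons inside the utterance intact:

```python
# Standalone version of the parsing rule used by parse_conversation_text
def parse_conversation_text(conversation_text):
    conversation_data = {"conversation": []}
    for line in conversation_text.strip().split("\n"):
        if ":" in line:
            speaker, text = line.split(":", 1)
            conversation_data["conversation"].append(
                {"speaker": speaker.strip(), "text": text.strip()}
            )
    return conversation_data

parsed = parse_conversation_text("Alex: Welcome back.\nJordan: Note: stay curious.")
print(parsed["conversation"][1]["text"])  # -> Note: stay curious.
```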
+
+     async def text_to_speech_edge(self, conversation_json: Dict, language: str = "English") -> Tuple[str, str]:
+         """Convert text to speech using Edge TTS"""
+         output_dir = Path(self._create_output_directory())
+         filenames = []
+
+         try:
+             # Per-language voice settings - Korean uses two male voices
+             if language == "Korean":
+                 voices = [
+                     "ko-KR-HyunsuNeural",  # Male voice 1 (calm and trustworthy)
+                     "ko-KR-InJoonNeural"   # Male voice 2 (lively and friendly)
+                 ]
+             else:
+                 voices = [
+                     "en-US-AndrewMultilingualNeural",  # Male voice 1
+                     "en-US-BrianMultilingualNeural"    # Male voice 2
+                 ]
+
+             for i, turn in enumerate(conversation_json["conversation"]):
+                 filename = output_dir / f"output_{i}.wav"
+                 voice = voices[i % len(voices)]
+
+                 tmp_path = await self._generate_audio_edge(turn["text"], voice)
+                 os.rename(tmp_path, filename)
+                 filenames.append(str(filename))
+
+             # Combine audio files
+             final_output = os.path.join(output_dir, "combined_output.wav")
+             self._combine_audio_files(filenames, final_output)

+             # Generate conversation text
+             conversation_text = "\n".join(
+                 f"{turn.get('speaker', f'Speaker {i+1}')}: {turn['text']}"
+                 for i, turn in enumerate(conversation_json["conversation"])
            )

+             return final_output, conversation_text
+         except Exception as e:
+             raise RuntimeError(f"Failed to convert text to speech: {e}")
+
+     async def _generate_audio_edge(self, text: str, voice: str) -> str:
+         """Generate audio using Edge TTS"""
+         if not text.strip():
+             raise ValueError("Text cannot be empty")

+         voice_short_name = voice.split(" - ")[0] if " - " in voice else voice
+         communicate = edge_tts.Communicate(text, voice_short_name)
+
+         with tempfile.NamedTemporaryFile(delete=False, suffix=".wav") as tmp_file:
+             tmp_path = tmp_file.name
+             await communicate.save(tmp_path)
+
+         return tmp_path
+
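Turns are assigned to the two voices by index parity (`voices[i % len(voices)]`), so the host and expert alternate automatically without any explicit speaker-to-voice mapping. A minimal illustration without edge-tts:

```python
voices = [
    "en-US-AndrewMultilingualNeural",  # even-indexed turns (host)
    "en-US-BrianMultilingualNeural",   # odd-indexed turns (expert)
]
num_turns = 5  # illustrative turn count
assignment = [voices[i % len(voices)] for i in range(num_turns)]
print(assignment[4])  # -> en-US-AndrewMultilingualNeural
```

This only works because the script strictly alternates speakers; if two consecutive turns ever belonged to the same speaker, the voice would flip mid-speaker.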
+     @spaces.GPU(duration=60)
+     def text_to_speech_spark(self, conversation_json: Dict, language: str = "English", progress=None) -> Tuple[str, str]:
+         """Convert text to speech using Spark TTS CLI"""
+         if not SPARK_AVAILABLE or not self.spark_model_dir:
+             raise RuntimeError("Spark TTS not available")
+
+         try:
+             output_dir = self._create_output_directory()
+             audio_files = []

+             # Create different voice characteristics for different speakers
+             if language == "Korean":
+                 voice_configs = [
+                     {"prompt_text": "์•ˆ๋…•ํ•˜์„ธ์š”, ์˜ค๋Š˜ ํŒŸ์บ์ŠคํŠธ ์ง„ํ–‰์„ ๋งก์€ ์ค€์ˆ˜์ž…๋‹ˆ๋‹ค. ์—ฌ๋Ÿฌ๋ถ„๊ณผ ํ•จ๊ป˜ ํฅ๋ฏธ๋กœ์šด ์ด์•ผ๊ธฐ๋ฅผ ๋‚˜๋ˆ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.", "gender": "male"},
+                     {"prompt_text": "์•ˆ๋…•ํ•˜์„ธ์š”, ์ €๋Š” ์˜ค๋Š˜ ์ด ์ฃผ์ œ์— ๋Œ€ํ•ด ์„ค๋ช…๋“œ๋ฆด ๋ฏผํ˜ธ์ž…๋‹ˆ๋‹ค. ์‰ฝ๊ณ  ์žฌ๋ฏธ์žˆ๊ฒŒ ์„ค๋ช…๋“œ๋ฆด๊ฒŒ์š”.", "gender": "male"}
+                 ]
+             else:
+                 voice_configs = [
+                     {"prompt_text": "Hello everyone, I'm Alex, your host for today's podcast. Let's explore this fascinating topic together.", "gender": "male"},
+                     {"prompt_text": "Hi, I'm Jordan. I'm excited to share my insights on this subject with you all today.", "gender": "male"}
+                 ]
+
+             for i, turn in enumerate(conversation_json["conversation"]):
+                 text = turn["text"]
+                 if not text.strip():
+                     continue
+
+                 # Use different voice config for each speaker
+                 voice_config = voice_configs[i % len(voice_configs)]
+
+                 output_file = os.path.join(output_dir, f"spark_output_{i}.wav")
+
+                 # Run Spark TTS CLI inference
+                 cmd = [
+                     "python", "-m", "cli.inference",
+                     "--text", text,
+                     "--device", "0" if torch.cuda.is_available() else "cpu",
+                     "--save_dir", output_dir,
+                     "--model_dir", self.spark_model_dir,
+                     "--prompt_text", voice_config["prompt_text"],
+                     "--output_name", f"spark_output_{i}.wav"
+                 ]
+
+                 try:
+                     # Run the command
+                     result = subprocess.run(
+                         cmd,
+                         capture_output=True,
+                         text=True,
+                         timeout=60,
+                         cwd="."  # Make sure we're in the right directory
+                     )
+
+                     if result.returncode == 0:
+                         audio_files.append(output_file)
+                     else:
+                         print(f"Spark TTS error for turn {i}: {result.stderr}")
+                         # Create a short silence as fallback
+                         silence = np.zeros(int(22050 * 1.0))  # 1 second of silence
+                         sf.write(output_file, silence, 22050)
+                         audio_files.append(output_file)
+
+                 except subprocess.TimeoutExpired:
+                     print(f"Spark TTS timeout for turn {i}")
+                     # Create silence as fallback
+                     silence = np.zeros(int(22050 * 1.0))
+                     sf.write(output_file, silence, 22050)
+                     audio_files.append(output_file)
+                 except Exception as e:
+                     print(f"Error running Spark TTS for turn {i}: {e}")
+                     # Create silence as fallback
+                     silence = np.zeros(int(22050 * 1.0))
+                     sf.write(output_file, silence, 22050)
+                     audio_files.append(output_file)
+
+             # Combine all audio files
+             if audio_files:
+                 final_output = os.path.join(output_dir, "spark_combined.wav")
+                 self._combine_audio_files(audio_files, final_output)
+             else:
+                 raise RuntimeError("No audio files generated")
+
+             # Generate conversation text
+             conversation_text = "\n".join(
+                 f"{turn.get('speaker', f'Speaker {i+1}')}: {turn['text']}"
+                 for i, turn in enumerate(conversation_json["conversation"])
            )

+             return final_output, conversation_text
+
+         except Exception as e:
+             raise RuntimeError(f"Failed to convert text to speech with Spark TTS: {e}")
+
+     @spaces.GPU(duration=60)
+     def text_to_speech_melo(self, conversation_json: Dict, progress=None) -> Tuple[str, str]:
+         """Convert text to speech using MeloTTS"""
+         if not MELO_AVAILABLE or not self.melo_models:
+             raise RuntimeError("MeloTTS not available")
+
+         speakers = ["EN-Default", "EN-US"]
+         combined_audio = AudioSegment.empty()
+
+         for i, turn in enumerate(conversation_json["conversation"]):
+             bio = io.BytesIO()
+             text = turn["text"]
+             speaker = speakers[i % 2]
+             speaker_id = self.melo_models["EN"].hps.data.spk2id[speaker]
+
+             # Generate audio
+             self.melo_models["EN"].tts_to_file(
+                 text, speaker_id, bio, speed=1.0,
+                 pbar=progress.tqdm if progress else None,
+                 format="wav"
            )

+             bio.seek(0)
+             audio_segment = AudioSegment.from_file(bio, format="wav")
+             combined_audio += audio_segment
+
+         # Save final audio
+         final_audio_path = "melo_podcast.mp3"
+         combined_audio.export(final_audio_path, format="mp3")
+
+         # Generate conversation text
+         conversation_text = "\n".join(
+             f"{turn.get('speaker', f'Speaker {i+1}')}: {turn['text']}"
+             for i, turn in enumerate(conversation_json["conversation"])
+         )
+
+         return final_audio_path, conversation_text
+
+     def _create_output_directory(self) -> str:
+         """Create a unique output directory"""
+         random_bytes = os.urandom(8)
+         folder_name = base64.urlsafe_b64encode(random_bytes).decode("utf-8")
+         os.makedirs(folder_name, exist_ok=True)
+         return folder_name
+
+     def _combine_audio_files(self, filenames: List[str], output_file: str) -> None:
+         """Combine multiple audio files into one"""
+         if not filenames:
+             raise ValueError("No input files provided")
+
+         try:
+             audio_segments = []
+             for filename in filenames:
+                 if os.path.exists(filename):
+                     audio_segment = AudioSegment.from_file(filename)
+                     audio_segments.append(audio_segment)
+
+             if audio_segments:
+                 combined = sum(audio_segments)
+                 combined.export(output_file, format="wav")
+
+             # Clean up temporary files
+             for filename in filenames:
+                 if os.path.exists(filename):
+                     os.remove(filename)
+
+         except Exception as e:
+             raise RuntimeError(f"Failed to combine audio files: {e}")
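`_create_output_directory` derives the folder name from 8 random bytes. URL-safe base64 of 8 bytes always yields 12 characters ending in one `=` pad, which is a legal (if unusual) directory name on common filesystems:

```python
import base64
import os

random_bytes = os.urandom(8)  # 8 bytes of entropy per run
folder_name = base64.urlsafe_b64encode(random_bytes).decode("utf-8")
# 8 bytes -> ceil(8 / 3) * 4 = 12 base64 characters, the last one padding
print(len(folder_name))  # -> 12
```

The URL-safe alphabet matters here: standard base64 can emit `/`, which would be interpreted as a path separator.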
+
+
+ # Global converter instance
+ converter = UnifiedAudioConverter(ConversationConfig())
+
+
+ async def synthesize(article_input, input_type: str = "URL", mode: str = "Local", tts_engine: str = "Edge-TTS", language: str = "English"):
+     """Main synthesis function - handles URL, PDF, and Keyword inputs"""
+     try:
+         # Extract text based on input type
+         if input_type == "URL":
+             if not article_input or not isinstance(article_input, str):
+                 return "Please provide a valid URL.", None
+             text = converter.fetch_text(article_input)
+         elif input_type == "PDF":
+             if not article_input:
+                 return "Please upload a PDF file.", None
+             text = converter.extract_text_from_pdf(article_input)
+         else:  # Keyword
+             if not article_input or not isinstance(article_input, str):
+                 return "Please provide a keyword or topic.", None
+             # Search by keyword and compile content
+             text = search_and_compile_content(article_input, language)
+             text = f"Keyword-based content:\n{text}"  # Add marker
+
+         # Limit text to max words
+         words = text.split()
+         if len(words) > converter.config.max_words:
+             text = " ".join(words[:converter.config.max_words])
+
+         # Extract conversation based on mode
+         if mode == "Local":
+             # Local mode is the default (uses the new local LLM)
+             try:
+                 conversation_json = converter.extract_conversation_local(text, language)
+             except Exception as e:
+                 print(f"Local mode failed: {e}, trying API fallback")
+                 # API fallback
+                 api_key = os.environ.get("TOGETHER_API_KEY")
+                 if api_key:
+                     converter.initialize_api_mode(api_key)
+                     conversation_json = converter.extract_conversation_api(text, language)
+                 else:
+                     raise RuntimeError("Local mode failed and no API key available for fallback")
+         else:  # API mode (now secondary)
+             api_key = os.environ.get("TOGETHER_API_KEY")
+             if not api_key:
+                 print("API key not found, falling back to local mode")
+                 conversation_json = converter.extract_conversation_local(text, language)
+             else:
+                 try:
+                     converter.initialize_api_mode(api_key)
+                     conversation_json = converter.extract_conversation_api(text, language)
+                 except Exception as e:
+                     print(f"API mode failed: {e}, falling back to local mode")
+                     conversation_json = converter.extract_conversation_local(text, language)
+
+         # Generate conversation text
+         conversation_text = "\n".join(
+             f"{turn.get('speaker', f'Speaker {i+1}')}: {turn['text']}"
+             for i, turn in enumerate(conversation_json["conversation"])
+         )
+
+         return conversation_text, None
+
+     except Exception as e:
+         return f"Error: {str(e)}", None
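The `max_words` cap in `synthesize` is a plain whitespace-token truncation, applied before the conversation is extracted. A sketch with an illustrative limit (the real limit comes from `ConversationConfig.max_words`):

```python
max_words = 5  # illustrative; the app reads converter.config.max_words
text = "one two three four five six seven"
words = text.split()
if len(words) > max_words:
    text = " ".join(words[:max_words])
print(text)  # -> one two three four five
```

Truncation happens on word boundaries, so it never splits a token, but it can cut mid-sentence; the LLM prompt has to cope with a trailing fragment.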
+
+
+ async def regenerate_audio(conversation_text: str, tts_engine: str = "Edge-TTS", language: str = "English"):
+     """Regenerate audio from edited conversation text"""
+     if not conversation_text.strip():
+         return "Please provide conversation text.", None
+
+     try:
+         # Parse the conversation text back to JSON format
+         conversation_json = converter.parse_conversation_text(conversation_text)
+
+         if not conversation_json["conversation"]:
+             return "No valid conversation found in the text.", None
+
+         # For Korean, use Edge-TTS only (other engines have limited Korean support)
+         if language == "Korean" and tts_engine != "Edge-TTS":
+             tts_engine = "Edge-TTS"  # Automatically switch to Edge-TTS
+
+         # Generate audio based on TTS engine
+         if tts_engine == "Edge-TTS":
+             output_file, _ = await converter.text_to_speech_edge(conversation_json, language)
+         elif tts_engine == "Spark-TTS":
+             if not SPARK_AVAILABLE:
+                 return "Spark TTS not available. Please install required dependencies and clone the Spark-TTS repository.", None
+             converter.initialize_spark_tts()
+             output_file, _ = converter.text_to_speech_spark(conversation_json, language)
+         else:  # MeloTTS
+             if not MELO_AVAILABLE:
+                 return "MeloTTS not available. Please install required dependencies.", None
+             if language == "Korean":
+                 return "MeloTTS does not support Korean. Please use Edge-TTS for Korean.", None
+             converter.initialize_melo_tts()
+             output_file, _ = converter.text_to_speech_melo(conversation_json)
+
+         return "Audio generated successfully!", output_file
+
+     except Exception as e:
+         return f"Error generating audio: {str(e)}", None
+
+
+ def synthesize_sync(article_input, input_type: str = "URL", mode: str = "Local", tts_engine: str = "Edge-TTS", language: str = "English"):
+     """Synchronous wrapper for async synthesis"""
+     return asyncio.run(synthesize(article_input, input_type, mode, tts_engine, language))
+
+
+ def regenerate_audio_sync(conversation_text: str, tts_engine: str = "Edge-TTS", language: str = "English"):
+     """Synchronous wrapper for async audio regeneration"""
+     return asyncio.run(regenerate_audio(conversation_text, tts_engine, language))
+
+
+ def update_tts_engine_for_korean(language):
+     """Update TTS engine options when Korean is selected"""
+     if language == "Korean":
+         return gr.Radio(
+             choices=["Edge-TTS"],
+             value="Edge-TTS",
+             label="TTS Engine",
+             info="ํ•œ๊ตญ์–ด๋Š” Edge-TTS๋งŒ ์ง€์›๋ฉ๋‹ˆ๋‹ค",
+             interactive=False
+         )
+     else:
+         return gr.Radio(
+             choices=["Edge-TTS", "Spark-TTS", "MeloTTS"],
+             value="Edge-TTS",
+             label="TTS Engine",
+             info="Edge-TTS: Cloud-based, natural voices | Spark-TTS: Local AI model | MeloTTS: Local, requires GPU",
+             interactive=True
+         )
+
+
+ def toggle_input_visibility(input_type):
+     """Toggle visibility of URL input, file upload, and keyword input based on input type"""
+     if input_type == "URL":
+         return gr.update(visible=True), gr.update(visible=False), gr.update(visible=False)
+     elif input_type == "PDF":
+         return gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)
+     else:  # Keyword
+         return gr.update(visible=False), gr.update(visible=False), gr.update(visible=True)
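`toggle_input_visibility` is effectively a lookup from input type to a (url, pdf, keyword) visibility triple, with Keyword as the fallthrough case. A gradio-free sketch of the same mapping (the function name here is illustrative):

```python
def visibility_flags(input_type):
    # (url_visible, pdf_visible, keyword_visible)
    flags = {
        "URL": (True, False, False),
        "PDF": (False, True, False),
    }
    return flags.get(input_type, (False, False, True))  # default: Keyword

print(visibility_flags("PDF"))  # -> (False, True, False)
```

Exactly one flag is True for any input, so exactly one of the three widgets is shown at a time.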
+
+
+ # Model initialization (at app startup)
+ if LLAMA_CPP_AVAILABLE:
+     try:
+         model_path = hf_hub_download(
+             repo_id=converter.config.local_model_repo,
+             filename=converter.config.local_model_name,
+             local_dir="./models"
+         )
+         print(f"Model downloaded to: {model_path}")
+     except Exception as e:
+         print(f"Failed to download model at startup: {e}")
+
+
+ # Gradio Interface - improved layout
+ with gr.Blocks(theme='soft', title="AI Podcast Generator", css="""
+     .container {max-width: 1200px; margin: auto; padding: 20px;}
+     .header-text {text-align: center; margin-bottom: 30px;}
+     .input-group {background: #f7f7f7; padding: 20px; border-radius: 10px; margin-bottom: 20px;}
+     .output-group {background: #f0f0f0; padding: 20px; border-radius: 10px;}
+     .status-box {background: #e8f4f8; padding: 15px; border-radius: 8px; margin-top: 10px;}
+ """) as demo:
+     with gr.Column(elem_classes="container"):
+         # Header
+         with gr.Row(elem_classes="header-text"):
+             gr.Markdown("""
+             # ๐ŸŽ™๏ธ AI Podcast Generator - Professional Edition
+             ### Convert any article, blog, PDF document, or topic into an engaging professional podcast conversation with in-depth analysis!
+             """)
+
+         with gr.Row(elem_classes="discord-badge"):
+             gr.HTML("""
+             <p style="text-align: center;">
+                 <a href="https://discord.gg/openfreeai" target="_blank">
+                     <img src="https://img.shields.io/static/v1?label=Discord&message=Openfree%20AI&color=%230000ff&labelColor=%23800080&logo=discord&logoColor=white&style=for-the-badge" alt="badge">
+                 </a>
+             </p>
+             """)
+
+         # Status display section
+         with gr.Row():
+             with gr.Column(scale=1):
+                 gr.Markdown(f"""
+                 #### ๐Ÿค– System Status
+                 - **LLM**: {converter.config.local_model_name.split('.')[0]}
+                 - **Fallback**: {converter.config.api_model_name.split('/')[-1]}
+                 - **Llama CPP**: {"โœ… Ready" if LLAMA_CPP_AVAILABLE else "โŒ Not Available"}
+                 - **Search**: {"โœ… Brave API" if BRAVE_KEY else "โŒ No API"}
+                 """)
+             with gr.Column(scale=1):
  gr.Markdown("""
+                 #### ๐Ÿ“ป Podcast Features
+                 - **Length**: 12-15 professional exchanges
+                 - **Style**: Expert discussions with data & insights
+                 - **Languages**: English & Korean (ํ•œ๊ตญ์–ด)
+                 - **Input**: URL, PDF, or Keywords
  """)
+
+         # Main input section
+         with gr.Group(elem_classes="input-group"):
  with gr.Row():
+                 # Left: input options
+                 with gr.Column(scale=2):
+                     # Input type selection
+                     input_type_selector = gr.Radio(
+                         choices=["URL", "PDF", "Keyword"],
+                         value="URL",
+                         label="๐Ÿ“ฅ Input Type",
+                         info="Choose your content source"
+                     )
+
+                     # URL input
+                     url_input = gr.Textbox(
+                         label="๐Ÿ”— Article URL",
+                         placeholder="Enter the article URL here...",
+                         value="",
+                         visible=True,
+                         lines=2
+                     )
+
+                     # PDF upload
+                     pdf_input = gr.File(
+                         label="๐Ÿ“„ Upload PDF",
+                         file_types=[".pdf"],
+                         visible=False
+                     )
+
+                     # Keyword input
+                     keyword_input = gr.Textbox(
+                         label="๐Ÿ” Topic/Keyword",
+                         placeholder="Enter a topic (e.g., 'AI trends 2024', '์ธ๊ณต์ง€๋Šฅ ์ตœ์‹  ๋™ํ–ฅ')",
+                         value="",
+                         visible=False,
+                         info="System will search and compile latest information",
+                         lines=2
+                     )
+
+                 # Right: settings
+                 with gr.Column(scale=1):
+                     # Language selection
+                     language_selector = gr.Radio(
+                         choices=["English", "Korean"],
+                         value="English",
+                         label="๐ŸŒ Language / ์–ธ์–ด",
+                         info="Output language"
+                     )
+
+                     # Processing mode
+                     mode_selector = gr.Radio(
+                         choices=["Local", "API"],
+                         value="Local",
+                         label="โš™๏ธ Processing Mode",
+                         info="Local: On-device | API: Cloud"
+                     )
+
+                     # TTS engine
+                     tts_selector = gr.Radio(
+                         choices=["Edge-TTS", "Spark-TTS", "MeloTTS"],
+                         value="Edge-TTS",
+                         label="๐Ÿ”Š TTS Engine",
+                         info="Voice synthesis engine"
+                     )
 
+         # Generate button
+         with gr.Row():
+             convert_btn = gr.Button(
+                 "๐ŸŽฏ Generate Professional Conversation",
+                 variant="primary",
+                 size="lg",
+                 scale=1
+             )
+
+         # Output section
+         with gr.Group(elem_classes="output-group"):
+             with gr.Row():
+                 # Left: conversation text
+                 with gr.Column(scale=3):
+                     conversation_output = gr.Textbox(
+                         label="๐Ÿ’ฌ Generated Professional Conversation (Editable)",
+                         lines=25,
+                         max_lines=50,
+                         interactive=True,
+                         placeholder="Professional podcast conversation will appear here...\n์ „๋ฌธ ํŒŸ์บ์ŠคํŠธ ๋Œ€ํ™”๊ฐ€ ์—ฌ๊ธฐ์— ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค...",
+                         info="Edit the conversation as needed. Format: 'Speaker Name: Text'"
+                     )
+
+                     # Audio generation button
+                     with gr.Row():
+                         generate_audio_btn = gr.Button(
+                             "๐ŸŽ™๏ธ Generate Audio from Text",
+                             variant="secondary",
+                             size="lg"
+                         )

+                 # Right: audio output and status
+                 with gr.Column(scale=2):
+                     audio_output = gr.Audio(
+                         label="๐ŸŽง Professional Podcast Audio",
+                         type="filepath",
+                         interactive=False
+                     )
+
+                     status_output = gr.Textbox(
+                         label="๐Ÿ“Š Status",
+                         interactive=False,
+                         lines=3,
+                         elem_classes="status-box"
+                     )
+
+                     # Help
+                     gr.Markdown("""
+                     #### ๐Ÿ’ก Quick Tips:
+                     - **URL**: Paste any article link
+                     - **PDF**: Upload documents directly
+                     - **Keyword**: Enter topics for AI research
+                     - Edit conversation before audio generation
+                     - Korean (ํ•œ๊ตญ์–ด) fully supported
+                     """)
+
+         # Examples section
+         with gr.Accordion("๐Ÿ“š Examples", open=False):
+             gr.Examples(
+                 examples=[
+                     ["https://huggingface.co/blog/openfree/cycle-navigator", "URL", "Local", "Edge-TTS", "English"],
+                     ["quantum computing breakthroughs", "Keyword", "Local", "Edge-TTS", "English"],
+                     ["https://huggingface.co/papers/2505.14810", "URL", "Local", "Edge-TTS", "Korean"],
+                     ["์ธ๊ณต์ง€๋Šฅ ์œค๋ฆฌ์™€ ๊ทœ์ œ", "Keyword", "Local", "Edge-TTS", "Korean"],
+                 ],
+                 inputs=[url_input, input_type_selector, mode_selector, tts_selector, language_selector],
+                 outputs=[conversation_output, status_output],
+                 fn=synthesize_sync,
+                 cache_examples=False,
  )

+     # Input type change handler
    input_type_selector.change(
        fn=toggle_input_visibility,
        inputs=[input_type_selector],

        outputs=[tts_selector]
    )
 
+     # Event wiring
    def get_article_input(input_type, url_input, pdf_input, keyword_input):
        """Get the appropriate input based on input type"""
        if input_type == "URL":

        share=False,
        server_name="0.0.0.0",
        server_port=7860
+     )