fix typos

thinkwee 2024-07-25 09:46:26 +08:00
parent df53ccd81f
commit e644770fb1
3 changed files with 38 additions and 2 deletions

View File

@@ -8,7 +8,7 @@
6,./images/autonomous_agents_for_collaborative_20240621.png,Avalon's Game of Thoughts: Battle Against Deception through Recursive Contemplation,"Shenzhi Wang, Chang Liu, Zilong Zheng, Siyuan Qi, Shuo Chen, Qisen Yang, Andrew Zhao, Chaofei Wang, Shiji Song, Gao Huang","Recent breakthroughs in large language models (LLMs) have brought remark-able success in the field of LLM-as-Agent. Nevertheless, a prevalent assumptionis that the information processed by LLMs is consistently honest, neglecting thepervasive deceptive or misleading information in human society and AI-generatedcontent.This oversight makes LLMs susceptible to malicious manipulations,potentially resulting in detrimental outcomes. This study utilizes the intricateAvalon game as a testbed to explore LLMs’ potential in deceptive environments.Avalon, full of misinformation and requiring sophisticated logic, manifests as a“Game-of-Thoughts”. Inspired by the efficacy of humans’ recursive thinking andperspective-taking in the Avalon game, we introduce a novel framework, Recur-sive Contemplation (ReCon), to enhance LLMs’ ability to identify and counteractdeceptive information. ReCon combines formulation and refinement contempla-tion processes; formulation contemplation produces initial thoughts and speech,while refinement contemplation further polishes them. Additionally, we incor-porate first-order and second-order perspective transitions into these processesrespectively. Specifically, the first-order allows an LLM agent to infer others’mental states, and the second-order involves understanding how others perceivethe agent’s mental state.......","Tsinghua University, BIGAI, Technical University of Munich"
7,./images/avalon's_game_of_thoughts_20231002.png,Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication,"Weize Chen, Chenfei Yuan, Jiarui Yuan, Yusheng Su, Chen Qian, Cheng Yang, Ruobing Xie, Zhiyuan Liu, Maosong Sun","Natural language (NL) has long been the predominant format for human cognition and communication, and by extension, has been similarly pivotal in the development and application of Large Language Models (LLMs). Yet, besides NL, LLMs have seen various non-NL formats during pre-training, such as code and logical expression. NL's status as the optimal format for LLMs, particularly in single-LLM reasoning and multi-agent communication, has not been thoroughly examined. In this work, we challenge the default use of NL by exploring the utility of non-NL formats in these contexts. We show that allowing LLMs to autonomously select the most suitable format before reasoning or communicating leads to a 3.3 to 5.7\% improvement in reasoning efficiency for different LLMs, and up to a 72.7\% reduction in token usage in multi-agent communication, all while maintaining communicative effectiveness. Our comprehensive analysis further reveals that LLMs can devise a format from limited task instructions and that the devised format is effectively transferable across different LLMs. Intriguingly, the structured communication format decided by LLMs exhibits notable parallels with established agent communication languages, suggesting a natural evolution towards efficient, structured communication in agent communication.","Tsinghua University, Tencent, Beijing University of Posts and Telecommunications"
8,./images/beyond_natural_language_llms_20240228.png,Building Cooperative Embodied Agents Modularly with Large Language Models,"Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan","In this work, we address challenging multi-agent cooperation problems with de-centralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments. While previous re-search either presupposes a cost-free communication channel or relies on a central-ized controller with shared observations, we harness the commonsense knowledge,reasoning ability, language comprehension, and text generation prowess of LLMsand seamlessly incorporate them into a cognitive-inspired modular framework thatintegrates with perception, memory, and execution. Thus building a CooperativeEmbodied Language Agent CoELA, who can plan, communicate, and cooperatewith others to accomplish long-horizon tasks efficiently. Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strongplanning-based methods and exhibit emergent effective communication. Thoughcurrent Open LMs like LLAMA-2 still underperform, we fine-tune a CoLLAMAwith data collected with our agents and show how they can achieve promisingperformance. We also conducted a user study for human-agent interaction anddiscovered that CoELA communicating in natural language can earn more trust andcooperate more effectively with humans. Our research underscores the potential ofLLMs for future research in multi-agent cooperation. Videos can be found on theproject website https://vis-www.cs.umass.edu/Co-LLM-Agents/.","University of Massachusetts Amherst, Tsinghua University, Shanghai Jiao Tong University, MIT, MIT-IBM Watson AI Lab"
-9,./images/building_cooperative_embodied_agents_20230705.png,"CAMEL: Communicative Agents for ""Mind"" Exploration of Large Language Model Society","Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, Bernard Ghanem","The rapid advancement of chat-based language models has led to remarkableprogress in complex task-solving. However, their success heavily relies on humaninput to guide the conversation, which can be challenging and time-consuming.This paper explores the potential of building scalable techniques to facilitate au-tonomous cooperation among communicative agents, and provides insight intotheir “cognitive” processes. To address the challenges of achieving autonomouscooperation, we propose a novel communicative agent framework named role-playing . Our approach involves using inception prompting to guide chat agentstoward task completion while maintaining consistency with human intentions. Weshowcase how role-playing can be used to generate conversational data for studyingthe behaviors and capabilities of a society of agents, providing a valuable resourcefor investigating conversational language models. In particular, we conduct com-prehensive studies on instruction-following cooperation in multi-agent settings.Our contributions include introducing a novel communicative agent framework,offering a scalable approach for studying the cooperative behaviors and capabili-ties of multi-agent systems, and open-sourcing our library to support research oncommunicative agents and beyond: https://github.com/camel-ai/camel.",King Abdullah University of Science and Technology
+9,./images/building_cooperative_embodied_agents_20230705.png,"CAMEL: Communicative Agents for ""Mind"" Exploration of Large Language Model Society","Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, Bernard Ghanem","The rapid advancement of chat-based language models has led to remarkableprogress in complex task-solving. However, their success heavily relies on humaninput to guide the conversation, which can be challenging and time-consuming.This paper explores the potential of building scalable techniques to facilitate au-tonomous cooperation among communicative agents, and provides insight intotheir “cognitive” processes. To address the challenges of achieving autonomouscooperation, we propose a novel communicative agent framework named role-playing . Our approach involves using inception prompting to guide chat agentstoward task completion while maintaining consistency with human intentions. We showcase how role-playing can be used to generate conversational data for studyingthe behaviors and capabilities of a society of agents, providing a valuable resourcefor investigating conversational language models. In particular, we conduct com-prehensive studies on instruction-following cooperation in multi-agent settings.Our contributions include introducing a novel communicative agent framework,offering a scalable approach for studying the cooperative behaviors and capabili-ties of multi-agent systems, and open-sourcing our library to support research oncommunicative agents and beyond: https://github.com/camel-ai/camel.",King Abdullah University of Science and Technology
10,./images/camel_communicative_agents_for_20230331.png,ChatDev: Communicative Agents for Software Development,"Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun","Software development is a complex task thatnecessitates cooperation among multiple mem-bers with diverse skills. Numerous studies useddeep learning to improve specific phases in awaterfall model, such as design, coding, andtesting.However, the deep learning modelin each phase requires unique designs, lead-ing to technical inconsistencies across variousphases, which results in a fragmented and in-effective development process. In this paper,we introduce ChatDev, a chat-powered soft-ware development framework in which special-ized agents driven by large language models(LLMs) are guided in what to communicate(via chat chain) and how to communicate (viacommunicative dehallucination). These agentsactively contribute to the design, coding, andtesting phases through unified language-basedcommunication, with solutions derived fromtheir multi-turn dialogues. We found their uti-lization of natural language is advantageousfor system design, and communicating in pro-gramming language proves helpful in debug-ging. This paradigm demonstrates how linguis-tic communication facilitates multi-agent col-laboration, establishing language as a unify-ing bridge for autonomous task-solving amongLLM agents. The code and data are availableat https://github.com/OpenBMB/ChatDev.","Tsinghua University, The University of Sydney, BUPT, Modelbest Inc."
11,./images/chatdev_communicative_agents_for_20230716.png,Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate,"Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi","Modern large language models (LLMs) likeChatGPT have shown remarkable performanceon general language tasks but still struggle oncomplex reasoning tasks, which drives the re-search on cognitive behaviors of LLMs to ex-plore human-like problem-solving strategies.Along this direction, one representative strat-egy is self-reflection, which asks an LLM torefine the solution with the feedback gener-ated by itself iteratively. However, our studyshows that such reflection-style methods suf-fer from the Degeneration-of-Thought (DoT)problem: once the LLM has established confi-dence in its solutions, it is unable to generatenovel thoughts later through reflection even ifits initial stance is incorrect. To address theDoT problem, we propose a Multi-Agent De-bate (MAD) framework, in which multipleagents express their arguments in the state of“tit for tat” and a judge manages the debateprocess to obtain a final solution. Clearly, ourMAD framework encourages divergent think-ing in LLMs which would be helpful for tasksthat require deep levels of contemplation. Ex-periment results on two challenging datasets,commonsense machine translation and counter-intuitive arithmetic reasoning, demonstrate theeffectiveness of our MAD framework. Exten-sive analyses suggest that the adaptive break ofdebate and the modest level of “tit for tat” stateare required for MAD to obtain good perfor-mance. Moreover, we find that LLMs might notbe a fair judge if different LLMs are used foragents. Code is available at https://github.com/Skytliang/Multi-Agents-Debate.","Tsinghua University, Shanghai Jiao Tong University, Tencent AI Lab"
12,./images/encouraging_divergent_thinking_in_20230530.png,Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate,"Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin","Large Language Models (LLMs) have shownimpressive capabilities in various applications,but they still face various inconsistency issues.Existing works primarily focus on the incon-sistency issues within a single LLM, while wecomplementarily explore the inter-consistencyamong multiple LLMs for collaboration. Toexamine whether LLMs can collaborate effec-tively to achieve a consensus for a shared goal,we focus on commonsense reasoning, and in-troduce a formal debate framework (FORD)to conduct a three-stage debate among LLMswith real-world scenarios alignment: fair de-bate, mismatched debate, and roundtable de-bate. Through extensive experiments on var-ious datasets, LLMs can effectively collabo-rate to reach a consensus despite noticeableinter-inconsistencies, but imbalances in theirabilities can lead to domination by superiorLLMs. Leveraging a more advanced LLM likeGPT-4 as an authoritative judge can boost col-laboration performance. Our work contributesto understanding the inter-consistency amongLLMs and lays the foundation for develop-ing future collaboration methods. Codes anddata are available at https://github.com/Waste-Wood/FORD.","Harbin Institute of Technology, Singapore Management University"

View File

@@ -411,7 +411,7 @@ tonomous cooperation among communicative agents, and provides insight into
their “cognitive” processes. To address the challenges of achieving autonomous
cooperation, we propose a novel communicative agent framework named role-
playing . Our approach involves using inception prompting to guide chat agents
-toward task completion while maintaining consistency with human intentions. We
+toward task completion while maintaining consistency with human intentions. We 
showcase how role-playing can be used to generate conversational data for studying
the behaviors and capabilities of a society of agents, providing a valuable resource
for investigating conversational language models. In particular, we conduct com-

View File

@@ -0,0 +1,36 @@
import pandas as pd

input_file = 'papers.csv'
df_raw = pd.read_csv(input_file, on_bad_lines='warn')

# Category name -> id used in that category's cover image filename (e.g. "./images/1d.png").
cat2id = {'Communication': '1',
          'Organization': '2',
          'Evolution': '3',
          'Simulation': '4'}

for cat in ['Communication', 'Evolution', 'Simulation', 'Organization']:
    df = df_raw[df_raw['AwesomeListCategory'] == cat]
    new_df = pd.DataFrame(columns=['image_path', 'title', 'author', 'summary', 'affiliation'])
    index = 0

    # The first page of each book pairs the category cover image with the first paper.
    first_title = df.iloc[0]['Title']
    first_author = df.iloc[0]['Authors']
    first_affiliation = df.iloc[0]['Affiliation']
    first_summary = df.iloc[0]['Abstract'].replace("\n", "")
    first_cover_path = "./images/" + cat2id[cat] + "d.png"
    first_line = pd.DataFrame([[first_cover_path, first_title, first_author, first_summary, first_affiliation]],
                              columns=['image_path', 'title', 'author', 'summary', 'affiliation'])
    new_df = pd.concat([new_df, first_line], ignore_index=True)

    # Remaining pages: iteration starts at row 1 while image_path_list is indexed
    # from row 0, so each entry is shown alongside the preceding paper's figure.
    image_path_list = df['PaperIndex'].tolist()
    for _, line in df[1:].iterrows():
        print(line['Title'])
        new_line = pd.DataFrame([["./images/{}.png".format(image_path_list[index]),
                                  line['Title'],
                                  line['Authors'],
                                  str(line['Abstract']).replace("\n", ""),
                                  line['Affiliation']]],
                                columns=['image_path', 'title', 'author', 'summary', 'affiliation'])
        new_df = pd.concat([new_df, new_line], ignore_index=True)
        index += 1

    # Closing page carries the last paper's figure and a call for contributions.
    last_line = pd.DataFrame([["./images/{}.png".format(image_path_list[index]),
                               "To be Continued...",
                               "Your Contributions are Welcome!",
                               "", ""]],
                             columns=['image_path', 'title', 'author', 'summary', 'affiliation'])
    new_df = pd.concat([new_df, last_line], ignore_index=True)
    new_df.to_csv("./book_{}/data.csv".format(cat.lower()))