FIU Libraries
Artificial Intelligence



Non-FIU dissertations

Many universities provide full-text access to their dissertations via a digital repository. If you know the title of a particular dissertation or thesis, try doing a Google search.

OATD (Open Access Theses and Dissertations) aims to be the best possible resource for finding open access graduate theses and dissertations published around the world, with metadata from over 800 colleges, universities, and research institutions. It currently indexes over 1 million theses and dissertations.

DART-Europe is a discovery service for open access research theses awarded by European universities.

Theses Canada offers a union catalog of Canadian theses and dissertations, in both electronic and analog formats, through the search interface on this portal.

The Center for Research Libraries (CRL) has catalog records for over 800,000 foreign doctoral dissertations, currently representing more than 90 countries and over 1,200 institutions.

An international collaborative resource, the NDLTD Union Catalog contains more than one million records of electronic theses and dissertations. Use BASE, the VTLS Visualizer or any of the geographically specific search engines noted lower on their webpage.

ProQuest Dissertations & Theses Global indexes doctoral dissertations and master's theses in all areas of academic research, with international coverage.



Artificial Intelligence Master's Program Online

Jobs in artificial intelligence are in high demand and projected to grow. You can meet this demand and advance your career with an online master's degree in Artificial Intelligence from Johns Hopkins University. From topics in machine learning and natural language processing to expert systems and robotics, start here to define your career as an artificial intelligence engineer.


Artificial Intelligence Program Overview

With the expertise of the Johns Hopkins Applied Physics Lab, we’ve developed one of the nation’s first online artificial intelligence master’s programs to prepare engineers like you to take full advantage of opportunities in this field. The highly advanced curriculum is designed to deeply explore AI areas, including computer robotics, natural language processing, image processing, and more.

We have assembled a team of top-level researchers, scientists, and engineers to guide you through our rigorous online academic courses. Because we are a hub and frontrunner in artificial intelligence, we can tailor our artificial intelligence online master’s content to include the most up-to-date practices and offer core courses that address the AI-driven technologies, techniques, and issues that power our modern world.

The online master’s in Artificial Intelligence program balances theoretical concepts with the practical knowledge you can apply to real-world systems and processes. Courses deeply explore areas of AI, including robotics, natural language processing, image processing, and more—fully online.

At the program’s completion, you will:

  • Be able to describe the requirements, drivers, functions, components, interdependencies, risks, and quality factors for various systems and processes.
  • Possess the advanced skills needed to develop new artificial intelligence features.
  • Select courses to tailor your degree and gain the knowledge that works best for you.

We offer two program options for Artificial Intelligence; you can earn a Master of Science in Artificial Intelligence or a graduate certificate.

Artificial Intelligence Courses

Get details about course requirements, prerequisites, and electives offered within the program. All courses are taught by subject-matter experts who are executing the technologies and techniques they teach. For exact dates, times, locations, fees, and instructors, please refer to the course schedule published each term.

Proficiency Exams

A proficiency exam is available in Artificial Intelligence. If you have not completed the necessary prerequisite(s) in a formal college-level course but have extensive experience in these areas, you may apply to take a proficiency exam provided by the Engineering for Professionals program. Successful completion of the exam(s) allows you to opt out of certain prerequisites.

Program Contacts

Barton Paulhamus

Tony Johnson

Meghan Stewart

Valeria Alfaro

Tuition and Fees

Did you know that 78 percent of our enrolled students’ tuition is covered by employer contribution programs? Find out more about the cost of tuition for prerequisite and program courses and the Dean’s Fellowship.

Why Hopkins?

When ambition meets opportunity, anything is possible. Earn your degree on your terms at Johns Hopkins Engineering for Professionals.


Network and Connect - Your knowledge is stronger with a network. In the Artificial Intelligence program, you will make career-advancing connections with accomplished scientists and engineers who represent a variety of disciplines across many industries.

Engineers See the World Differently - Watch our video to revisit the inspiration that sparked your curiosity in science and engineering.


Beyond Rankings: We help you fulfill your vision. - We are proud to be ranked among the top online graduate engineering schools by U.S. News & World Report. But we’re about more than just numbers and rankings—we’re focused on making sure you flourish as a learner and engineer.

Academic Calendar

Find out when registration opens, classes start, transcript deadlines and more. Applications are accepted year-round, so you can apply any time.

Related News


The Impact of AI on the Engineering Field

  • Artificial Intelligence

AI has made strides in various fields recently. But what does it mean for engineers? JHU EP discusses how engineers can use AI to their benefit.


Careers in Machine Learning vs. Data Science vs. Artificial Intelligence

  • Computer Science
  • Data Science

Explore the similarities, salary prospects, and transferable skills of careers in data science, machine learning, and artificial intelligence.


Bart Paulhamus Named Chair of Artificial Intelligence

Bart Paulhamus, co-founder and chief of the Johns Hopkins Applied Physics Lab’s (APL) Intelligent Systems Center, has been appointed chair of the Artificial Intelligence program at Johns Hopkins University’s Engineering…

Master of Science in Artificial Intelligence (AI)

WPI Professor Elke Rundensteiner works with students on an NSF-funded project on fairness in artificial intelligence.

Prepare for the AI Career Opportunities of the Future  

We’re all experiencing the transformative impact artificial intelligence is having on our everyday lives and on nearly all sectors of the economy. Are you prepared to take advantage of the opportunities AI presents in your professional career? Leveraging decades of AI expertise, WPI is offering an MS in Artificial Intelligence. Through project-based courses and either a capstone or an MS thesis, you’ll grow your technical expertise in understanding, developing, deploying, and innovating AI techniques and systems with a responsible approach in this rapidly growing area.

WPI's MS in AI is available on campus or online.


Program Highlights:

  • Build in-demand skills sought by employers in machine learning, deep learning, natural language processing, generative AI, robotics planning, computer vision, responsible AI, and much more.
  • Benefit from WPI’s deep history of teaching and furthering artificial intelligence innovations through impactful project work with industrial partners.
  • Take advantage of WPI’s interdisciplinary approach and hone your AI skills in courses that interest you, selected from academic units across the entire campus.
  • Gain real-world experience as you learn to develop, deploy, and innovate with AI techniques and systems in your team-based capstone project working with industrial mentors.
  • Customize your degree with thirteen specializations, including AI & Security, AI & Health, AI & Software Systems, and many more.
  • Learn from world-class faculty who are industry and scholarly leaders working on cutting-edge research projects in areas critical to our economy and to society.


CS Faculty Research

Faster, Fairer, and More Accurate AI Models with Graph Data

WPI Prof. Fabricio Murai discusses their current research related to FRV AI.

Extracting Knowledge with Natural Language Processing

Prof. Kyumin Lee describes his research on information retrieval and AI.

Better Data for Better AI Results

Prof. Roee Shraga describes his work to improve the data used by AI systems and algorithms.

Understanding Brain Networks using Deep Learning

WPI Prof. Xiangnan Kong discusses their current research related to AI.

CS Graduate Student Research

Clustering for Confidentiality

Adam Beauchaine presents his cyber security research: Clustering for Confidentiality, An Exploration of Unsupervised Learning for the Security of Data Assets. 

Reinforcement Learning for Education

WPI student Morgan Lee presents their CS research: Expert Features for a Student Support Recommendation Contextual Bandit Algorithm.

Data-Driven Optimization of Wire Arc DED Manufacturing Conditions for Improved Bead Shape Prediction

WPI student Stephen Price shared his cutting-edge research in which he is trying to improve additive manufacturing techniques to create more precise and useful technologies. 

Bureau of Labor Statistics data: annual job openings projected through 2032, and the median salary for computer and information research scientists (2022).


Professor Elke Rundensteiner Speaks with BestColleges

Elke Rundensteiner, a WPI professor of computer science, told BestColleges that WPI is uniquely positioned to help students excel in the rapidly changing AI landscape.

"In some sense, we've been doing that for the last 50 years ... When I came to WPI 30 years ago, we had already the AI course itself."

Curriculum for Master of Science in Artificial Intelligence  

Students must complete at least 30 credit hours of study in the MS program, which is equivalent to a minimum of 10 three-credit graduate courses. As part of these 30 credits, you may select the MS thesis option, which requires a nine-credit master’s thesis, or the project-based capstone option, which requires a three-credit capstone project course, referred to as the Graduate Qualifying Project (GQP) or Capstone Project. Each student should carefully weigh the pros and cons of these alternatives in consultation with their academic advisor prior to selecting an option, typically in the second year of study. The AI department will allow a student to change between the thesis and GQP options only once.

All entering students must submit a Plan of Study identifying the courses to be taken. The Plan of Study must be approved by the student’s advisor and the MS in AI Graduate Committee and must include the minimum requirements listed below. These M.S. degree requirements have been designed to provide a comprehensive yet flexible program to students who are pursuing an M.S. degree exclusively and students who are pursuing a combined BS/MS degree.

Preparatory Courses  

You are encouraged to take preparatory courses designed to help you fill gaps in background knowledge or skills fundamental to AI, including programming and mathematical foundations. MS in AI students may count at most six graduate credits of the following preparatory courses toward the degree:

  • CS 5007 Introduction to Programming Concepts, Data Structures, and Algorithms
  • CS 5008 Introduction to Systems and Network Programming
  • CS 5084 Introduction to Algorithms: Design and Analysis
  • DS 517/MA517 Mathematical Foundations for Data Science
  • DS 501 Introduction to Data Science
  • DS 577 Machine Learning for Engineering & Science Applications
  • MIS 587 Business Applications in Machine Learning

Core Courses

MS in AI students must complete a five-course core by taking one course each in the five core MS-AI bins: AI, Ethics & AI, Machine Learning, Knowledge Representation & Reasoning, and Interaction & Action. Students may choose to take additional core courses, beyond the five required core courses, from the bins below:

  • CS 534 Introduction to Artificial Intelligence
  • DS 555/CS 555 Responsible Artificial Intelligence
  • SS 560 Artificial Intelligence: Exploring Technology and Policy
  • MIS 520 Artificial Intelligence and its Ethical Application in Business
  • WR 513 Ethical Impact and Communication in Robotics and Artificial Intelligence Research
  • CS 548 Knowledge Discovery and Data Mining
  • DS 502/MA 543 Statistical Methods for Data Science
  • CS 539 Machine Learning
  • DS 541/CS 541 Deep Learning
  • CS 586/DS 504 Big Data Analytics
  • DS 551/CS 551 Reinforcement Learning
  • ECE 571 Machine Learning for Engineering Applications
  • ECE 557/CS 557/DS 557 Machine Learning for Cybersecurity
  • ECE 556/CS 556/DS 556 On-Device Deep Learning
  • RBE 577 Machine Learning for Robotics
  • DS 553/CS 553 Machine Learning Development & Operations (MLOps)
  • CS 542 Database Management Systems
  • CS 585/DS 503 Big Data Management
  • CS 509 Design of Software Systems
  • MIS 502 Data Management for Analytics
  • OIE 559 Advanced Prescriptive Analytics
  • RBE 550 Robot Motion Planning
  • RBE 575 Safety and Guarantees in Autonomous Robotics
  • RBE 511 Swarm Intelligence
  • DS 552/CS 552 Generative Artificial Intelligence
  • DS 554/CS 554 Natural Language Processing
  • DS 547/CS 547 Information Retrieval
  • CS 549/RBE 549 Computer Vision
  • RBE 526/CS 526 Human-Robot Interaction
  • ECE 545/CS 545 Digital Image Processing

Capstone Project or MS Thesis

MS in AI students must complete either a three-credit capstone project experience or a nine-credit MS thesis from the list below.

For the capstone project, the MS-AI student can select one of the three capstone courses based on their primary interest and with approval of their MS-AI advisor and the instructor of the course.

  • DS 598 Graduate Qualifying Project in Data Science (3 credits)
  • CS 594/DS 594 Graduate Qualifying Project in Artificial Intelligence (3 credits)
  • RBE 594 Capstone Project Experience in Robotics Engineering (3 credits)
  • CS 599/DS 599/RBE 599 Master's Thesis (9 credits)

This three-credit graduate qualifying project, typically done in teams, provides a capstone experience in applying Artificial Intelligence skills to a real-world problem. It is carried out in cooperation with an industrial sponsor and is approved and overseen by a core or collaborative faculty member in the Artificial Intelligence Program. This offering integrates theory and practice of Artificial Intelligence and includes applying tools and techniques acquired in the Artificial Intelligence Program to a real-world problem. In addition to a written report, this project must be presented in a formal presentation to faculty of the AI program and sponsors. Professional development skills, such as communication, teamwork, leadership, and collaboration, will be practiced. This course is a degree requirement for the Master of Science in Artificial Intelligence (MS-AI) and may not be taken before completion of 21 credits in the program. Students outside the MS-AI program must get the instructor’s approval before enrolling.

Prerequisite: Completion of at least 24 credits of the AI degree, or consent of the instructor. With permission of the instructor, the GQP can be taken a second time for a total of 6 credits.

The MS thesis in the Artificial Intelligence Program consists of a research or development project worth a minimum of 9 graduate credit hours. Students interested in research, and in particular those considering pursuing a Ph.D. degree in a related area, are encouraged to select the MS thesis option. The student can sign up for MS thesis credits such as CS 599, DS 599, or RBE 599, as long as a faculty member affiliated with the MS-AI program serves as thesis advisor and the thesis topic relates to AI. Students must submit a thesis proposal, endorsed by the advisor, for approval by the program by the end of the semester in which they have registered for a third thesis credit. Proposals will be considered only at regularly scheduled program meetings. Students funded by a teaching assistantship, research assistantship, or fellowship are expected to pursue the thesis option. The student must then satisfactorily complete a written thesis and present the results to the AI faculty in a public presentation.

Want to view all course listings and descriptions?

Meet Our World-Class Faculty

Elke Rundensteiner

As founding Head of the interdisciplinary Data Science program here at WPI, I take great pleasure in doing all in my power to support the Data Science community in all its facets, from research collaborations and new educational initiatives to our innovative industry-sponsored and mentored Graduate Qualifying Projects at the graduate level.

Kyumin Lee

Dr. Lee’s research interests are in information retrieval, natural language processing, social computing, machine learning, and cybersecurity over large-scale networked information systems like the Web and social media. He focuses on threats to these systems, designs methods to mitigate negative behaviors (e.g., misinformation, hate speech), and looks for positive opportunities to mine and analyze these systems to develop next-generation algorithms and architectures (e.g., recommender systems, natural language understanding).

Xiangnan Kong

Professor Kong’s research interests focus on data mining and machine learning, with emphasis on addressing the data science problems in biomedical and social applications. Data today involves an increasing number of data types that need to be handled differently from conventional data records, and an increasing number of data sources that need to be fused together. Dr. Kong is particularly interested in designing algorithms to tame data variety issues in various research fields, such as biomedical research, social computing, neuroscience, and business intelligence.

Carlo Pinciroli

The focus of my research is designing innovative tools for swarm robotics. I am developing Buzz, a programming language specifically designed for real-world robot swarms. During my Ph.D., I designed ARGoS, which is currently the fastest general-purpose robot simulator in the literature. Recent work focuses on human-swarm interaction and multi-robot learning. I am also working on swarm robotics solutions for disaster response scenarios, such as search-and-rescue and firefighting.

Carolina Ruiz

Carolina Ruiz is the Associate Dean of Arts and Sciences and the Harold L. Jurist ’61 and Heather E. Jurist Dean's Professor of Computer Science. She joined the WPI faculty in 1997. Prof. Ruiz’s research is in Artificial Intelligence, Machine Learning, and Data Mining, and their applications to Medicine and Health. She has worked on several clinical domains, including sleep, stroke, obesity, and pancreatic cancer.

Xiaozhong Liu

Dr. Xiaozhong Liu is an Associate Professor of Computer Science and Data Science at WPI. Before that, he was an Associate Professor at the School of Informatics, Computing and Engineering at Indiana University Bloomington. His research interests include natural language processing (NLP), text/graph mining, information retrieval/recommendation, metadata, and computational social science. His dissertation at Syracuse University (advisor Dr. Elizabeth D. Liddy) explored an innovative ranking method that weighted the retrieved results by leveraging dynamic community interests.

Jacob Whitehill

My research interests are in applied machine learning, computer vision, data science and their applications to education, affective computing, and human behavior recognition. My work is highly interdisciplinary and frequently intersects cognitive science, psychology, and education. Before joining WPI, I was a research scientist at the Office of the Vice Provost for Advances in Learning at Harvard University. In 2012, I co-founded Emotient, a San Diego-based startup company for automatic emotion and facial expression recognition.

Make Our Program Yours

Elective Courses

As an MS in AI student, you may choose to take additional elective or other AI-related courses from the two options below to reach the 30-credit requirement for the degree:

Other AI-Related Courses: With permission from your academic advisor, you may take any number of AI-related special topics courses, including CS525/DS595/RBE595, Independent Study (ISG), and Directed Research (CS598/DS597/RBE596). In order for these to be counted toward your degree, they must be offered by faculty with a core or a collaborative appointment in the MS in AI program.

Specializations: You may choose to take up to six graduate credits in courses that are not part of the MS in AI core bins in any discipline and count them toward your degree. All requirements set by the respective unit offering a course must be followed. Students may earn a specialization “AI&X” by taking six elective credits in a discipline thematically related to AI and approved by your advisor. These areas of specialization include, but are not limited to, the ones listed below:

  • AI & Business: ML for Business, Project Management, Supply-Chain Optimization
  • AI & Engineered Systems: Digital Signal Processing, Medical Signal Analysis, Foundations of Robotics, Sensor Engineering
  • AI & Foundations: Mathematical Optimization, Multi-variate Data Analysis, Advanced Statistics
  • AI & Game Development: Serious and Applied Games, Design of Interactive Experiences
  • AI & Global Development: Sustainability, Climate Change, Social Justice, Global Health
  • AI & Health: Bioinformatics, Health Sciences, Neuroscience, Biology
  • AI & Human Experiences: Human-Computer Interaction, Tangible & Embodied Interaction, Human-Robot Interaction
  • AI & Learning Sciences: Foundations of Learning Sciences, Learning Environments in Education
  • AI & Material Sciences: Smart Materials, Nanomaterials, Manufacturing Processes
  • AI & Neuroscience: Computational Neuroscience, Brain-Computer Interaction, Advanced Psychophysiology
  • AI & Robotics: Robot Dynamics, Biomedical Robotics, Soft Robotics
  • AI & Security: Software Security Design and Analysis, Machine Learning in Cybersecurity, Cryptography
  • AI & Software Systems: Adv. Software Eng., Algorithms, Mobile & Ubiquitous Computing, Distributed Systems

Note 1: Less than 50% of the credits in the MS in Artificial Intelligence can be taken from the Business School; that is, a maximum of 14 credits of a 30-credit program. For 3-credit courses, this means a maximum of 4 courses from the Business School (any course with a prefix of ACC, BUS, ETR, FIN, MIS, MKT, OBC, or OIE).

Note 2: A single course cannot be used to meet two or more requirements of the MS-AI degree. For instance, if a course is used to meet one particular bin requirement, it cannot also be used to meet a second bin requirement, nor can it be counted towards fulfilling a thematically related specialization.

Important Dates

Next Start: August 22, 2024  

Application Deadline: Apply anytime!  

How WPI Professors Teach AI Classes

Teaching in AI and ML

Prof. Kyumin Lee talks about classes for recommendation systems, machine learning, and artificial intelligence.

Fusing Project-Based Learning into AI Education

Prof. Rodica Neamtu talks about how she incorporates project-based learning into her classes.

From the University Magazine


Off Road Brawn with AI Brains

The Autonomous Vehicle Mobility Institute develops technology that will keep tomorrow’s off-road vehicles rolling along.

Similar Majors


Graduate Studies Series

Learn from our enrollment team members and other guests by attending quick and convenient 30-minute webinars we designed to highlight popular topics when starting grad school. Take a deep dive into specific areas of interest such as how to secure funding, how to ace your application, an overview of student services, and more!

Take the First Step Today

Receive information about enrolling in WPI’s new MS in Artificial Intelligence program.  

Refer a Friend

Do you have a friend, colleague, or family member who might be interested in Worcester Polytechnic Institute’s (WPI) graduate programs? Click below to tell them about our programs.


12 Best Artificial Intelligence Topics for Research in 2024

Explore the "12 Best Artificial Intelligence Topics for Research in 2024." Dive into the top AI research areas, including Natural Language Processing, Computer Vision, Reinforcement Learning, Explainable AI (XAI), AI in Healthcare, Autonomous Vehicles, and AI Ethics and Bias. Stay ahead of the curve and make informed choices for your AI research endeavours.

Table of Contents

1) Top Artificial Intelligence Topics for Research

     a) Natural Language Processing

     b) Computer Vision

     c) Reinforcement Learning

     d) Explainable AI (XAI)

     e) Generative Adversarial Networks (GANs)

     f) Robotics and AI

     g) AI in healthcare

     h) AI for social good

     i) Autonomous vehicles

     j) AI ethics and bias

     k) Future of AI

     l) AI and education

2) Conclusion

Top Artificial Intelligence Topics for Research   

This section of the blog will expand on some of the best Artificial Intelligence Topics for research.


Natural Language Processing   

Natural Language Processing (NLP) is centred around empowering machines to comprehend, interpret, and even generate human language. Within this domain, three distinctive research avenues beckon: 

1) Sentiment analysis: This entails the study of methodologies to decipher and discern emotions encapsulated within textual content. Understanding sentiments is pivotal in applications ranging from brand perception analysis to social media insights. 

2) Language generation: Generating coherent and contextually apt text is an ongoing pursuit. Investigating mechanisms that allow machines to produce human-like narratives and responses holds immense potential across sectors. 

3) Question answering systems: Constructing systems that can grasp the nuances of natural language questions and provide accurate, coherent responses is a cornerstone of NLP research. This facet has implications for knowledge dissemination, customer support, and more. 
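To ground the first of these avenues, here is a deliberately tiny, lexicon-based sentiment scorer in Python. It is an illustrative sketch only: the word lists below are invented for this example, and research-grade sentiment analysis relies on learned models rather than hand-written lexicons.

```python
# Toy lexicon-based sentiment scorer (illustration only; the word lists
# are made up for this example, not a real research lexicon).
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "sad"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("The new model is great, I love it!"))   # 1.0
print(sentiment_score("Terrible results, the demo was bad."))  # -1.0
```

Real systems must also handle context, negation ("not good"), and sarcasm, which is precisely why sentiment analysis remains an active research topic.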

Computer Vision   

Computer Vision, a discipline that bestows machines with the ability to interpret visual data, is replete with intriguing avenues for research: 

1) Object detection and tracking: The development of algorithms capable of identifying and tracking objects within images and videos finds relevance in surveillance, automotive safety, and beyond. 

2) Image captioning: Bridging the gap between visual and textual comprehension, this research area focuses on generating descriptive captions for images, catering to visually impaired individuals and enhancing multimedia indexing. 

3) Facial recognition: Advancements in facial recognition technology hold implications for security, personalisation, and accessibility, necessitating ongoing research into accuracy and ethical considerations. 
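As a hedged illustration of object detection, the sketch below runs a pretrained Faster R-CNN detector from torchvision over a single image. It assumes torch and torchvision are installed; "street.jpg" is a placeholder path, not a file provided with this post.

```python
# Sketch: single-image object detection with a pretrained torchvision model.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # inference mode

img = convert_image_dtype(read_image("street.jpg"), torch.float)  # CHW in [0, 1]
with torch.no_grad():
    (pred,) = model([img])  # the model returns one dict per input image

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.8:  # keep only confident detections
        print(label.item(), [round(v, 1) for v in box.tolist()], round(score.item(), 2))
```

Tracking extends this idea across video frames by associating detections over time.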

Reinforcement Learning   

Reinforcement Learning revolves around training agents to make sequential decisions in order to maximise rewards. Within this realm, three prominent Artificial Intelligence Topics emerge: 

1) Autonomous agents: Crafting AI agents that exhibit decision-making prowess in dynamic environments paves the way for applications like autonomous robotics and adaptive systems. 

2) Deep Q-Networks (DQN): Deep Q-Networks, a class of reinforcement learning algorithms, remain under active research for refining value-based decision-making in complex scenarios. 

3) Policy gradient methods: These methods, aiming to optimise policies directly, play a crucial role in fine-tuning decision-making processes across domains like gaming, finance, and robotics.  
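The value-update rule that Deep Q-Networks scale up with a neural network can be shown in miniature with a lookup table. Below is a minimal, self-contained Q-learning sketch on a toy five-state corridor; the environment and hyperparameters are invented purely for illustration.

```python
# Tabular Q-learning on a 5-state corridor; the reward sits at the right end.
import random

N_STATES, ACTIONS = 5, (0, 1)              # action 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1          # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def greedy(s):  # argmax over actions with random tie-breaking
    return max(ACTIONS, key=lambda a: (Q[s][a], random.random()))

for _ in range(500):                       # episodes
    s = 0
    for _ in range(100):                   # cap episode length
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r = step(s, a)
        # Move Q(s, a) toward the bootstrapped target r + GAMMA * max_a' Q(s2, a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        if s2 == N_STATES - 1:
            break
        s = s2

print([round(max(q), 2) for q in Q])  # learned state values rise toward the goal
```

A DQN replaces the table Q with a neural network so the same update works in state spaces far too large to enumerate.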

Introduction To Artificial Intelligence Training

Explainable AI (XAI)   

The pursuit of Explainable AI seeks to demystify the decision-making processes of AI systems. This area comprises Artificial Intelligence Topics such as: 

1) Model interpretability: Unravelling the inner workings of complex models to elucidate the factors influencing their outputs, thus fostering transparency and accountability. 

2) Visualising neural networks: Transforming abstract neural network structures into visual representations aids in comprehending their functionality and behaviour. 

3) Rule-based systems: Augmenting AI decision-making with interpretable, rule-based systems holds promise in domains requiring logical explanations for actions taken. 
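One widely used, model-agnostic interpretability tool is permutation feature importance: shuffle one feature at a time and measure how much performance drops. The hedged sketch below applies scikit-learn's implementation to a synthetic dataset.

```python
# Permutation feature importance on a synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)

for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")  # higher = model leans on it more
```

Importance scores like these are a starting point for transparency rather than a full explanation; closing that gap is exactly what XAI research pursues.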

Generative Adversarial Networks (GANs)   

The captivating world of Generative Adversarial Networks (GANs) unfolds through the interplay of generator and discriminator networks, birthing remarkable research avenues: 

1) Image generation: Crafting realistic images from random noise showcases the creative potential of GANs, with applications spanning art, design, and data augmentation. 

2) Style transfer: Enabling the transfer of artistic styles between images, merging creativity and technology to yield visually captivating results. 

3) Anomaly detection: GANs find utility in identifying anomalies within datasets, bolstering fraud detection, quality control, and anomaly-sensitive industries. 
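The generator/discriminator interplay fits in a few dozen lines when the "data" is just a one-dimensional Gaussian. The PyTorch sketch below is illustrative only; image-scale GANs use convolutional networks and substantially more careful training.

```python
# Minimal GAN: a generator learns to mimic samples from N(mean=3, std=2).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(2000):
    real = torch.randn(64, 1) * 2 + 3          # target distribution
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean():.2f}, std {samples.std():.2f}")  # should drift toward 3 and 2
```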

Robotics and AI   

The synergy between Robotics and AI is a fertile ground for exploration, with Artificial Intelligence Topics such as: 

1) Human-robot collaboration: Research in this arena strives to establish harmonious collaboration between humans and robots, augmenting industry productivity and efficiency. 

2) Robot learning: By enabling robots to learn and adapt from their experiences, researchers foster robots' autonomy and their ability to handle diverse tasks. 

3) Ethical considerations: Delving into the ethical implications surrounding AI-powered robots helps establish responsible guidelines for their deployment. 

AI in healthcare   

AI presents a transformative potential within healthcare, spurring research into: 

1) Medical diagnosis: AI aids in accurately diagnosing medical conditions, revolutionising early detection and patient care. 

2) Drug discovery: Leveraging AI for drug discovery expedites the identification of potential candidates, accelerating the development of new treatments. 

3) Personalised treatment: Tailoring medical interventions to individual patient profiles enhances treatment outcomes and patient well-being. 

AI for social good   

Harnessing the prowess of AI for Social Good entails addressing pressing global challenges: 

1) Environmental monitoring: AI-powered solutions facilitate real-time monitoring of ecological changes, supporting conservation and sustainable practices. 

2) Disaster response: Research in this area bolsters disaster response efforts by employing AI to analyse data and optimise resource allocation. 

3) Poverty alleviation: Researchers contribute to humanitarian efforts and socioeconomic equality by devising AI solutions to tackle poverty. 


Autonomous vehicles   

Autonomous Vehicles represent a realm brimming with potential and complexities, necessitating research in Artificial Intelligence Topics such as: 

1) Sensor fusion: Integrating data from diverse sensors enhances perception accuracy, which is essential for safe autonomous navigation. 

2) Path planning: Developing advanced algorithms for path planning ensures optimal routes while adhering to safety protocols. 

3) Safety and ethics: Ethical considerations, such as programming vehicles to make difficult decisions in potential accident scenarios, require meticulous research and deliberation. 
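Path planning, at its simplest, is shortest-path search over a map of free and blocked space. The toy sketch below plans on a hand-made occupancy grid with breadth-first search; production planners for vehicles use A*, lattice, or sampling-based methods over continuous maps, so treat this only as the core idea.

```python
# Shortest obstacle-free route on a toy occupancy grid via BFS.
from collections import deque

GRID = ["....#",
        ".##.#",
        "....#",
        ".#...",
        "...#."]          # '.' = free cell, '#' = obstacle

def plan(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:                  # reconstruct the path backwards
            path = []
            while (r, c) != start:
                path.append((r, c))
                r, c = prev[(r, c)]
            return [start] + path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and GRID[nr][nc] == "." and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None                             # no route exists

print(plan((0, 0), (4, 4)))  # list of grid cells from start to goal
```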

AI ethics and bias   

Ethical underpinnings in AI drive research efforts in these directions: 

1) Fairness in AI: Ensuring AI systems remain impartial and unbiased across diverse demographic groups. 

2) Bias detection and mitigation: Identifying and rectifying biases present within AI models guarantees equitable outcomes. 

3) Ethical decision-making: Developing frameworks that imbue AI with ethical decision-making capabilities aligns technology with societal values. 
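A first, very coarse bias check is demographic parity: compare the model's positive-prediction rate across groups. The sketch below uses made-up predictions purely to show the computation; real audits combine several metrics (e.g. equalized odds, calibration), since parity alone can be misleading.

```python
# Demographic-parity gap on invented predictions for two groups.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical model outputs
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(group):
    idx = [i for i, g in enumerate(groups) if g == group]
    return sum(predictions[i] for i in idx) / len(idx)

rate_a, rate_b = positive_rate("a"), positive_rate("b")
print(f"group a: {rate_a:.2f}  group b: {rate_b:.2f}  gap: {abs(rate_a - rate_b):.2f}")
```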

Future of AI  

The vanguard of AI beckons researchers to explore these horizons: 

1) Artificial General Intelligence (AGI): Speculating on the potential emergence of AI systems capable of emulating human-like intelligence opens dialogues on the implications and challenges. 

2) AI and creativity: Probing the interface between AI and creative domains, such as art and music, unveils the coalescence of human ingenuity and technological prowess. 

3) Ethical and regulatory challenges: Researching the ethical dilemmas and regulatory frameworks underpinning AI's evolution fortifies responsible innovation. 

AI and education   

The intersection of AI and Education opens doors to innovative learning paradigms: 

1) Personalised learning: Developing AI systems that adapt educational content to individual learning styles and paces. 

2) Intelligent tutoring systems: Creating AI-driven tutoring systems that provide targeted support to students. 

3) Educational data mining: Applying AI to analyse educational data for insights into learning patterns and trends. 


Conclusion  

The domain of AI is ever-expanding, rich with intriguing topics that beckon researchers to explore, question, and innovate. Through the pursuit of these twelve diverse Artificial Intelligence Topics, we pave the way for not only technological advancement but also a deeper understanding of the societal impact of AI. By delving into these realms, researchers stand poised to shape the trajectory of AI, ensuring it remains a force for progress, empowerment, and positive transformation in our world.


New Jersey Institute of Technology

University Catalog 2024-2025



M.S. in Artificial Intelligence


The M.S. program in Artificial Intelligence acclimates students to the ongoing AI revolution that has already produced computer programs with problem-solving and content-generating abilities that complement and enhance human abilities. The program offers theoretical and practical knowledge in various areas of AI, including Natural Language Understanding and Generation, Image Understanding, Reasoning, and Planning. It empowers students to apply AI techniques in a wide range of application domains. 

Prerequisites

Applicants should have a bachelor's degree in the general area of computing from an accredited university. Applicants with a bachelor's degree in STEM or related professional experience can start with the graduate certificate and then apply to the M.S. program. Further information can be found on the program's webpage.

Degree Requirements 

The program requires the completion of 30 credits. These are satisfied by taking 10 courses, as indicated in the course list below.

Students in the Master of Science in Artificial Intelligence (MS-AI) program must successfully complete 30 credits based on any of the following options:

  • Courses only (30 credits)
  • Courses (27 credits) + MS Project (3 credits)
  • Courses (24 credits) + MS Thesis (6 credits)

Independent of the chosen option, 4 out of 7 core courses are required (detailed below).

If a student chooses the MS thesis option, the thesis must be related to Artificial Intelligence and requires approval from the Program Director.

Students may choose an elective outside the list after approval of their respective advisor.

Course List

Core Courses (select at least four of the following):

  • Introduction to Big Data
  • Reinforcement Learning
  • Artificial Intelligence
  • Machine Learning
  • Deep Learning
  • Natural Language Processing
  • Trustworthy Artificial Intelligence

After the four core courses are completed, any of the remaining core courses listed can count towards the elective requirements.

Elective Courses:

  • Data Mining
  • Image Processing and Analysis
  • Computer Vision
  • Advanced Machine Learning
  • High Performance Data Analytics
  • Pattern Recognition and Applications
  • Python and Mathematics for Machine Learning
  • Selected Topics in Data Science
  • Information Theory
  • Probability Distributions
  • Introduction to Biostatistics
  • Statistical Inference
  • Statistical Methods in Data Science
  • Introduction to Robotics
  • Deep Learning in Business

Project and Thesis Courses:

  • Master's Project
  • Master's Thesis

DS 637 is recommended as an introductory course, offering a review of mathematics for machine learning to students with a limited background in mathematics or programming.

Master's Project and Thesis Policies

The contents of this section apply only to students who elect to do a DS 700B Master's Project or a DS 701B Master's Thesis in topics related to Artificial Intelligence.

Students must first find a research advisor, who must be a tenure-track faculty member of the DS department. Tenure-track faculty are department members, including those who hold joint appointments, with the rank of Assistant Professor, Associate Professor, Professor, or Distinguished Professor.

In order to find a research advisor, students are encouraged to attend special presentations offered by the department or to directly contact professors. Professors may not always have availability for conducting an MS project/thesis. Students are therefore encouraged to start looking for an advisor as early as possible, especially if they are considering pursuing a Master’s Thesis that takes two semesters.

Students must work in close coordination with their research advisor, who will determine the topic of the project/thesis and guide them to take specific elective courses that will prepare them for the research.

Registration:

  • Master’s Project: With permission of their research advisor, students must register in the DS 700B Master's Project course. To register for the Master's Project, students must have completed at least 9 credits and must be in good standing.
  • Master’s Thesis: With permission of their research advisor, students must register in the DS 701B Master's Thesis course. 
  • They must receive a satisfactory (S) grade in DS 700B before registering for DS 701B in the immediately following semester, with the same advisor. The MS thesis topic should be a continuation of the work done in DS 700B.

Thesis Requirements:

  • An MS Thesis Committee must be formed, according to the requirements set forth by the Office of Graduate Studies.
  • A written thesis must be submitted. The thesis must adhere to the style requirements set forth by the Office of Graduate Studies.
  • An oral defense is required. The defense must take place before the last day of examinations.


Artificial Intelligence

Thesis project

In the final thesis project, the student carries out a research project under the supervision of one of the staff members of the research groups offering the AI programme. The project can be done based at Utrecht University, at a company or research institute, or at a foreign university (see also: ‘stay abroad - traineeship’).

Before starting the thesis project, students are strongly advised to first attend the thesis information session meeting, which is offered at the start of each teaching period. See course INFOMTIMAI for more info.

When looking for a project, please check the following sources.

  • Konjoin always has a number of AI projects.
  • Jobteaser also has interesting external internships for AI students.

General description

The AI Thesis Project is split into a 14 EC project proposal phase (INFOMAI1) and a 30 EC thesis phase (INFOMAI2). The thesis project takes about 8 months (three periods). The set-up phase needed to arrange your project is not counted.

The thesis project consists of a project idea, a UU graduation supervisor, and a graduation project facilitator. The project facilitator can be either a company or the University. Original ideas from students are welcome, as long as they are aligned with the supervisors' research interests and/or proposed projects.

For a thesis project, the student always needs a supervisor from one of the research groups of the UU offering the AI programme. If the final project is conducted within a company or external institute, both a local supervisor within the company/institute and a supervisor of the AI programme teaching staff monitor and guide the student. 

When can a thesis be started? When all courses are successfully completed, with the exception of Dilemmas of the Scientist (FI-MHPSDL1 and FI-MHPSDL2), whose second workshop (FI-MHPSDL2) you can do during your thesis process. Further exceptions can be granted by the AI programme coordinator for students with one pending course. Note that you should start looking for a supervisor and a subject before you have finished all your courses (see “Set up” below).

Where do I start? Read the information on the various stages of the thesis project below. If you have any questions not covered here, contact the programme coordinator ([email protected]).

How long does the thesis take? Normally, a thesis project (phase 1 + phase 2) runs for 3 periods/terms (see the schedules). However, holidays, courses, or other activities may lead to a thesis project that takes slightly longer. Please see below what to do when your thesis is delayed and you have to apply for a thesis deadline extension (part 1 and/or part 2).

Previous theses. To get an overview of what an AI thesis looks like, you can consult previous theses online.

Learning goals. After completing your thesis project, you will:

  • have advanced knowledge about a specific subject within AI
  • be able to place findings on a specific subject within the broader, interdisciplinary field of AI
  • be able to independently perform a critical literature study
  • be able to formulate a research question of interest to AI and a plan / method to answer this research question
  • be able to perform scientific research according to a predetermined plan and a standard method within AI
  • be able to report the research findings in the form of a scientific thesis
  • be able to report the research findings by means of an oral presentation

Set up

This preliminary step is executed before the official start of Phase 1. The duration largely depends on how quickly a supervisor is found and a topic is agreed upon. This part is excluded from the duration of the thesis project.

1. Find a project and a supervisor. You can do an external or an internal (UU) project. The following tips might come in handy when looking for a project.

  • Think about the courses you found interesting and ask the lecturers of these courses if they have/know of any projects.
  • Jobteaser also has interesting external internships for AI students. 

Note that any topic has to be agreed with the UU staff member who will act as first supervisor. Arrange meetings with staff members to discuss possible options, based on their research interests (look at their webpages or their Google Scholar profiles), or ask the Programme Coordinator ([email protected]). If unsure about possible topics, please arrange a meeting with the Programme Coordinator. Students can also try to arrange a project that fits within an internship with a company. Any project, however, requires a first supervisor from the department who guarantees the scientific quality of the thesis project, so it is advisable to talk to potential supervisors and/or the graduation coordinator before agreeing on an internship.

2. Define your project. Together with the first supervisor, describe your project's title, problem, aims, and research goals. Come up with a short textual description (about 200 words). Also make clear arrangements with your first supervisor concerning planning, holidays, supervision meetings, and so forth, and make sure you have a clear understanding regarding deadlines and any extra work to be done during the thesis project. Normally, a thesis project runs for 3 periods/terms, but you can set any reasonable deadline in agreement with your supervisor. Please see below what to do when your thesis is delayed and you have to apply for a thesis deadline extension (part 1 and/or part 2).

3. Ensure adherence to ethics and privacy regulations - Quick Scan. From Period 2 of 2022-23, all Master AI thesis projects require ethics and privacy approval. For projects that do not involve human users and data privacy issues this will be a very brief and straightforward process, but you still need to complete an ethics checklist. If you are doing your project with a supervisor in a department that already has an ethics approval process in place (such as Cognitive Psychology), ask the supervisor what you need to do in order to obtain ethics approval. Otherwise, please inform your supervisor that you need to obtain ethics and privacy approval. Go to the website that contains the ethics checklist and sample information sheets and consent forms: https://www.uu.nl/en/research/institute-of-information-and-computing-sciences/ethics-and-privacy. First, download the Word form and discuss with your supervisor how to fill it in. Then fill in the Qualtrics form, entering [email protected] as the moderator email.

4. Work placement agreement. If you conduct a project outside UU, the GSNS Work Placement Agreement (WPA) should be filled in and signed by the student, the company supervisor, and the Science Research Project Coordinator. Deviations from the standard contract should be discussed with the Science Research Project Coordinator.

You need to fill out and upload your WPA with your Research Project application form (see next step) in OSIRIS student.

5. Formalize the start of your Research Project by submitting the Research Project application form.

Use Osiris Student (select 'MyCases', 'Start Case', 'Research Project GSNS') to submit your research project application form; if applicable, you will also upload the signed Work Placement Agreement with your application form in OSIRIS.

Important: in order to apply completely and correctly, you must have discussed the project setup with your intended project supervisor beforehand! We advise you to study the request form prior to discussing it with your supervisor, or to fill it out together, to make sure you obtain all of the information required.

After submitting your application form in OSIRIS, your form will be forwarded to your 1st and 2nd Examiner (supervisors), master’s programme coordinator, the Board of Examiners and Student Affairs for checks and approvals. You may be asked for modifications, should they find any problems with the form. 

Please note. You cannot register yourself in OSIRIS for the relevant research project courses (INFOMAI1 and INFOMAI2). You will be automatically registered for part 1 of the project upon approval of the Research Project Application Form.

Phase 1 - Project proposal

This phase comprises 14 EC (i.e. 10 weeks of full-time work) and is intended for you to do a preliminary study (usually in the form of a literature study) and to propose and plan your research. Importantly, this phase ends with a go/no-go decision towards Phase 2. You are expected to deliver a research proposal consisting of the following:

  • A literature study section, summarizing works that are relevant to your research. 
  • Well-formulated research question(s). 
  • A plan for the second part of the thesis.

Additionally, depending on the nature of the project, your supervisor may require you to perform some initial research work in Phase 1, either to provide a convincing argument for the prospect and feasibility of your Phase 2, or, for efficiency, to already do some work of Phase 2, e.g. developing an initial theory or building a first prototype of an algorithm. If such work is required, make an agreement with your supervisor on the scope of this work.

At the end of Phase 1 the supervisor(s) will make a go/no-go decision. This decision, in terms of pass or not pass, will be entered in Osiris. Phase 1 assessment criteria:

  • Scientific quality. This concerns the quality of the literature study, the relevance and impact of the research questions, and the merit of the proposed research method. 
  • Writing skills. This concerns the quality of your writing, use of English, textual structure, and coherence/consistency of your text. 
  • Planning. This concerns the clarity and feasibility of the proposed planning. 
  • The quality of additional work, if such is required.

An example assessment form with more detailed criteria is available. Please use this form only as a discussion piece and do not send in paper or scanned forms.

Phase 2 - Thesis

The second part comprises 30 EC (i.e. 21 weeks of full-time work). You will complete (at least) the following items: 

  • Perform and complete your research according to your plan (Phase 1). 
  • Write your thesis that presents your research and its results. 
  • Present and defend your results and conclusions. You are asked to prepare a presentation about your research that is understandable by fellow students. The defence will be 45 minutes long: 30 minutes for your presentation and 15 minutes for questions. 

Content of the thesis.  In addition to the main text describing the research, the master thesis should at least contain: 

  • a front page, containing: name of the student, name of the supervisors, student number, date, name of the program (master Artificial Intelligence, Utrecht University); 
  • an abstract; 
  • an introduction and a conclusion;  
  • a brief discussion of the relevance of the thesis topic for the field of AI; 
  • a list of references.   

Please discuss the exact requirements for your thesis with your daily supervisor/first examiner at the beginning of your project.  

Phase 2 assessment criteria. Your thesis is assessed using the following criteria: 

  • Project process (30%). This concerns your ability to work independently, to take initiative, to position your work in a broader context, to adapt to new requirements and developments, and to finish the thesis on time. 
  • Project report (30%). This concerns the ability to clearly formulate problems, to summarize the results, to compare them with related scientific work elsewhere, and to suggest future research lines. This also concerns clear, consistent, and unambiguous use of language in the thesis. The text should give the readers confidence that you understand the chronology, structure, and logical entities in your own text, and thus know what you write. 
  • Project results (30%). This concerns the level and importance of your results. Are the results publishable as a scientific paper? The difficulty of the problem that you solve also plays an important role, as does the amount/extent of the work you carry out. Important aspects are: the effectiveness of the chosen approach, the completeness and precision of the literature study, arguments for the choices made, insight into the limitations of the chosen approach, proper interpretation of the results achieved, level of abstraction, and convincing argumentation, proofs, or statistical analysis.
  • Project presentation (10%). The ability to orally present your project and its results clearly and concisely. 
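
Purely as an illustration of how these weights combine (the grades below are invented, and the actual grading and rounding rules are determined by your examiners), the final grade is a weighted average:

```python
# Illustrative only: how the 30/30/30/10 weights combine into a final grade.
# The example grades are made up; examiners decide the real rules.
weights = {"process": 0.30, "report": 0.30, "results": 0.30, "presentation": 0.10}
grades = {"process": 7.5, "report": 8.0, "results": 7.0, "presentation": 8.5}

final_grade = sum(weights[part] * grades[part] for part in weights)
print(round(final_grade, 1))  # 7.6
```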

An example assessment form with more detailed criteria is available. Please use this form only as a discussion piece and do not send in paper or scanned forms.

Phase 2 - Wrap up

When the thesis is approaching completion (i.e. when the supervisors think so), it is time to wrap up the project and graduate.

  • Set a date for the graduation presentation: both supervisors should agree on the date, including the time.

  • Arrange a (virtual) room for the defence: the public defence can take place in Teams. If desired by the candidate and/or the supervisors, you can also defend your thesis in a lecture room on campus, ideally with a livestream or in hybrid form so that e.g. fellow students or friends can also watch online. You can create a Teams meeting yourself, and send an e-mail to the secretariat ( [email protected] ) to arrange a suitable room for your presentation. Please make sure to include the time, date, title of the thesis, supervisor, and the expected number of attendees.

  • Inform the AI coordinator ( [email protected] ) about the details of your defence (title, abstract, date, time, room and/or Teams link). The coordinator will announce the defence on Teams and via the mailing list.

  • Thesis defence: the student gives a presentation of 30 minutes, followed by a question-and-answer session that typically lasts about 15-20 minutes. Your first and second supervisor will decide on your grade and announce it after your presentation.

  • Upload thesis to Osiris Student: after the defence, the student must upload the final version of their thesis through Osiris Student > ‘My Cases’.

  • Archiving and publishing the thesis in the Thesis Archive: you will be asked once more to upload the final version of your thesis through Osiris Student, but this time for archiving and publishing purposes. The Case will not be available by default in Osiris Student; you will receive an email as soon as the Case is available to you. More information on thesis archiving and publication can be found here.

Graduation checks and ceremony

The Student Desk at Student Affairs keeps track of your study progress in Osiris. When Osiris indicates that you have completed all the required elements of your degree, your file is forwarded to the Board of Examiners. These checks only occur around the 15th of each month. Therefore, if you wish to graduate by the end of a given month, please ensure you have completed all elements of your degree before the 15th of that month, so that all your credits are registered in Osiris. This also includes the upload of your final thesis.

The Board of Examiners then checks whether you meet all examination requirements. Following the Board's approval, your graduation date will be emailed to your UU email account.

Please DO NOT terminate your enrolment in Studielink until the Student Desk has informed you about the decision of the Board of Examiners and you have received your graduation date. For further information, please check the graduation page.

What do I need to do if my Research Project gets delayed?

Students with a Research Project from 1 September 2024:

This is what you need to do if you foresee a delay of your Research Project, or if an extension of or addition to the project is necessary.

The protocol*

  • The student and examiners need to finish the Research Project before the end date specified in the Osiris Case ('Osiris Zaak'). The end date is the last date by which the final grade is determined. The end date is based on full-time study.
  • If the end date cannot be met, the student, first and second examiner agree on a new end date. This new end date will be passed on to the Board of Examiners by the student via Osiris Student > ‘My Cases’ > ‘Start Case’ > ‘Request to the Board of Examiners’ > request type ‘New end date thesis project’. This needs to happen before the initial end date is reached. Valid reasons for agreeing on a new end date can be both personal circumstances and research-related circumstances.
  • The student and examiners can insist that the examination takes place on the agreed end date. If the other party does not agree with this, they can turn to the programme leader. A student who, due to circumstances beyond their control, cannot be present during the examination can ask the Board of Examiners for a special testing provision.

The student and/or examiners can turn to the Board of Examiners in cases of disagreement on the implementation of this protocol or other conflicts not covered by this protocol. In these cases, the Board of Examiners decides in line with the spirit of this protocol.

*This protocol is translated from the Dutch version in the EER/OER and no rights can be derived from any errors in translation.

Students with a Research Project from before 1 September 2024:

This is what you need to do if you, due to circumstances beyond your control, foresee a delay of your Research Project in Part 1 or Part 2.

The procedure:

  • Discuss this first with your supervisor(s). If all agree, a realistic new end date will be set for the Research Project.
  • After that, contact the Study Advisor and the programme coordinator and ask for consent to determine a new end date for your thesis.
  • Apply to the Board of Examiners for an extension of the Research Project deadline for Part 1 or Part 2 via Osiris Student > ‘My Cases’ > ‘Start Case’ > ‘Request to the Board of Examiners’ > request type 7 ‘Delay of research or thesis project’.

What information is needed for the application form:

  • A statement from the Study Advisor
  • A copy of an email in which the supervisors support the request for a deadline extension
  • A proposed new deadline
  • A short statement to support your request


Artificial Intelligence

Completed Theses

State space search solves navigation tasks and many other real-world problems. Heuristic search, especially greedy best-first search, is one of the most successful algorithms for state space search. We improve the state of the art in heuristic search in three directions.

In Part I, we present methods to train neural networks as powerful heuristics for a given state space. We present a universal approach to generate training data using random walks from a (partial) state. We demonstrate that our heuristics trained for a specific task are often better than heuristics trained for a whole domain. We show that the performances of the trained heuristics are highly complementary; there is no clear pattern as to which trained heuristic to prefer for a specific task. In general, model-based planners still outperform planners with trained heuristics, but our approaches exceed the model-based algorithms in the Storage domain. To our knowledge, a learning-based planner has exceeded the state-of-the-art model-based planners only once before, in the Spanner domain.
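
As a rough illustration of the random-walk idea (a minimal sketch under our own assumptions, not the thesis's actual pipeline; `successors` is an assumed callback returning the neighbour states of a state):

```python
import random

def random_walk_samples(start_state, successors, walk_length, num_walks):
    """Sketch: label visited states with the number of random-walk steps
    taken from `start_state`, as a cheap distance estimate to use as a
    regression target when training a neural network heuristic."""
    samples = []
    for _ in range(num_walks):
        state = start_state
        for step in range(1, walk_length + 1):
            neighbours = successors(state)
            if not neighbours:
                break
            state = random.choice(neighbours)
            samples.append((state, step))  # (network input, target value)
    return samples
```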

A priori, it is unknown whether a heuristic, or in the more general case a planner, performs well on a task. Hence, we trained online portfolios to select the best planner for a task. Today, all online portfolios are based on handcrafted features. In Part II, we present new online portfolios based on neural networks, which receive the complete task as input rather than just a few handcrafted features. Additionally, our portfolios can reconsider their choices. Both extensions greatly improve the state of the art of online portfolios. Finally, we show that explainable machine learning techniques, as an alternative to neural networks, also yield good online portfolios. Additionally, we present methods to improve our trust in their predictions.

Even if we select the best search algorithm, we cannot solve some tasks in reasonable time. We can speed up the search if we know how it behaves in the future. In Part III, we inspect the behavior of greedy best-first search with a fixed heuristic on simple tasks of a domain to learn its behavior for any task of the same domain. Once greedy best-first search has expanded a progress state, it only expands states with lower heuristic values. We learn to identify progress states and present two methods to exploit this knowledge. Building upon this, we extract the bench transition system of a task and generalize it in such a way that we can apply it to any task of the same domain. We can use this generalized bench transition system to split a task into a sequence of simpler searches.

In all three research directions, we contribute new approaches and insights to the state of the art, and we indicate interesting topics for future work.

Greedy best-first search (GBFS) is a sibling of A* in the family of best-first state-space search algorithms. While A* is guaranteed to find optimal solutions of search problems, GBFS does not provide any guarantees but typically finds satisficing solutions more quickly than A*. A classical result of optimal best-first search shows that A* with an admissible and consistent heuristic expands every state whose f-value is below the optimal solution cost and no state whose f-value is above the optimal solution cost. Theoretical results of this kind are useful for the analysis of heuristics in different search domains and for the improvement of algorithms. For satisficing algorithms a similarly clear understanding is currently lacking. We examine the search behavior of GBFS in order to make progress towards such an understanding.
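
For reference, a minimal sketch of GBFS itself (assuming hashable states and `successors`, `h`, `is_goal` callbacks; the tie-breaking among equal heuristic values, which the analysis below reasons about, is made explicit here as FIFO order):

```python
import heapq
from itertools import count

def gbfs(initial, is_goal, successors, h):
    """Greedy best-first search: always expand the open state with the
    lowest heuristic value, ignoring path costs entirely."""
    tiebreak = count()  # FIFO tie-breaking among states with equal h-values
    open_list = [(h(initial), next(tiebreak), initial, [initial])]
    closed = set()
    while open_list:
        _, _, state, path = heapq.heappop(open_list)
        if state in closed:
            continue
        closed.add(state)
        if is_goal(state):
            return path
        for succ in successors(state):
            if succ not in closed:
                heapq.heappush(open_list, (h(succ), next(tiebreak), succ, path + [succ]))
    return None  # exhausted the state space without reaching a goal
```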

We introduce the concept of high-water mark benches, which separate the search space into areas that are searched by GBFS in sequence. High-water mark benches allow us to exactly determine the set of states that GBFS expands under at least one tie-breaking strategy. We show that benches contain craters. Once GBFS enters a crater, it has to expand every state in the crater before being able to escape.

Benches and craters allow us to characterize the best-case and worst-case behavior of GBFS in given search instances. We show that computing the best-case or worst-case behavior of GBFS is NP-complete in general but can be computed in polynomial time for undirected state spaces.

We present algorithms for extracting the set of states that GBFS potentially expands and for computing the best-case and worst-case behavior. We use the algorithms to analyze GBFS on benchmark tasks from planning competitions under a state-of-the-art heuristic. Experimental results reveal interesting characteristics of the heuristic on the given tasks and demonstrate the importance of tie-breaking in GBFS.

Classical planning tackles the problem of finding a sequence of actions that leads from an initial state to a goal. Over the last decades, planning systems have become significantly better at answering the question whether such a sequence exists by applying a variety of techniques which have become more and more complex. As a result, it has become nearly impossible to formally analyze whether a planning system is actually correct in its answers, and we need to rely on experimental evidence.

One way to increase trust is the concept of certifying algorithms, which provide a witness which justifies their answer and can be verified independently. When a planning system finds a solution to a problem, the solution itself is a witness, and we can verify it by simply applying it. But what if the planning system claims the task is unsolvable? So far there was no principled way of verifying this claim.

This thesis contributes two approaches to create witnesses for unsolvable planning tasks. Inductive certificates are based on the idea of invariants. They argue that the initial state is part of a set of states that we cannot leave and that contains no goal state. In our second approach, we define a proof system that proves in an incremental fashion that certain states cannot be part of a solution until it has proven that either the initial state or all goal states are such states.

Both approaches are complete in the sense that a witness exists for every unsolvable planning task, and they can be verified efficiently (with respect to the size of the witness) by an independent verifier if certain criteria are met. To show their applicability to state-of-the-art planning techniques, we provide an extensive overview of how these approaches can cover several search algorithms, heuristics and other techniques. Finally, we show with an experimental study that generating and verifying these explanations is not only theoretically possible but also practically feasible, thus making a first step towards fully certifying planning systems.

Heuristic search with an admissible heuristic is one of the most prominent approaches to solving classical planning tasks optimally. In the first part of this thesis, we introduce a new family of admissible heuristics for classical planning, based on Cartesian abstractions, which we derive by counterexample-guided abstraction refinement. Since one abstraction usually is not informative enough for challenging planning tasks, we present several ways of creating diverse abstractions. To combine them admissibly, we introduce a new cost partitioning algorithm, which we call saturated cost partitioning. It considers the heuristics sequentially and uses the minimum amount of costs that preserves all heuristic estimates for the current heuristic before passing the remaining costs to subsequent heuristics until all heuristics have been served this way.
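
The sequential procedure can be sketched as follows (a schematic under our own assumptions; `compute_estimates` and `saturated_costs` are hypothetical method names, not an actual planner API):

```python
def saturated_cost_partitioning(heuristics, costs):
    """Sketch of saturated cost partitioning as described above: serve the
    heuristics in order, keep for each only the minimum costs that preserve
    its estimates, and pass the remaining costs on to the next heuristic."""
    remaining = dict(costs)  # operator -> cost still available
    components = []
    for h in heuristics:
        estimates = h.compute_estimates(remaining)  # estimates under remaining costs
        saturated = h.saturated_costs(estimates)    # minimum costs preserving them
        components.append(estimates)
        for op in remaining:
            remaining[op] -= saturated[op]          # leftover costs for later heuristics
    return components  # summing the components per state stays admissible
```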

In the second part, we show that saturated cost partitioning is strongly influenced by the order in which it considers the heuristics. To find good orders, we present a greedy algorithm for creating an initial order and a hill-climbing search for optimizing a given order. Both algorithms make the resulting heuristics significantly more accurate. However, we obtain the strongest heuristics by maximizing over saturated cost partitioning heuristics computed for multiple orders, especially if we actively search for diverse orders.

The third part provides a theoretical and experimental comparison of saturated cost partitioning and other cost partitioning algorithms. Theoretically, we show that saturated cost partitioning dominates greedy zero-one cost partitioning. The difference between the two algorithms is that saturated cost partitioning opportunistically reuses unconsumed costs for subsequent heuristics. By applying this idea to uniform cost partitioning we obtain an opportunistic variant that dominates the original. We also prove that the maximum over suitable greedy zero-one cost partitioning heuristics dominates the canonical heuristic and show several non-dominance results for cost partitioning algorithms. The experimental analysis shows that saturated cost partitioning is the cost partitioning algorithm of choice in all evaluated settings and it even outperforms the previous state of the art in optimal classical planning.

Classical planning is the problem of finding a sequence of deterministic actions in a state space that lead from an initial state to a state satisfying some goal condition. The dominant approach to optimally solve planning tasks is heuristic search, in particular A* search combined with an admissible heuristic. While there exist many different admissible heuristics, we focus on abstraction heuristics in this thesis, and in particular, on the well-established merge-and-shrink heuristics.

Our main theoretical contribution is to provide a comprehensive description of the merge-and-shrink framework in terms of transformations of transition systems. Unlike previous accounts, our description is fully compositional, i.e. can be understood by understanding each transformation in isolation. In particular, in addition to the name-giving merge and shrink transformations, we also describe pruning and label reduction as such transformations. The latter is based on generalized label reduction, a new theory that removes all of the restrictions of the previous definition of label reduction. We study the four types of transformations in terms of desirable formal properties and explain how these properties transfer to heuristics being admissible and consistent or even perfect. We also describe an optimized implementation of the merge-and-shrink framework that substantially improves the efficiency compared to previous implementations.

Furthermore, we investigate the expressive power of merge-and-shrink abstractions by analyzing factored mappings, the data structure they use for representing functions. In particular, we show that there exist certain families of functions that can be compactly represented by so-called non-linear factored mappings but not by linear ones.

On the practical side, we contribute several non-linear merge strategies to the merge-and-shrink toolbox. In particular, we adapt a merge strategy from model checking to planning, provide a framework to enhance existing merge strategies based on symmetries, devise a simple score-based merge strategy that minimizes the maximum size of transition systems of the merge-and-shrink computation, and describe another framework to enhance merge strategies based on an analysis of causal dependencies of the planning task.

In a large experimental study, we show the evolution of the performance of merge-and-shrink heuristics on planning benchmarks. Starting with the state of the art before the contributions of this thesis, we subsequently evaluate all of our techniques and show that state-of-the-art non-linear merge-and-shrink heuristics improve significantly over the previous state of the art.

Admissible heuristics are the main ingredient when solving classical planning tasks optimally with heuristic search. Higher admissible heuristic values are more accurate, so combining them in a way that dominates their maximum and remains admissible is an important problem.

The thesis makes three contributions in this area. Extensions to cost partitioning (a well-known heuristic combination framework) make it possible to produce higher estimates from the same set of heuristics. The new heuristic family called operator-counting heuristics unifies many existing heuristics and offers a new way to combine them. Another new family of heuristics called potential heuristics allows casting the problem of finding a good heuristic as an optimization problem.

Both operator-counting and potential heuristics are closely related to cost partitioning. They offer a new look on cost partitioned heuristics and already sparked research beyond their use as classical planning heuristics.

Master's theses

Optimal planning is an ongoing topic of research and requires efficient heuristic search algorithms. One way of calculating such heuristics is through the use of Linear Programs (LPs) and solvers thereof. This thesis investigates the efficiency of calculating different LP-based heuristics, focusing on how different LP solving strategies and solver settings impact performance. Using the Fast Downward planning system and a comprehensive benchmark set of planning tasks, we conducted a series of experiments to determine the effectiveness of the primal and dual simplex methods and the primal-dual logarithmic barrier method. Our results show that the choice of the LP solver and the application of specific solver settings influence the efficiency of calculating the required heuristics, and that the default settings of CPLEX are not optimal in some cases and can be improved by specifying an LP solver or using other non-default solver settings. This thesis lays the groundwork for future research on using different LP solving algorithms and solver settings in the context of LP-based heuristic search in optimal planning.

Classical planning tasks are typically formulated in PDDL. Some of them can be described more concisely using derived variables. Contrary to basic variables, their values cannot be changed by operators and are instead determined by axioms which specify conditions under which they take a certain value. Planning systems often support axioms in their search component, but their heuristics’ support is limited or nonexistent. This leads to decreased search performance with tasks that use axioms. We compile axioms away using our implementation of a known algorithm in the Fast Downward planner. Our results show that the compilation has a negative impact on search performance with its only benefit being the ability to use heuristics that have no axiom support. As a compromise between performance and expressivity, we identify axioms of a simple form and devise a compilation for them. We compile away all axioms in several of the tested domains without a decline in search performance.

The International Planning Competitions (IPCs) serve as a testing suite for planning systems. These domains are well-motivated as they are derived from, or possess characteristics analogous to, real-life applications. In this thesis, we study the computational complexity of the plan existence and bounded plan existence decision problems of the following grid-based IPC domains: VisitAll, TERMES, Tidybot, Floortile, and Nurikabe. In all of these domains, there are one or more agents moving through a rectangular grid (potentially with obstacles) performing actions along the way. In many cases, we engineer instances that can be solved only if the movement of the agent or agents follows a Hamiltonian path or cycle in a grid graph. This gives rise to many NP-hardness reductions from Hamiltonian path/cycle problems on grid graphs. In the case of VisitAll and Floortile, we give necessary and sufficient conditions for deciding the plan existence problem in polynomial time. We also show that Tidybot has the game Push-1F as a special case, and its plan existence problem is thus PSPACE-complete. The hardness proofs in this thesis highlight hard instances of these domains. Moreover, by assigning a complexity class to each domain, researchers and practitioners can better assess the strengths and limitations of new and existing algorithms in these domains.

Planning tasks can be used to describe many real-world problems of interest. Solving those tasks optimally is thus an avenue of great interest. One established and successful approach for optimal planning is the merge-and-shrink framework, which decomposes the task into a factored transition system. The factors initially represent the behaviour of one state variable each and are repeatedly combined and abstracted. The solution costs of the abstract states are then used as a heuristic to guide search in the original planning task. Existing merge-and-shrink transformations keep the factored transition system orthogonal, meaning that the variables of the planning task are represented in no more than one factor at any point. In this thesis we introduce the clone transformation, which duplicates a factor of the factored transition system, making it non-orthogonal. We introduce and implement two classes of clone strategies in the Fast Downward planning system and conclude that, while theoretically promising, our clone strategies are practically inefficient, as their performance was worse than state-of-the-art methods for merge-and-shrink.

Abstractions are a common way to obtain heuristic estimates that can be used for optimal planning. Abstractions typically preserve the transition behavior of the original state space to explicitly search for optimal plans in the abstract state space. Implicit abstractions on the other hand, do not preserve the transition behavior. A planning task is instead decomposed into multiple implicit abstractions such that constraints, similar to cost partitioning, are fulfilled.

Operator-counting constraints are constraints that must be fulfilled in every plan. They allow us to combine different types of constraints in a linear program formulation by using a fixed optimization function and using the result as a heuristic estimate. In this thesis, we construct operator-counting constraints for fork abstractions. Fork abstractions are concrete instances of implicit abstractions. We derive the operator-counting constraints from the inherent cost-partition constraints of the implicit fork abstractions. Our experimental evaluation shows that the heuristic obtained from operator-counting constraints for implicit fork abstractions is computationally too expensive and does not provide a clear accuracy advantage over other heuristics based on operator-counting constraints to make up for it.

This thesis presents a novel approach for improving the performance of classical planning algorithms by integrating cost partitioning with merge-and-shrink techniques. Cost partitioning is a well-known technique for admissibly adding multiple heuristic values. Merge-and-shrink, on the other hand, is a technique to generate well-informed abstractions. The “merge” part of the technique creates an abstract representation of the original problem by replacing two transition systems with their synchronised product, while the “shrink” part refers to reducing the size of a factor. By combining these two approaches, we aim to leverage the strengths of both methods to achieve better scalability and efficiency in solving classical planning problems. Using a range of benchmark domains and the Fast Downward planning system, the experimental results show that the proposed method successfully fuses merge-and-shrink with cost partitioning towards better outcomes in classical planning.

Planning is the process of finding a path in a planning task from the initial state to a goal state. Multiple algorithms have been implemented to solve such planning tasks, one of them being the Property-Directed Reachability algorithm. Property-Directed Reachability utilizes a series of propositional formulas called layers to represent a superset of the states with a goal distance of at most the layer index. The algorithm iteratively improves the layers such that they represent a minimal number of states. This happens by strengthening the layer formulas and thereby excluding states with a goal distance higher than the layer index. The goal of this thesis is to implement a pre-processing step that seeds the layers with a formula that already excludes as many states as possible, to potentially improve the run-time performance. We use the pattern database heuristic and its associated pattern generators to exploit the structure of the planning task for the seeding algorithm. We found that seeding does not consistently improve the performance of the Property-Directed Reachability algorithm. Although we observed a significant reduction in planning time for some tasks, it increased significantly for others.
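
The seeding idea rests on admissibility: if an abstraction heuristic assigns a state a value above the layer index, that state cannot have goal distance at most the index and can be excluded from the layer up front. A minimal sketch under our own assumptions (`layer_clauses` and `pdb_heuristic` are hypothetical stand-ins, not the thesis's actual implementation):

```python
def seeded_layer_contains(state, layer_index, layer_clauses, pdb_heuristic):
    """Sketch of a membership test for a seeded PDR layer: layer i may only
    contain states whose admissible pattern-database estimate is at most i,
    plus whatever the learned clause set still permits."""
    if pdb_heuristic(state) > layer_index:
        return False  # admissible estimate > i implies true goal distance > i
    # `layer_clauses` stands in for the layer formula as a list of callables.
    return all(clause(state) for clause in layer_clauses)
```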

Certifying algorithms is a concept developed to increase trust by demanding affirmation of the computed result in the form of a certificate. By inspecting the certificate, it is possible to determine the correctness of the produced output. Modern planning systems have long been certifying in the case of solvable instances, where a generated plan acts as a certificate.

Only recently have the first steps been taken towards certifying unsolvability judgments, in the form of inductive certificates which represent certain sets of states. Inductive certificates are expressed with the help of propositional formulas in a specific formalism.

In this thesis, we investigate the use of propositional formulas in conjunctive normal form (CNF) as a formalism for inductive certificates. First, we look into an approach that allows us to construct formulas representing inductive certificates in CNF. To show the general applicability of this approach, we extend it to the family of delete relaxation heuristics. Furthermore, we present how a planning system can generate an inductive validation formula, a single formula that can be used to validate whether the set found by the planner is indeed an inductive certificate. Finally, we show with an experimental evaluation that the CNF formalism can be feasible in practice for the generation and validation of inductive validation formulas.

In generalized planning the aim is to solve whole classes of planning tasks instead of single tasks one at a time. Generalized representations provide information or knowledge about such classes to help solving them. This work compares the expressiveness of three generalized representations, generalized potential heuristics, policy sketches and action schema networks, in terms of compilability. We use a notion of equivalence that requires two generalized representations to decompose the tasks of a class into the same subtasks. We present compilations between pairs of equivalent generalized representations and proofs where a compilation is impossible.

A Digital Microfluidic Biochip (DMFB) is a digitally controllable lab-on-a-chip. Droplets of fluids are moved, merged and mixed on a grid. Routing these droplets efficiently has been tackled by various approaches. We try to use temporal planning for droplet routing, inspired by its use in quantum circuit compilation. We test a model for droplet routing in both classical and temporal planning and compare both versions. We show that our classical planning model is an efficient method to find droplet routes on DMFBs. We then extend our model to include spawning, disposing, merging, splitting and mixing of droplets. The results of these extensions show that we are able to find plans for simple experiments. When scaling the problem size to real-life experiments, however, our model fails to find plans.

Cost partitioning is a technique used to calculate heuristics in classical optimal planning. It involves solving a linear program, which can be decomposed into a master problem and pricing problems. In this thesis we combine Fourier-Motzkin elimination and the double description method in different ways to precompute the generating rays of the pricing problems. We empirically evaluate these approaches and propose a new method that replaces the Fourier-Motzkin elimination. Our new method improves the performance of our approaches with respect to runtime and peak memory usage.

The increasing amount of data available nowadays has led to new scheduling approaches. Aviation is one of the domains concerned the most, as aircraft engines imply millions of maintenance events operated by staff worldwide. In this thesis we present a constraint programming-based algorithm to solve the aircraft maintenance scheduling problem. We want to find the best time to do the maintenance by determining which employee will perform the work and when. We report how the scheduling process in aviation can be automated.

Stochastic state-space tasks are mainly tackled with methods from artificial intelligence. PROST2014 is the state of the art for determining good actions in an MDP environment. In this thesis, we aimed to outperform the dominant planning system PROST2014 by providing a heuristic based on neural networks. For this purpose, we introduced two variants of neural networks that estimate the Q-value for a given pair of state and action. Since we used supervised learning, the generation of training data was one of the main tasks, in addition to designing the architecture and the components of the neural networks. To determine the most suitable network parameters, we performed a sequential parameter search, from which we expected a local optimum of the model settings. In the end, the PROST2014 planning system could not be surpassed in the total rating evaluation. Nevertheless, in individual domains the neural networks achieved higher final scores. The results show the potential of this approach and point to possible adaptations in future work pursuing this procedure further.

In classical planning, there are tasks that are hard and tasks that are easy. We can measure the complexity of a task with the correlation complexity, the improvability width, and the novelty width. In this work, we compare these measures.

We investigate what causes a correlation complexity of at least 2. To do so, we translate the state space into a vector space, which allows us to make use of linear algebra and convex cones.

Additionally, we introduce the Basel measure, a new measure that is based on potential heuristics and therefore similar to the correlation complexity, but also comparable to the novelty width. We show that the Basel measure is a lower bound for the correlation complexity and that the novelty width plus one is an upper bound for the Basel measure.

Furthermore, we compute the Basel measure for some tasks of the International Planning Competitions and show that the translation of a task can increase the Basel measure by removing seemingly irrelevant state variables.

Unsolvability is an important result in classical planning and has seen increased interest in recent years. This thesis explores unsolvability detection by automatically generating parity arguments, a well-known way of proving unsolvability. The argument requires an invariant measure whose parity remains constant across all reachable states, while all goal states are of the opposite parity. We express parity arguments using potential functions over the field F2. We develop a set of constraints that describes potential functions with the necessary separating property, and show that the constraints can be represented efficiently for up to two-dimensional features. Enhanced with mutex information, this yields an algorithm that tests whether a parity function exists for a given planning task. The existence of such a function proves the task unsolvable. To determine its practical use, we empirically evaluate our approach on a benchmark of unsolvable problems and compare its performance to a state-of-the-art unsolvability planner. Lastly, we analyze the arguments found by our algorithm to confirm their validity and to understand their expressive power.
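
To make the argument concrete, here is a sketch of validating a candidate parity function (our reading of the abstract, not the thesis's actual algorithm; for simplicity it assumes STRIPS operators given as (add, delete) fact sets whose add effects are false and delete effects true before application):

```python
def is_parity_certificate(weights, initial_state, goal_states, operators):
    """Sketch: `weights` maps facts to 0 or 1; the potential of a state is
    the sum (mod 2) of the weights of its true facts. If no operator changes
    the potential and every goal state has the opposite parity from the
    initial state, the task is unsolvable."""
    def potential(state):
        return sum(weights.get(f, 0) for f in state) % 2

    for add_facts, delete_facts in operators:
        delta = (sum(weights.get(f, 0) for f in add_facts)
                 - sum(weights.get(f, 0) for f in delete_facts))
        if delta % 2 != 0:
            return False  # potential is not invariant under this operator
    return all(potential(g) != potential(initial_state) for g in goal_states)
```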

We implemented the invariant synthesis algorithm proposed by Rintanen and experimentally compared it against Helmert’s mutex group synthesis algorithm as implemented in Fast Downward.

The context for the comparison is the translation of propositional STRIPS tasks to FDR tasks, which requires the identification of mutex groups.

Because of its dominating lead in translation speed, and because the alternative offers only few and marginal advantages in performance during search, Helmert's algorithm is clearly better for most uses. Meanwhile, Rintanen's algorithm is capable of finding invariants other than mutexes, which Helmert's algorithm by design cannot do.

The International Planning Competition (IPC) is a competition of state-of-the-art planning systems, which are evaluated on a range of different problems. It focuses on the challenges of AI planning by analyzing classical, probabilistic and temporal planning and by presenting new problems for future research. Some of the probabilistic domains introduced in IPC 2018 are Academic Advising, Chromatic Dice, Cooperative Recon, Manufacturer, Push Your Luck, and Red-finned Blue-eyes.

This thesis aims to solve two probabilistic IPC 2018 domains, Academic Advising and Chromatic Dice, (near-)optimally. We use different techniques to solve these two domains. In Academic Advising, we use a relevance analysis to remove irrelevant actions and state variables from the planning task. We then convert the problem from probabilistic to classical planning, which helped us solve it efficiently. In Chromatic Dice, we implement backtracking search to solve the smaller instances optimally. More complex instances are partitioned into several smaller planning tasks, and a near-optimal policy is derived as a combination of the optimal solutions to the small instances.

The motivation for finding (near)-optimal policies is related to the IPC score, which measures the quality of the planners. By providing the optimal upper bound of the domains, we contribute to the stabilization of the IPC score evaluation metric for these domains.

Most well-known and traditional online planners for probabilistic planning are in some way based on Monte-Carlo Tree Search. SOGBOFA, symbolic online gradient-based optimization for factored action MDPs, offers a new perspective on this: it constructs a function graph encoding the expected reward for a given input state using independence assumptions for states and actions. On this function, it uses gradient ascent to perform a symbolic search optimizing the actions for the current state. This unique approach to probabilistic planning has shown very strong results and even more potential. In this thesis, we attempt to integrate the new ideas SOGBOFA presents into the traditionally successful Trial-based Heuristic Tree Search framework. Specifically, we design and evaluate two heuristics based on the aforementioned graph and its Q-value estimations, as well as the search using gradient ascent. We implement and evaluate these heuristics in the Prost planner, along with a version of the current standalone planner.

In this thesis, we consider cyclical dependencies between landmarks for cost-optimal planning. Landmarks denote properties that must hold at least once in all plans. However, if the orderings between them induce cyclical dependencies, one of the landmarks in each cycle must be achieved an additional time. We propose the generalized cycle-covering heuristic which considers this in addition to the cost for achieving all landmarks once.

Our research is motivated by recent applications of cycle-covering in the Freecell and logistics domains, where it yields near-optimal results. We carry it over to domain-independent planning using a linear programming approach: the relaxed version of a minimum hitting set problem for the landmarks is enhanced by constraints concerned with the cyclical dependencies between them. In theory, this approach surpasses a heuristic that only considers landmarks.
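
One plausible reading of this construction, in our own notation (not necessarily the exact formulation of the thesis): let x_a count how often action a is used and ach(L) denote the achievers of landmark L; then

\[
\min \sum_{a} \mathrm{cost}(a)\,x_a
\quad\text{s.t.}\quad
\sum_{a \in \mathrm{ach}(L)} x_a \ge 1 \;\text{ for every landmark } L,
\qquad
\sum_{L \in C}\,\sum_{a \in \mathrm{ach}(L)} x_a \ge |C| + 1 \;\text{ for every cycle } C,
\]

since every landmark must be achieved once and, within each cycle, at least one landmark must be achieved a second time.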

We apply the cycle-covering heuristic in practice, where its theoretical dominance is confirmed: many planning tasks contain cyclical dependencies, and considering them affects the heuristic estimates favorably. However, the number of tasks solved using the improved heuristic is virtually unaffected. We still believe that considering this feature of landmarks offers great potential for future work.

Potential heuristics are a class of heuristics used in classical planning to guide a search algorithm towards a goal state. Most of the existing research on potential heuristics focuses on finding heuristics that are admissible, such that they can be used by an algorithm such as A* to arrive at an optimal solution. In this thesis, we focus on the computation of potential heuristics for satisficing planning, where plan optimality is not required and the objective is to find any solution. Specifically, our focus is on the computation of potential heuristics that are descending and dead-end avoiding (DDA), since these properties guarantee favorable search behavior when used with greedy search algorithms such as hill climbing. We formally prove that the computation of DDA heuristics is a PSPACE-complete problem and propose several approximation algorithms. Our evaluation shows that the resulting heuristics are competitive with established approaches such as pattern databases in terms of heuristic quality but suffer from several performance bottlenecks.

Most automated planners use heuristic search to solve the tasks. Usually, the planners get as input a lifted representation of the task in PDDL, a compact formalism describing the task using a fragment of first-order logic. The planners then transform this task description into a grounded representation where the task is described in propositional logic. This new grounded format can be exponentially larger than the lifted one, but many planners use this grounded representation because it is easier to implement and reason about.

However, sometimes this transformation between lifted and grounded representations is not tractable. When this is the case, there is not much that planners based on heuristic search can do. Since this transformation is a required preprocessing step, when it fails, the whole planner fails.

To address the grounding problem, we introduce new methods to deal with tasks that cannot be grounded. Our work aims to find good ways to perform heuristic search while using a lifted representation of planning problems. We view planning as a database progression problem and borrow solutions from the areas of relational algebra and database theory.

Our theoretical and empirical results are promising: several instances that were never solved by any planner in the literature are now solved by our new lifted planner. For example, our planner can solve the challenging Organic Synthesis domain using breadth-first search, while state-of-the-art planners cannot solve more than 60% of the instances. Furthermore, our results offer a new perspective and a deep theoretical study of lifted representations for planning tasks.

The generation of independently verifiable proofs for the unsolvability of planning tasks using different heuristics, including linear Merge-and-Shrink heuristics, is possible by using a proof system framework. Proof generation in the case of non-linear Merge-and-Shrink heuristics, however, is currently not supported. This is due to the lack of a suitable state set representation formalism that allows compactly representing the states mapped to a certain value in the corresponding Merge-and-Shrink representation (MSR). In this thesis, we overcome this shortcoming by using Sentential Decision Diagrams (SDDs) as set representations. We describe an algorithm that constructs the desired SDD from the MSR, and show that efficient proof verification is possible with SDDs as the representation formalism. Additionally, we use a proof-of-concept implementation to analyze the overhead incurred by the proof generation functionality and the runtime of the proof verification.

The operator-counting framework is a framework in classical planning for heuristics that are based on linear programming. The operator-counting framework covers several kinds of state-of-the-art linear programming heuristics, among them the post-hoc optimization heuristic. In this thesis we will use post-hoc optimization constraints and evaluate them under altered cost functions instead of the original cost function of the planning task. We show that such cost-altered post-hoc optimization constraints are also covered by the operator-counting framework and that it is possible to achieve improved heuristic estimates with them, compared with post-hoc optimization constraints under the original cost function. In our experiments we have not been able to achieve improved problem coverage, as we were not able to find a method for generating favorable cost functions that work well in all domains.

Heuristic forward search is the state-of-the-art approach to solving classical planning problems. Bidirectional heuristic search, on the other hand, has a lot of potential but has never been able to deliver on those expectations in practice. Only recently, the near-optimal bidirectional search algorithm (NBS) was introduced by Chen et al.; as the name suggests, NBS expands nearly the optimal number of states to solve any search problem. This is a novel achievement and makes the NBS algorithm very promising and efficient. With this premise in mind, we raise the question of how applicable NBS is to planning. In this thesis, we investigate this question by implementing NBS in the state-of-the-art planner Fast Downward and analysing its performance on the benchmarks of the latest International Planning Competition. We additionally implement fractional meet-in-the-middle and computeWVC to analyse NBS' performance more thoroughly with regard to the structure of the problem task.

The conducted experiments show that NBS can successfully be applied to planning, as it was able to consistently outperform A*. Especially good results were achieved on the domains blocks, driverlog, floortile-opt11-strips, get-opt14-strips, logistics00, and termes-opt18-strips. Analysing these results, we deduce that the efficiency of forward and backward search depends heavily upon the underlying implicit structure of the transition system induced by the problem task. This suggests that bidirectional search is inherently more suited for certain problems. Furthermore, we find that this aptitude for a certain search direction correlates with the domain, thereby providing a powerful analytic tool to derive a priori the effectiveness of certain search approaches.

In conclusion, even without intricate improvements the NBS algorithm is able to compete with A*. It therefore has further potential for future research. Additionally, the underlying transition system of a problem instance is shown to be an important factor which influences the efficiency of certain search approaches. This knowledge could be valuable for devising portfolio planners.

Multiple Sequence Alignment (MSA) is the problem of aligning multiple biological sequences in the evolutionarily most plausible way. It can be viewed as a shortest path problem through an n-dimensional lattice. Because of its large branching factor of 2^n − 1, it has received broad attention in the artificial intelligence community. Finding a globally optimal solution for more than a few sequences requires sophisticated heuristics and bounding techniques in order to solve the problem in acceptable time and within memory limitations. In this thesis, we show how existing heuristics fall into the category of combining certain pattern databases. We combine arbitrary pattern collections that can be used as heuristic estimates and apply cost partitioning techniques from classical planning to MSA. We implement two of those heuristics for MSA and compare their estimates to the existing heuristics.

Increasing Cost Tree Search is a promising approach to multi-agent pathfinding problems, but like all approaches it has to deal with a huge number of possible joint paths, growing exponentially with the number of agents. We explore the possibility of reducing this number by introducing a value abstraction to the Multi-valued Decision Diagrams used to represent sets of joint paths. To that end, we introduce a heat map to heuristically judge how collision-prone agent positions are, and present how to use and possibly refine abstract positions in order to still find valid paths.

Estimating cheapest plan costs with the help of network flows is an established technique. Plans and network flows are very similar; however, network flows can differ from plans in the presence of cycles. If a transition system contains cycles, flows might be composed of multiple disconnected parts, and this discrepancy can make the cheapest plan estimation worse. One idea to get rid of the cycles is to introduce time steps: for every time step, the states of a transition system are copied, and transitions are changed so that they only connect states to states of the next time step, which ensures that there are no cycles. It turns out that by applying this idea to multiple transition systems, the network flows of the individual transition systems can be synchronized via the time steps, yielding a new kind of heuristic that is also discussed in this thesis.
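
The unrolling step itself is simple to sketch (our illustration, assuming `transitions` is a list of (source, target) pairs and a fixed horizon):

```python
def unroll(states, transitions, horizon):
    """Time-step unrolling as described above: copy every state once per
    time step and redirect each transition to the copy of its target in
    the next time step, so the unrolled graph cannot contain cycles."""
    layered_states = [(s, t) for t in range(horizon + 1) for s in states]
    layered_transitions = [((u, t), (v, t + 1))
                           for t in range(horizon)
                           for (u, v) in transitions]
    return layered_states, layered_transitions
```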

Probabilistic planning is a research field that became popular in the early 1990s. It aims at finding an optimal policy which maximizes the outcome of applying actions to states in an environment that features unpredictable events. Such environments can consist of a large number of states and actions, which makes finding an optimal policy intractable using classical methods. Using a heuristic function for a guided search allows tackling such problems. Designing a domain-independent heuristic function requires complex algorithms which may be expensive in terms of time and memory consumption.

In this thesis, we apply supervised learning techniques to learn two domain-independent heuristic functions. We use three types of gradient descent methods: stochastic, batch and mini-batch gradient descent, as well as their improved versions using momentum, learning rate decay and early stopping. Furthermore, we apply the concept of feature combination in order to better learn the heuristic functions. The learned functions are provided to Prost, a domain-independent probabilistic planner, and benchmarked against the winning algorithms of the International Probabilistic Planning Competition held in 2014. The experiments show that learning an offline heuristic improves the overall score of the search for some of the domains used in the aforementioned competition.

The merge-and-shrink heuristic is a state-of-the-art admissible heuristic that is often used for optimal planning. Recent studies showed that the merge strategy is an important factor for the performance of the merge-and-shrink algorithm. There are many different merge strategies and improvements for merge strategies described in the literature. One of these merge strategies is MIASM by Fan et al., which tries to merge transition systems that produce unnecessary states in their product which can be pruned. Another is the symmetry-based merge-and-shrink framework by Sievers et al., which tries to merge transition systems that cause factored symmetries in their product. This strategy can be combined with other merge strategies and often improves their performance. However, the current combination of MIASM with factored symmetries performs worse than MIASM. We implement a different combination of MIASM that uses factored symmetries during the subset search of MIASM. Our experimental evaluation shows that our new combination solves more tasks than the existing MIASM and the previously implemented combination of MIASM with factored symmetries. We also evaluate different combinations of existing merge strategies and find previously unevaluated combinations that perform better than their basic versions.

Tree Cache is a pathfinding algorithm that selects one vertex as a root and constructs a tree with cheapest paths to all other vertices. A path is found by traversing up the tree from both the start and goal vertices to the root and concatenating the two parts. This is fast, but as all paths constructed this way pass through the root vertex, they can be highly suboptimal.

To improve this algorithm, we consider two simple approaches. The first is to construct multiple trees, and save the distance to each root in each vertex. To find a path, the algorithm first selects the root with the lowest total distance. The second approach is to remove redundant vertices, i.e. vertices that are between the root and the lowest common ancestor (LCA) of the start and goal vertices. The performance and space requirements of the resulting algorithm are then compared to the conceptually similar hub labels and differential heuristics.
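
A compact sketch of the retrieval step with the LCA improvement (our illustration; `parent` is an assumed precomputed child-to-parent map, with the root mapped to None):

```python
def tree_cache_path(start, goal, parent):
    """Walk both vertices up the stored tree, then splice the two branches
    at their lowest common ancestor instead of at the root, which removes
    the redundant vertices between the LCA and the root."""
    def branch(v):
        path = [v]
        while parent[v] is not None:
            v = parent[v]
            path.append(v)
        return path  # v ... root

    up_start, up_goal = branch(start), branch(goal)
    ancestors = set(up_start)
    # First vertex on the goal branch that also lies on the start branch is the LCA.
    lca = next(v for v in up_goal if v in ancestors)
    head = up_start[:up_start.index(lca) + 1]    # start ... lca
    tail = up_goal[:up_goal.index(lca)][::-1]    # child-of-lca ... goal
    return head + tail
```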

Greedy Best-First Search (GBFS) is a prominent search algorithm to find solutions for planning tasks. GBFS chooses nodes for further expansion based on a distance-to-goal estimator, the heuristic. This makes GBFS highly dependent on the quality of the heuristic. Heuristics often face the problem of producing Uninformed Heuristic Regions (UHRs), and GBFS additionally suffers from the possibility of simultaneously expanding nodes in multiple UHRs. In this thesis we change the search behavior in UHRs: since the heuristic is unable to guide the search there, we try to expand novel states to escape the UHRs. The novelty measures how “new” a state is in the search. The result is a combination of heuristic- and novelty-guided search, which is indeed able to escape UHRs more quickly and solves more problems in reasonable time.

In classical AI planning, the state explosion problem is a recurring subject: although the problem descriptions are compact, often a huge number of states needs to be considered. One way to tackle this problem is to use static pruning methods, which reduce the number of variables and operators in the problem description before planning.

In this work, we discuss the properties and limitations of three existing static pruning techniques, with a focus on satisficing planning. We analyse these pruning techniques and their combinations, and identify synergy effects between them as well as the domains and problem structures in which these effects occur. We implement the three methods in an existing propositional planner, and evaluate the performance of different configurations and combinations in a set of experiments on IPC benchmarks. We observe that static pruning techniques can increase the number of solved problems, and that the synergy effects of the combinations also occur on IPC benchmarks, although they do not lead to a major performance increase.

The goal of classical domain-independent planning is to find a sequence of actions which leads from a given initial state to a goal state that satisfies some goal criteria. Most planning systems use heuristic search algorithms to find such a sequence of actions, and a critical part of heuristic search is the heuristic function. In order to find a sequence of actions from an initial state to a goal state efficiently, this heuristic function has to guide the search towards the goal. It is difficult to create such an efficient heuristic function. Arfaee et al. show that it is possible to improve a given heuristic function by applying machine learning techniques on a single domain in the context of heuristic search. To achieve this improvement, they propose a bootstrap learning approach which iteratively improves the heuristic function.

In this thesis we introduce a technique to learn heuristic functions for classical domain-independent planning based on the bootstrap learning approach introduced by Arfaee et al. In order to evaluate the performance of the learned heuristic functions, we have implemented a learning algorithm for the Fast Downward planning system. The experiments have shown that a learned heuristic function generally decreases the number of explored states compared to blind search. The total time to solve a single problem increases, however, because the heuristic function has to be learned before it can be applied.

Essential for estimating the performance of an algorithm in satisficing planning is its ability to solve benchmark problems. Such results cannot be compared directly when they originate from different implementations and different machines. We implemented some of the most promising algorithms for greedy best-first search published in recent years, and evaluated them on the same set of benchmarks. All algorithms are based on randomised search, localised search, or a combination of both. Our evaluation demonstrates the potential of these algorithms.

Heuristic search with admissible heuristics is the leading approach to cost-optimal, domain-independent planning. Pattern database heuristics - a type of abstraction heuristics - are state-of-the-art admissible heuristics. Two recent pattern database heuristics are the iPDB heuristic by Haslum et al. and the PhO heuristic by Pommerening et al.

The iPDB procedure performs a hill climbing search in the space of pattern collections and evaluates selected patterns using the canonical heuristic. We apply different techniques to the iPDB procedure, improving its hill climbing algorithm as well as the quality of the resulting heuristic. The second recent heuristic - the PhO heuristic - obtains strong heuristic values through linear programming. We present different techniques to influence and improve on the PhO heuristic.

We evaluate the modified iPDB and PhO heuristics on the IPC benchmark suite and show that these abstraction heuristics can compete with other state-of-the-art heuristics in cost-optimal, domain-independent planning.

Greedy best-first search (GBFS) is a prominent search algorithm for satisficing planning - finding good enough solutions to a planning task in reasonable time. GBFS selects the next node to consider based on which node a heuristic function estimates to be most promising. However, this behaviour makes GBFS heavily dependent on the quality of the heuristic estimator. Inaccurate heuristics can lead GBFS into regions far away from a goal. Additionally, if the heuristic ranks several nodes the same, GBFS has no information on which node to follow. Diverse best-first search (DBFS) is an algorithm by Imai and Kishimoto [2011] which has a local search component to emphasize exploitation. To enable exploration, DBFS deploys probabilities to select the next node.
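
For reference, plain GBFS fits in a few lines. This is a sketch with hypothetical `successors` and `is_goal` callbacks; the insertion counter makes explicit that nothing but arrival order decides among heuristically tied nodes:

    import heapq
    import itertools

    def gbfs(initial_state, is_goal, successors, h):
        tie = itertools.count()   # ties in h are broken by insertion order only
        open_list = [(h(initial_state), next(tie), initial_state, [])]
        closed = set()
        while open_list:
            _, _, state, path = heapq.heappop(open_list)
            if is_goal(state):
                return path
            if state in closed:
                continue
            closed.add(state)
            for action, succ in successors(state):
                if succ not in closed:
                    # note: the path cost g plays no role in the ordering
                    heapq.heappush(open_list,
                                   (h(succ), next(tie), succ, path + [action]))
        return None               # open list exhausted: no plan exists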

In two problem domains, we analyse GBFS' search behaviour and present theoretical results. We evaluate these results empirically and compare DBFS and GBFS on constructed as well as on provided problem instances.

State-of-the-art planning systems use a variety of control knowledge in order to enhance the performance of heuristic search. Unfortunately most forms of control knowledge use a specific formalism which makes them hard to combine. There have been several approaches which describe control knowledge in Linear Temporal Logic (LTL). We build upon this work and propose a general framework for encoding control knowledge in LTL formulas. The framework includes a criterion that any LTL formula used in it must fulfill in order to preserve optimal plans when used for pruning the search space; this way the validity of new LTL formulas describing control knowledge can be checked. The framework is implemented on top of the Fast Downward planning system and is tested with a pruning technique called Unnecessary Action Application, which detects if a previously applied action achieved no useful progress.

Landmarks are known to be usable as the basis for powerful heuristics for informed search. In this thesis, we explain and evaluate a novel algorithm that finds ordered landmarks of delete-free tasks by intersecting solutions in the relaxation. The proposed algorithm efficiently finds landmarks and natural orderings of delete-free tasks, such as delete relaxations or Pi-m compilations.

Planning as heuristic search is the prevalent technique for solving planning problems across all kinds of domains. Heuristics estimate distances to goal states in order to guide a search through large state spaces. However, this guidance is sometimes only moderate, since many states lie on plateaus of equally prioritized states in the search space topology. Additional techniques that ignore or prefer some actions for solving a problem successfully support the search in such situations. Nevertheless, some action pruning techniques lead to incomplete searches.

We propose an under-approximation refinement framework for adding actions to under-approximations of planning tasks during a search in order to find a plan. For this framework, we develop a refinement strategy. Starting a search on an initial under-approximation of a planning task, the strategy adds actions determined at states close to a goal, whenever the search does not progress towards a goal, until a plan is found. Key elements of this strategy are the consideration of helpful actions and relaxed plans for refinements. We have implemented the under-approximation refinement framework in the greedy best-first search algorithm. Our results show considerable speedups for many classical planning problems. Moreover, we are able to plan with fewer actions than standard greedy best-first search.

The main approach to classical planning is heuristic search. Many cost heuristics are based on the delete relaxation. The optimal heuristic of a delete-free planning problem is called h+. This thesis explores two new ways to compute h+. Both approaches use factored planning, which decomposes the original planning problem in order to work on each subproblem separately. The algorithm reuses the subsolutions and combines them into a global solution.

The two algorithms are used to compute a cost heuristic for an A* search. As both approaches compute the optimal heuristic for delete-free planning tasks, the algorithms can also be used to find a solution for relaxed planning tasks.

Multi-Agent Path Finding (MAPF) is a common problem in robotics and memory management. Pebbles in Motion is an implementation of a polynomial-time problem solver for MAPF, based on work by Daniel Kornhauser from 1984. Recently, many research papers on MAPF have been published in the Artificial Intelligence community, but Kornhauser's work is hardly ever taken into account. We assume this may be because his paper is written in a mathematical style and hardly describes its algorithms intuitively. This work aims to fill that gap by providing easily understandable implementation steps for programmers and a new detailed description for researchers in computer science.

Bachelor's theses

Constraint Satisfaction Problems (CSPs) are typical NP-complete combinatorial problems in the field of Artificial Intelligence. As part of this thesis, we introduce Oxiflex, a CSP solver written from scratch in Rust. Oxiflex is built on the MiniZinc tool chain and supports a subset of the FlatZinc constraint builtins. Starting with a naive backtracking approach, we enhance Oxiflex with variable ordering and inference. Both forward checking and arc consistency enforcing algorithms such as AC-1 and AC-3 are used for inference. Results show that, for Oxiflex, variable ordering and forward checking have a positive impact on solving times, but AC-1 and AC-3 do not. However, measuring the number of iterations shows that AC-1 and AC-3 can significantly reduce the number of iterations needed for backtracking. This work shows that inference does tighten the problem, but careful implementation is needed to make it fast.
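
For illustration, the core of arc consistency enforcement as in AC-3 can be sketched as follows (in Python rather than Rust, with a hypothetical constraint representation where `constraints[(x, y)]` is a predicate over value pairs and both arc directions are present as keys):

    from collections import deque

    def revise(domains, constraints, x, y):
        # remove values of x that have no supporting value in the domain of y
        removed = False
        for vx in list(domains[x]):
            if not any(constraints[(x, y)](vx, vy) for vy in domains[y]):
                domains[x].remove(vx)
                removed = True
        return removed

    def ac3(domains, constraints):
        queue = deque(constraints.keys())
        while queue:
            x, y = queue.popleft()
            if revise(domains, constraints, x, y):
                if not domains[x]:
                    return False   # domain wiped out: the CSP is unsolvable
                # x lost values, so every arc pointing at x must be rechecked
                queue.extend((z, w) for (z, w) in constraints
                             if w == x and z != y)
        return True                # all arcs are now consistent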

Solving the sliding tile puzzle is an important benchmark problem for testing informed search algorithms, as it provides a large state space and is straightforward to implement. The key to finding solutions for the sliding tile puzzle efficiently is to use high-quality heuristic functions that guide the state space search. Popular search algorithms such as IDA* use these functions in combination with the path cost from the start node to decide which nodes to expand. The post-hoc optimization heuristic is one such function; it promises particularly high quality by combining many overlapping pattern database heuristics into a single value with the use of linear programming. Our thesis explores the impact of various inputs to the heuristic in the form of pattern database collections and compares their performance with the state of the art.
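
As a rough illustration of the linear program behind the post-hoc optimization heuristic (following its usual formulation; the data layout and names below are ours): minimize a weighted total operator cost such that, for every pattern, the operators relevant to that pattern account for at least that pattern's database estimate.

    import numpy as np
    from scipy.optimize import linprog

    def post_hoc_value(op_costs, pattern_values, affects):
        # op_costs[o]: cost of operator o
        # pattern_values[p]: PDB estimate h_P(s) for the current state s
        # affects[p][o]: True iff operator o affects pattern p
        n_ops, n_pats = len(op_costs), len(pattern_values)
        c = np.array(op_costs, dtype=float)    # objective: sum_o c_o * x_o
        A_ub = np.zeros((n_pats, n_ops))
        for p in range(n_pats):
            for o in range(n_ops):
                if affects[p][o]:
                    # encodes: sum_{o relevant to p} c_o * x_o >= h_P(s)
                    A_ub[p, o] = -op_costs[o]
        b_ub = -np.array(pattern_values, dtype=float)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
        return res.fun                         # the heuristic value for s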

Budgeted Tree Search (BTS) is a depth-first version of the search algorithm framework Iterative Budgeted Exponential Search (IBEX). It aims to improve the worst-case run time of Iterative Deepening A* (IDA*), a widely used search algorithm when memory is an issue. BTS seeks to remedy IDA*'s shortcomings while maintaining the same space complexity. A weakness of IDA* is that under certain circumstances each iteration spends considerable effort exploring only a minimal portion of the state space beyond what it explored in earlier iterations. A main component of BTS is the addition of an exponential search procedure, which forces the search to expand exponentially more nodes with each iteration. We implement BTS in Fast Downward, a classical planning system, and compare it to IDA* using the International Planning Competition (IPC) benchmark suite.
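
For context, plain IDA*, whose iteration behaviour BTS improves on, can be sketched as follows (hypothetical callbacks; it returns the solution cost, or None if the task is unsolvable). Note how each iteration restarts from scratch and re-explores everything below the new bound, which is exactly the weakness described above when the bound grows too slowly:

    import math

    def ida_star(root, h, successors, is_goal):
        bound = h(root)
        while bound < math.inf:
            found, value = _dfs(root, 0, bound, h, successors, is_goal)
            if found:
                return value          # cost of the solution
            bound = value             # smallest f-value above the old bound
        return None                   # search space exhausted: no solution

    def _dfs(state, g, bound, h, successors, is_goal):
        f = g + h(state)
        if f > bound:
            return False, f           # prune; report f for the next bound
        if is_goal(state):
            return True, g
        minimum = math.inf
        for cost, succ in successors(state):
            found, value = _dfs(succ, g + cost, bound, h, successors, is_goal)
            if found:
                return True, value
            minimum = min(minimum, value)
        return False, minimum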

In classical planning, actions are used to reach a goal from the initial state. Search algorithms explore the state space of a planning problem to find a plan, which is a sequence of actions. To estimate the distance to the goal from a state, search algorithms can use heuristics. One heuristic used to guide the search is the additive heuristic. Recently, optimizations were presented to compute the additive heuristic efficiently in the context of lifted planning. Fast Downward is a classical planning system that uses a ground representation. In this work, these optimizations were integrated into Fast Downward to see whether they can be adapted to a ground planning system and whether they bring benefits. The optimizations are used here to reduce the number of unary operators required to calculate the heuristic, thereby enhancing performance. We found that the optimizations can be used to compute the additive heuristic more efficiently for specific domains.
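
A sketch of the usual Dijkstra-style computation of the additive heuristic over unary operators; the data layout is hypothetical (each unary operator is a tuple of a set of precondition facts, one effect fact, and a cost):

    import heapq

    def h_add(initial_facts, goal_facts, unary_ops):
        INF = float("inf")
        cost = {f: 0 for f in initial_facts}       # fact -> h_add estimate
        unsat = [len(pre) for pre, _, _ in unary_ops]
        triggered_by = {}                          # fact -> ops waiting on it
        for i, (pre, _, _) in enumerate(unary_ops):
            for p in pre:
                triggered_by.setdefault(p, []).append(i)
        queue = [(0, f) for f in initial_facts]
        # operators without preconditions fire immediately
        for pre, eff, c in unary_ops:
            if not pre and c < cost.get(eff, INF):
                cost[eff] = c
                queue.append((c, eff))
        heapq.heapify(queue)
        closed = set()
        while queue:
            c, fact = heapq.heappop(queue)
            if fact in closed:
                continue                           # cheaper copy already done
            closed.add(fact)
            for i in triggered_by.get(fact, []):
                unsat[i] -= 1
                if unsat[i] == 0:                  # all preconditions final
                    pre, eff, op_cost = unary_ops[i]
                    # "additive": precondition costs are summed, not maximized
                    new_cost = op_cost + sum(cost[p] for p in pre)
                    if new_cost < cost.get(eff, INF):
                        cost[eff] = new_cost
                        heapq.heappush(queue, (new_cost, eff))
        return sum(cost.get(g, INF) for g in goal_facts)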

The schematic invariant synthesis algorithm using limited grounding proposed by Rintanen was implemented and integrated into the classical planning system Fast Downward, replacing the existing invariant synthesis algorithm by Helmert. The invariant synthesis identifies mutex groups, which are used for the translation of the PDDL task into a finite domain representation task. The algorithms were compared by running experiments with the planning system on various propositional STRIPS benchmark tasks. The goal was to implement a schematic invariant synthesis algorithm that correctly finds ground invariants and allows the planner to find a valid plan; efficiency was not a high priority. The evaluation shows that the implemented invariant synthesis finds correct ground invariants and that the search duration of the planner stays similar compared to the search with the existing invariant synthesis. In terms of the synthesis itself, however, the newly implemented schematic algorithm is very inefficient and therefore cannot compete with the current invariant synthesis by Helmert.

Fast Downward is a classical planner using heuristic search. The planner uses many advanced planning techniques that are not easy to teach, since they usually rely on complex data structures. To introduce these planning techniques to users, we created an interactive application. This application uses an illustrative example to showcase planning techniques: Blocksworld.

Blocksworld is an easily understandable planning problem which allows a simple representation of a state space. It is implemented in the Unreal Engine and provides an interface to the Fast Downward planner. Users can explore the state space themselves or have Fast Downward generate plans for them. The concept of heuristics as well as the state space are explained and made accessible to the user. The user experiences how the planner explores a state space and which techniques it uses.

This thesis is about implementing Jussi Rintanen's algorithm for schematic invariants. The algorithm is implemented in the planning tool Fast Downward and follows Rintanen's paper Schematic Invariants by Reduction to Ground Invariants. The thesis describes all definitions necessary to understand the algorithm and draws a comparison between the original task and a reduced task in terms of runtime and number of grounded actions.

Planning is a field of Artificial Intelligence. Planners are used to find a sequence of actions that gets from the initial state to a goal state. Many planning algorithms use heuristics, which allow the planner to focus on more promising paths. Pattern database heuristics allow us to construct such a heuristic by solving a simplified version of the problem and saving the associated costs in a pattern database. These pattern databases can be computed and stored using symbolic data structures.

In this paper we look at how pattern databases can be implemented using symbolic data structures, namely binary decision diagrams and algebraic decision diagrams. We extend Fast Downward (Helmert, 2006) with such an implementation and compare its performance with the already implemented explicit pattern databases.
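
For contrast, the explicit pattern database used as the baseline can be sketched as a backward uniform-cost search over the abstract state space (hypothetical `abstract_predecessors` interface):

    import heapq

    def build_pdb(abstract_goal_states, abstract_predecessors):
        # Dijkstra backwards from all abstract goal states: each abstract
        # state is assigned its exact goal distance within the abstraction.
        pdb = {}
        queue = [(0, s) for s in abstract_goal_states]
        heapq.heapify(queue)
        while queue:
            dist, state = heapq.heappop(queue)
            if state in pdb:
                continue          # already finalized with a smaller distance
            pdb[state] = dist
            for pred, op_cost in abstract_predecessors(state):
                if pred not in pdb:
                    heapq.heappush(queue, (dist + op_cost, pred))
        return pdb                # abstract state -> admissible estimate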

In the field of automated planning and scheduling, a planning task is essentially a state space which can be defined rigorously using one of several formalisms (e.g. STRIPS, SAS+, or PDDL). A planning algorithm tries to determine a sequence of actions that leads to a goal state for a given planning task. In recent years, attempts have been made to group certain planners together into so-called planner portfolios to leverage their effectiveness on different specific problem classes. In our project, we create an online planner which, in contrast to its offline counterparts, makes use of task-specific information when allocating a planner to a task. One idea that has recently gained interest is to apply machine learning methods to planner portfolios.

In previous work such as Delfi (Katz et al., 2018; Sievers et al., 2019a), supervised learning techniques were used, which made it necessary to train multiple networks to be able to attempt multiple, potentially different, planners for a given task. The reason is that with a single network the output would always be the same, as the input to the network would remain unchanged. In this project we make use of techniques from reinforcement learning such as DQNs (Mnih et al., 2013). Using RL approaches such as DQNs allows us to extend the network input with information such as which planners were previously attempted and for how long. As a result, multiple attempts can be made after training only a single network.

Unfortunately, the results show that current reinforcement learning agents are, among other reasons, too sample-inefficient to deliver viable results given the size of the currently available data sets.

Planning tasks are important and difficult problems in computer science. A widely used approach is the use of delete relaxation heuristics, to which the additive and FF heuristics belong. These two heuristics use a graph in their calculation which only has to be constructed once for a planning task but can then be used repeatedly. To solve such a problem efficiently, it is important that the calculation of the heuristics is fast. In this thesis, the idea for achieving a faster calculation is to merge redundant parts of the graph when building it, reducing the number of edges and thereby speeding up the calculation. The reduction of redundancies is done for each action within a planning task individually, but further ideas for simplifying across all actions are also discussed.

Monte Carlo search methods are widely known, mostly for their success in game domains, although they are also applied to many non-game domains. In previous work by Schulte and Keller, it was established that best-first searches can adopt the action selection functionality which makes Monte Carlo methods so formidable. In practice, however, trial-based best-first search without exploration was shown to be slightly slower than its explicit open list counterpart. In this thesis we examine the non-trial and trial-based searches and how they can address the exploration-exploitation dilemma. Lastly, we show how trial-based BFS can rectify a slower search by allowing occasional random action selection, comparing it to regular open list searches in a series of experiments.

Sudoku has become one of the world’s most popular logic puzzles, arousing interest in the general public and the science community. Although the rules of Sudoku may seem simple, they allow for nearly countless puzzle instances, some of which are very hard to solve. SAT solvers have proven to be a suitable option for solving Sudokus automatically. However, they demand that the puzzles be encoded as logical formulae in Conjunctive Normal Form. In earlier work, such encodings have been successfully demonstrated for original Sudoku puzzles. In this thesis, we present encodings for rather unconventional Sudoku variants, developed by the puzzle community to create even more challenging solving experiences. Furthermore, we demonstrate how Pseudo-Boolean constraints can be utilized to encode Sudoku variants whose rules involve sums. To implement an encoding of Pseudo-Boolean constraints, we use Binary Decision Diagrams and Adder Networks and study how they compare to each other.
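
The classic-rules core of such an encoding, before any variant or Pseudo-Boolean constraints are added, can be sketched like this (DIMACS-style integer literals; a sketch, not the encodings developed in the thesis):

    import itertools

    def var(r, c, v, n=9):
        # map "cell (r, c) holds value v" to a positive DIMACS variable
        return r * n * n + c * n + v + 1

    def sudoku_clauses(n=9):
        clauses = []
        cells = range(n)
        for r, c in itertools.product(cells, cells):
            clauses.append([var(r, c, v) for v in cells])  # >= one value/cell
        box = int(n ** 0.5)
        units = []
        units += [[(r, c) for c in cells] for r in cells]            # rows
        units += [[(r, c) for r in cells] for c in cells]            # columns
        units += [[(box * br + r, box * bc + c)                      # boxes
                   for r in range(box) for c in range(box)]
                  for br in range(box) for bc in range(box)]
        for unit in units:
            for v in cells:
                for (r1, c1), (r2, c2) in itertools.combinations(unit, 2):
                    # pairwise at-most-one: value v appears once per unit
                    clauses.append([-var(r1, c1, v), -var(r2, c2, v)])
        return clauses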

In optimal classical planning, informed search algorithms like A* need admissible heuristics to find optimal solutions. Counterexample-guided abstraction refinement (CEGAR) is a method to iteratively generate abstractions that yield suitable abstraction heuristics. In this thesis, we propose a class of CEGAR algorithms for the generation of domain abstractions, a class of abstractions that ranks between projections and Cartesian abstractions with respect to the degree of refinement they allow. Since no previously known algorithm constructs domain abstractions, we compare against CEGAR algorithms that generate a single projection or Cartesian abstraction and show that our algorithm is competitive with them.

This thesis looks at Single-Player Chess as a planning domain using two approaches: one that encodes the Single-Player Chess problem in a domain-independent (general-purpose AI) fashion, and one that encodes the problem in a domain-specific solver. We then compare the two approaches experimentally. Both the domain-independent and the domain-specific implementation differ from traditional chess engines, because the agent's task is not to find the best move for a given position and colour, but to check whether a given chess problem has a solution. If the agent can find a solution, the given chess puzzle is valid. The results of both approaches were measured in experiments, and we found that the domain-independent implementation is too slow, while the domain-specific implementation can solve the given puzzles reliably but has a memory bottleneck rooted in the search method used.

Bipartite permutation graphs have interesting algorithmic properties which make it possible to solve several PSPACE-complete problems (including the sliding token problem) in polynomial time when restricted to this graph class. In this thesis we combined the algorithms of two papers into a Python implementation that checks in polynomial time whether a graph is a bipartite permutation graph and whether two independent sets of this graph can be reconfigured into one another using sliding token moves. We complete the process by outputting an actual reconfiguration sequence once we have verified that one exists. An interesting observation made during the implementation raises the question whether part of the original algorithm could be replaced to achieve linear time complexity for recognizing the existence of sliding token reconfiguration sequences.

Carcassonne is a tile-based board game with a large state space and a high branching factor and therefore poses a challenge to artificial intelligence. In the past, Monte Carlo Tree Search (MCTS), a search algorithm for sequential decision-making processes, has been shown to find good solutions in large state spaces. MCTS works by iteratively building a game tree according to a tree policy. The profitability of paths within that tree is evaluated using a default policy, which influences in which directions the game tree is expanded. These two policies, as well as other factors, can be implemented in many different ways, so many variants of MCTS exist. In this thesis, we applied MCTS to two-player Carcassonne and evaluated different variants with regard to their performance and runtime. We found significant differences in performance for various variable aspects of MCTS and could thereby identify a configuration which performs best on the domain of Carcassonne. This variant consistently outperformed an average human player with a feasible runtime.
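
A minimal generic MCTS loop with a UCB1 tree policy and a random-rollout default policy looks roughly as follows (a sketch; the Carcassonne-specific state, expansion, and reward code is left abstract):

    import math
    import random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children, self.visits, self.total_reward = [], 0, 0.0

    def uct_child(node, c=1.4):
        unvisited = [ch for ch in node.children if ch.visits == 0]
        if unvisited:
            return random.choice(unvisited)   # visit every child once first
        # UCB1: average reward (exploitation) + visit-count bonus (exploration)
        return max(node.children, key=lambda ch:
                   ch.total_reward / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))

    def mcts(root, expand, rollout, iterations=1000):
        for _ in range(iterations):
            node = root
            while node.children:              # selection via the tree policy
                node = uct_child(node)
            node = expand(node) or node       # expansion: add one child if any
            reward = rollout(node.state)      # simulation: default policy
            while node is not None:           # backpropagation
                node.visits += 1
                node.total_reward += reward
                node = node.parent
        return max(root.children, key=lambda ch: ch.visits)  # robust move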

In general, it is important to verify software, as it is prone to error. This also holds for solving tasks in classical planning. So far, plans themselves, as well as the fact that there is no plan for a given planning task, can be proven and independently verified. However, no such proof exists for the optimality of a solution. Our aim is to introduce two methods with which optimality can be proven and independently verified. We first reduce unit-cost tasks to unsolvable tasks, which enables us to make use of the already existing certificates for unsolvability. In a second approach, we propose a proof system for optimality, which enables us to infer that the determined cost of a task is optimal. This permits the direct generation of optimality certificates.

Pattern databases are among the most powerful heuristics in classical planning. They compute the exact cost of a simplified subproblem. The post-hoc optimization heuristic is a technique for optimally combining a set of pattern databases. In this thesis, we adapt the post-hoc optimization heuristic to the sliding tile puzzle. The sliding tile puzzle serves as a benchmark to compare the post-hoc optimization heuristic to established methods that also combine pattern databases. We then show how the post-hoc optimization heuristic improves over these established methods.

In this thesis, we generate landmarks for a logistics-specific task. Landmarks are actions that need to occur at least once in every plan. A landmark graph is a structure consisting of landmarks and edges between them called orderings. If there are cycles in a landmark graph, at least one landmark of every cycle needs to be achieved twice. The generated logistics-specific landmarks and their orderings are used to compute the cyclic landmark heuristic. Our task picks up on related work on the evaluation of the cyclic landmark heuristic. We compare the landmark graphs generated by a domain-independent landmark generator to those of a domain-specific landmark generator, with the focus on the latter, aiming to bridge the gap between domain-specific and domain-independent landmark generators. We also devise a unit to pre-process data for other domain-specific tasks. We show that the domain-specific approach is better suited than the domain-independent one.

Linear programming is a mathematical modelling technique in which a linear function is to be maximized or minimized subject to various constraints. The technique is particularly useful for making decisions in optimization problems. The goal of this work was to develop a tool for the game Factory Town with which optimization queries can be processed. The user can choose between various questions and have them answered by LP and IP solvers. We also address the mathematical formulations and the differences between the two methods. Finally, the generated results underline that LP solutions are at least as good as, or even better than, the solutions of an IP.

Symbolic search is an important approach to classical planning. Symbolic search uses algorithms that process sets of states at a time, which requires states to be represented by a compact data structure called a knowledge compilation. Merge-and-shrink representations come from a different area of planning, where they have been used to derive heuristic functions for state-space search. More generally, they represent functions that map variable assignments to a set of values; as such, we can regard them as a data structure we call Factored Mappings. In this thesis, we investigate Factored Mappings (FMs) as a knowledge compilation language with the aim of using them for symbolic search. We analyse the necessary transformations and queries for FMs by defining the needed operations and a canonical representation of FMs and showing that they run in polynomial time. We then show that it is possible to use Factored Mappings as a knowledge compilation for symbolic search by defining a symbolic search algorithm for finite-domain planning tasks that works with FMs.

Version control systems use a graph data structure to track revisions of files. Those graphs are mutated with various commands by the respective version control system. The goal of this thesis is to formally define a model of a subset of Git commands which mutate the revision graph, and to model those mutations as a planning task in the Planning Domain Definition Language. Multiple ways to model those graphs will be explored and those models will be compared by testing them using a set of planners.

Pattern databases are admissible abstraction heuristics for classical planning. In this thesis we introduce a boosting process, which consists of enlarging the pattern of a pattern database P, calculating a more informed pattern database P', and then min-compressing P' to the size of P, resulting in a compressed and still admissible pattern database P''. We design and implement two boosting algorithms, Hillclimbing and Randomwalk.

We combine pattern database heuristics using five different cost partitioning methods. The experiments compare cost partitionings computed over regular and boosted pattern databases. The experiments, performed on IPC (optimal track) tasks, show promising results: coverage (the number of solved tasks) increases by 9 for canonical cost partitioning with our Randomwalk boosting variant.

One-dimensional potential heuristics assign a numerical value, the potential, to each fact of a classical planning problem. The heuristic value of a state is the sum of the potentials belonging to the facts contained in the state. Fišer et al. (2020) recently proposed strengthening potential heuristics by utilizing mutexes and disambiguations. In this thesis, we embed the same enhancements in the planning system Fast Downward. The experimental evaluation shows that the strengthened potential heuristics are a refinement, but too computationally expensive to solve more problems than the non-strengthened potential heuristics.

The potentials are obtained with a linear program. Fišer et al. (2020) introduced an additional constraint on the initial state, and we propose additional constraints on random states. The additional constraints improve the number of solved problems by up to 5%.
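
For orientation, the general shape of a one-dimensional potential heuristic and its LP, following the cited work (notation ours; the exact constraint set used in the thesis may differ):

    % heuristic value: sum of the potentials of the facts in state s
    h^{\mathrm{pot}}(s) = \sum_{f \in s} \mathrm{pot}(f)

    % Admissibility is enforced by linear constraints, roughly:
    %   goal awareness:  \sum_{f \in G} \mathrm{pot}(f) \le 0  for goal facts G
    %   consistency:     h^{\mathrm{pot}}(s) - h^{\mathrm{pot}}(s[o]) \le \mathrm{cost}(o)  for every operator o
    % The LP objective, e.g. maximising h^{\mathrm{pot}}(s_0) for the initial
    % state s_0 (plus, as proposed here, sampled random states), selects one
    % admissible heuristic from this family.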

This thesis discusses the PINCH heuristic, a specific implementation of the additive heuristic that intends to combine the strengths of existing implementations. The goal of this thesis is to dig deeply into the PINCH heuristic. I want to provide an accessible resource for understanding PINCH and to analyze its performance by comparing it to the algorithm on which it is based, Generalized Dijkstra.

Suboptimal search algorithms can offer attractive benefits compared to optimal search, namely increased coverage of larger search problems and quicker search times. Improving such algorithms, for instance by pushing costs further towards optimal solutions or by reducing the number of node expansions, is therefore a compelling area for further research. This thesis explores the utility and scalability of the recently developed priority functions XDP, XUP, and PWXDP, and of the Improved Optimistic Search algorithm, compared to Weighted A*, in the Fast Downward planner. The analysis focuses on cost, total time, coverage, and node expansions, with experimental evidence suggesting preferable performance when strict optimality is not required. The implementation of these priority functions in eager best-first search showed marked improvements over A* search in coverage, total time, and number of expansions, without significant cost penalties. In line with previous suboptimal search research, the experimental evidence even indicates that the cost penalties do not reach the designated bound, even in larger search spaces.

In the automated planning field, algorithms and systems are developed for exploring state spaces and ultimately finding an action sequence leading from a task’s initial state to its goal. Such planning systems may sometimes show unexpected behavior, caused by a planning task or by a bug in the planner itself. Generally speaking, finding the source of a bug tends to be easier when the cause can be isolated or simplified. In this thesis, we tackle this problem by making PDDL and SAS+ tasks smaller while ensuring that they still invoke a certain characteristic when executed with a planner. We implement a system that successively removes elements, such as objects, from a task and checks whether the transformed task still fails on the planner. Elements are removed in a syntactically consistent way; however, no semantic integrity is enforced. Our system’s design is centered around the Fast Downward planning system, as we re-use some of its translator modules, and all test runs are performed with Fast Downward. At the core of our system, first-choice hill climbing is used for optimization. Our “minimizer” takes as arguments (1) a failing planner execution command, (2) a description of the failing characteristic, and (3) the type of element to be deleted. We evaluate our system’s functionality on the basis of three use cases. In our most successful test runs, (1) a SAS+ task with initially 1536 operators and 184 variables is reduced to 2 operators and 2 variables, and (2) a PDDL task with initially 46 actions, 62 objects and 29 predicate symbols is reduced to 2 actions, 6 objects and 4 predicates.

Fast Downward is a classical planning system based on heuristic search. Its successor generator is an efficient and intelligent tool for processing state spaces and generating successor states. In this thesis we implement different successor generators in the Fast Downward planning system and compare them against each other. Apart from the existing Fast Downward successor generator, we implement four others: a naive successor generator, one based on the marking procedure of delete-relaxation heuristics, one based on the PSVN planning system, and one based on watched literals as used in modern SAT solvers. These successor generators are tested on a variety of planning benchmarks to see how well they compete against each other. We verified that there is a trade-off between precomputation and faster successor generation and showed that each of the implemented successor generators has a use case; it is advisable to choose a successor generator that fits the style of the planning task.

Verifying whether a planning algorithm came to the correct result for a given planning task is easy if a plan is emitted which solves the problem. But if a task is unsolvable, most planners just state this fact without any explanation or proof. In this thesis we present extended versions of the symbolic search algorithms SymPA and symbolic bidirectional uniform-cost search which, if a given planning task is unsolvable, provide certificates that prove unsolvability. We also discuss a concrete implementation of this version of SymPA.

Classical planning is an attractive approach to solving problems because of its generality and its relative ease of use. Domain-specific algorithms are appealing because of their performance but require a lot of resources to be implemented. In this thesis we evaluate concept languages as a possible input language for expert domain knowledge in a planning system. We also explore mixed integer programming as a way to use this knowledge to improve search efficiency and to help the user find and refine useful domain knowledge.

Classical planning is a branch of artificial intelligence that studies single-agent, static, deterministic, fully observable, discrete search problems. A common challenge in this field is the explosion of the number of states to be considered when searching for the goal. One technique developed to mitigate this is strong stubborn set based pruning, where on each state expansion the considered successors are restricted to a strong stubborn set, which exploits the properties of independent operators to cut down the tree or graph search. We adapt the definitions of the theory of strong stubborn sets from the SAS+ setting to transition systems and validate a central theorem about the correctness of strong stubborn set based pruning for transition systems in the interactive theorem prover Isabelle/HOL.

Planning problems are an important field in artificial intelligence research. The goal is to build an artificially intelligent machine that can handle as many different problems as possible and solve them reliably by producing an optimal plan.

Trial-based Heuristic Tree Search (THTS) is a powerful tool for solving multi-armed-bandit-like problems, i.e. Markov decision processes with changing rewards. In current THTS, good rewards found during exploration can go unnoticed because of the large number of rewards, and likewise, bad rewards encountered during exploration can degrade good nodes in the search tree. This thesis introduces a method originating from the piecewise-stationary multi-armed bandit setting in order to further optimize THTS.

Abstractions are a simple yet powerful method of creating a heuristic to solve classical planning problems optimally. In this thesis we make use of Cartesian abstractions generated with counterexample-guided abstraction refinement (CEGAR). This method refines abstractions incrementally by finding flaws and resolving them until the abstraction is sufficiently evolved. The goal of this thesis is to implement and evaluate algorithms which choose among the possible resolutions of such flaws in a way that results in the best abstraction, that is, the abstraction with which the planner then solves the problem most efficiently. We measure the performance of a refinement strategy by running the Fast Downward planner on a problem and measuring how long it takes to generate the abstraction, as well as how many expansions the planner requires to find a goal using the abstraction as a heuristic. We use a suite of benchmark problems for evaluation, and we perform this experiment both for a single abstraction and for abstractions over multiple subtasks. Finally, we attempt to predict which refinement strategy should be used based on parameters of the task, potentially allowing the planner to automatically select the best strategy at runtime.

Heuristic search is a powerful paradigm in classical planning. The information generated by heuristic functions to guide the search towards a goal is a key component of many modern search algorithms. The paper “Using Backwards Generated Goals for Heuristic Planning” by Alcázar et al. proposes a way to make additional use of this information. They take the last actions of a relaxed plan as a basis to generate intermediate goals with a known path to the original goal. A plan is found when the forward search reaches an intermediate goal.

The premise of this thesis is to modify their approach by focusing on a single sequence of intermediate goals. The aim is to improve efficiency while preserving the benefits of backwards goal expansion. We propose different variations of our approach by introducing multiple ways to make decisions concerning the construction of intermediate goals. We evaluate these variations by comparing their performance and illustrate the challenges posed by this approach.

Counterexample-guided abstraction refinement (CEGAR) is a way to incrementally compute abstractions of transition systems. It starts with a coarse abstraction and then iteratively finds an abstract plan, checks where the plan fails in the concrete transition system and refines the abstraction such that the same failure cannot happen in subsequent iterations. As the abstraction grows in size, finding a solution for the abstract system becomes more and more costly. Because the abstraction grows incrementally, however, it is possible to maintain heuristic information about the abstract state space, allowing the use of informed search algorithms like A*. As the quality of the heuristic is crucial to the performance of informed search, the method for maintaining the heuristic has a significant impact on the performance of the abstraction refinement as a whole. In this thesis, we investigate different methods for maintaining the value of the perfect heuristic h* at all times and evaluate their performance.

Pattern databases are a powerful class of abstraction heuristics which provide admissible path cost estimates by computing exact solution costs for all states of a smaller task, obtained by abstracting away variables of the original problem. Abstractions with few variables offer weak estimates, while each additional variable is guaranteed to at least double the amount of memory needed for the pattern database. In this thesis, we present a class of algorithms based on counterexample-guided abstraction refinement (CEGAR) which exploit additivity relations between patterns to produce pattern collections from which we can derive heuristics that are both informative and computationally tractable. We show that our algorithms are competitive with existing pattern generators by comparing their performance on a variety of planning tasks.

We consider Rubik’s Cube in order to evaluate modern abstraction heuristics. To find feasible abstractions of the enormous state space spanned by Rubik’s Cube, we apply projection in the form of pattern databases, Cartesian abstraction via counterexample-guided abstraction refinement, as well as merge-and-shrink strategies. While previous publications on Cartesian abstractions have not covered applicability to planning tasks with conditional effects, we introduce factorized effect tasks and show that Cartesian abstraction can be applied to them. To evaluate the performance of the chosen heuristics, we run experiments on different problem instances of Rubik’s Cube. We compare them by the initial h-value found for all problems and analyze the number of expanded states up to the last f-layer. These criteria provide insights into the informativeness of the considered heuristics. Cartesian abstraction yields perfect heuristic values for problem instances close to the goal, but it is outperformed by pattern databases on more complex instances. Even though merge-and-shrink is the most general abstraction among those considered, it does not show better performance than the others.

Probabilistic planning expands on classical planning by tying probabilities to the effects of actions. Due to the exponential size of the state space, probabilistic planners have to come up with a strong policy in very limited time. One approach to optimizing the policy that can be found in the available time is metareasoning: a technique that aims to allocate more deliberation time to steps where more time to plan results in an improvement of the policy, and less deliberation time to steps where an improvement of the policy with more planning time is unlikely.

This thesis adapts a recent proposal of a formal metareasoning procedure by Lin et al. for the search algorithm BRTDP to work with the UCT algorithm in the Prost planner, and compares its viability to the current standard and to a number of less informed time management methods, in order to find a potential improvement over the current uniform distribution of deliberation time.

A planner tries to produce a policy that leads to a desired goal, given the available actions and an initial state. A traditional approach is to use abstraction. In this thesis we implement the algorithm described in the ASAP-UCT paper: Abstraction of State-Action Pairs in UCT by Ankit Anand, Aditya Grover, Mausam and Parag Singla.

The algorithm combines state and state-action abstraction with a UCT algorithm. We come to the conclusion that the algorithm needs to be improved, because the state-action abstraction often cannot detect similarities that a reasonable action abstraction could find.

Adding a form of exploration to guide a search has proven to be an effective method of combating heuristic plateaus and improving the performance of greedy best-first search. The goal of this thesis is to take the same approach and introduce exploration in bounded suboptimal search. Explicit estimation search (EES), introduced by Thayer and Ruml, consults potentially inadmissible information to determine the search order, while admissible heuristics are used to guarantee the cost bound. In this work we replace the distance-to-go estimator used in EES with an approach based on the concept of novelty.

Classical domain-independent planning is about finding a sequence of actions which leads from an initial state to a goal state. A popular approach for solving planning problems efficiently is to utilize heuristic functions. A possible heuristic function is the perfect heuristic of a delete-relaxed planning problem, denoted h+. Delete relaxation simplifies the planning problem, making it easier to find a perfect heuristic. However, computing h+ is still an NP-hard problem.

In this thesis we discuss a promising-looking approach to computing h+ in practice. Inspired by the paper by Gnad, Hoffmann and Domshlak on star-shaped planning problems, we implemented the flow-cut algorithm. The basic idea behind flow-cut is to divide a problem that is unsolvable in practice into smaller subproblems that can be solved. We tested the flow-cut algorithm on the domains provided by the International Planning Competition benchmarks, arriving at the following conclusion: a divide-and-conquer approach can successfully be used to solve classical planning problems, but it is not trivial to design such an algorithm to be more efficient than state-of-the-art search algorithms.

This thesis deals with the algorithm presented in the paper "Landmark-based Meta Best-First Search Algorithm: First Parallelization Attempt and Evaluation" by Simon Vernhes, Guillaume Infantes and Vincent Vidal. Their idea was to reconsider the approach to landmarks as a tool in automated planning, but in a markedly different way than previous work had done. Their result is a meta-search algorithm which explores landmark orderings to find a series of subproblems that reliably lead to an effective solution. Any complete planner may be used to solve the subproblems. While the referenced paper also deals with an attempt to effectively parallelize the Landmark-based Meta Best-First Search Algorithm, this thesis is concerned mainly with the sequential implementation and evaluation of the algorithm in the Fast Downward planning system.

Heuristics play an important role in classical planning. Using heuristics during state space search often reduces the time required to find a solution, but constructing heuristics and using them to calculate heuristic values takes time, reducing this benefit. Constructing heuristics and calculating heuristic values as quickly as possible is therefore very important to the effectiveness of a heuristic. In this thesis we introduce methods to bound the construction of merge-and-shrink, to reduce its construction time and increase its accuracy for small problems, and to bound the heuristic calculation of landmark cut, to reduce heuristic value calculation time. To evaluate the performance of these depth-bound heuristics, we have implemented them in the Fast Downward planning system together with three iterative-deepening heuristic search algorithms: iterative-deepening A* search, a new breadth-first iterative-deepening version of A* search, and iterative-deepening breadth-first heuristic search.

Greedy best-first search has proven to be a very efficient approach to satisficing planning, but it can lose some of its effectiveness when the heuristic function misleads it into a local minimum or onto a plateau. This is where exploration with additional open lists comes in, assisting greedy best-first search in solving satisficing planning tasks more effectively. Building on the idea of exploration by clustering similar states together as described by Xie et al. [2014], where states are clustered according to heuristic values, we propose to instead cluster states based on the Hamming distance of the binary representation of states [Hamming, 1950]. The resulting open list maintains k buckets and inserts each given state into the bucket with the smallest average Hamming distance between the already clustered states and the new state. Additionally, our open list can recluster all states periodically with the k-means algorithm. We achieved promising results in the number of expansions necessary to reach a goal state, but did not achieve higher coverage than fully random exploration due to slow performance, caused by the number of calculations required to identify the most fitting cluster when inserting a new state.
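
A sketch of the described open list (without the periodic k-means reclustering; the bucket selection rule on removal is our simplification):

    import random

    def hamming(a, b):
        # Hamming distance between two equal-length binary state vectors
        return sum(x != y for x, y in zip(a, b))

    class HammingBucketOpenList:
        def __init__(self, k):
            self.buckets = [[] for _ in range(k)]

        def insert(self, state):
            empty = next((b for b in self.buckets if not b), None)
            if empty is not None:
                empty.append(state)     # populate all k clusters first
                return
            # join the bucket with the smallest average Hamming distance
            best = min(self.buckets,
                       key=lambda b: sum(hamming(state, s) for s in b) / len(b))
            best.append(state)

        def pop(self):
            # exploration: draw from a random non-empty bucket
            nonempty = [b for b in self.buckets if b]
            return random.choice(nonempty).pop()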

Monte Carlo Tree Search (MCTS) algorithms are an efficient method for solving probabilistic planning tasks modeled as Markov decision processes. MCTS uses two policies: a tree policy for iterating through the known part of the decision tree, and a default policy to simulate the actions and their reward after leaving the tree. MCTS algorithms have been applied with great success to computer Go. To make the two policies fast, many enhancements based on online knowledge have been developed. The goal of All Moves As First (AMAF) enhancements is to improve the quality of the reward estimate in the tree policy. In this thesis, the α-AMAF, Cutoff-AMAF, and Rapid Action Value Estimation (RAVE) enhancements, which are very efficient in the field of computer Go, are implemented in the probabilistic planner PROST. To obtain a better default policy, Move Average Sampling is implemented in PROST and benchmarked against its current default policies.
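
At the heart of the α-AMAF enhancement is a simple blend of two estimates; the decay schedule shown is one common RAVE-style choice, not necessarily the one used in PROST:

    def alpha_amaf(mc_value, amaf_value, alpha):
        # blend the regular Monte Carlo estimate of an action with its AMAF
        # estimate (average over simulations where it occurred anywhere)
        return alpha * amaf_value + (1.0 - alpha) * mc_value

    def rave_alpha(visits, k):
        # the AMAF weight fades as real visits accumulate; k is the
        # "equivalence parameter" at which the weight reaches zero
        return max(0.0, (k - visits) / k)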

In classical planning the objective is to find a sequence of applicable actions that leads from the initial state to a goal state. In many cases the given problem can be of enormous size. To deal with such cases, a prominent method is heuristic search, which uses a heuristic function to evaluate states and can thereby focus on the most promising ones. In addition to applying heuristics, the search algorithm can apply pruning techniques that exclude applicable actions in a state because applying them at a later point in the path would result in a path consisting of the same actions in a different order. The question remains how these actions can be selected without generating so much additional work that they cease to be useful for the overall search. In this thesis we implement and evaluate the partition-based path pruning method proposed by Nissim et al. [1], which tries to decompose the set of all actions into partitions. Based on this decomposition, actions can be pruned with very little additional information. With some alterations to the A* search algorithm, the partition-based pruning method is guaranteed to preserve its optimality. The evaluation confirms that in several standard planning domains, the pruning method can reduce the size of the explored state space.

Validating real-time systems is an important and complex task which becomes exponentially harder with increasing system size. Finding an automated approach to check real-time systems for possible errors is therefore crucial. The behaviour of such real-time systems can be modelled with timed automata. This thesis adapts and implements the under-approximation refinement algorithm developed for search-based planners, proposed by Heusner et al., to find error states in timed automata via the directed model checking approach. The evaluation compares the algorithm to existing search methods and shows that a basic under-approximation refinement algorithm yields a competitive search method for directed model checking which is both fast and memory-efficient. Additionally, we illustrate that with some minor alterations the proposed under-approximation refinement algorithm can be improved further.

In this work we attempt to learn a heuristic. For a heuristic to be learnable, it must have parameters that determine it. Potential heuristics offer such a possibility; their parameters are called potentials. Pattern databases can detect properties of a state space with comparatively little effort and can therefore serve as a basis for learning potentials. This work examines two different approaches to learning the potentials from the information in pattern databases. In experiments, the two approaches are examined in detail and finally compared with the FF heuristic.

We consider real-time strategy (RTS) games which have temporal and numerical aspects and pose challenges which have to be solved within limited search time. These games are interesting for AI research because they are more complex than board games. Current AI agents cannot consistently defeat average human players, while even the best players make mistakes we think an AI could avoid. In this thesis, we will focus on StarCraft Brood War. We will introduce a formal definition of the model Churchill and Buro proposed for StarCraft. This allows us to focus on Build Order optimization only. We have implemented a base version of the algorithm Churchill and Buro used for their agent. Using the implementation we are able to find solutions for Build Order Problems in StarCraft Brood War.

In the field of automated planning, symbolic search is one of the most promising techniques in use. Implementing symbolic search on finite state spaces requires a suitable data structure for logical formulas. This work explores the use of Sentential Decision Diagrams (SDDs) for this purpose, in place of the common Binary Decision Diagrams (BDDs). SDDs are a generalization of BDDs. We empirically test how an implementation of symbolic search with SDDs in the Fast Downward planner behaves with different vtrees. In particular, we compare the performance of balanced vtrees, which often play to the strengths of SDDs, with right-linear vtrees, for which SDDs behave like BDDs.

The question of whether there are valid Sudokus - i.e. Sudokus with only one solution - with only 16 clues was answered in the negative in December 2011 by McGuire et al. using an exhaustive brute-force method. The difficulty of this task lies in the vast search space of the problem and the resulting need for an efficient proof idea as well as fast algorithms. In this work, the proof method of McGuire et al. is confirmed and implemented in C++ for 2²×2² and 3²×3² Sudokus.

Finding a shortest path between two points is a fundamental problem in graph theory. In practice, it is often important to minimize the resources consumed in determining such a path, which can be achieved with a compressed path database. In this work, we present three methods for constructing a path database in a space-efficient way and evaluate their effectiveness on problem instances of varying size and complexity.

In planning, we want to get from an initial state into a goal state. A state can be described by a finite number of Boolean variables. To transition from one state to another, we apply an action, which, at least in probabilistic planning, leads to a probability distribution over a set of possible successor states. From each transition the agent gains a reward depending on the current state and its action. In this setting, the number of possible states grows exponentially with the number of variables. We assume that the value of each variable is determined independently in a probabilistic fashion, so these variables influence the number of possible successor states in the same way as they do the state space. Consequently, it is almost impossible to obtain an optimal amount of reward by approaching this problem with a brute-force technique. One way past this problem is to abstract the problem and then solve the simplified version. That is, in essence, the idea proposed by Boutilier and Dearden [1], who introduced a method to create an abstraction which depends on the reward formula and the dependencies contained in the problem. With this idea as a basis, we create a heuristic for a trial-based heuristic tree search (THTS) algorithm [5] and a standalone planner using the framework PROST (Keller and Eyerich, 2012). These are then tested on all the domains of the International Probabilistic Planning Competition (IPPC).

A planning task consists of transforming a given state into a state that satisfies required goal properties by sequentially applying actions. Efficiency counts when solving planning tasks. To save time and memory, many planners use heuristic search, in which a heuristic estimates which action should be applied next in order to reach a desired state as quickly as possible.

This work implements the P^m compilation for planning tasks proposed by Haslum and tests the h^max heuristic on the compiled problem against the h^m heuristic on the original problem. The implementation extends the Fast Downward planning system. The test results show that the compilation can increase the number of solved problems. At the same level of informativeness, solving a compiled problem with the h^max heuristic is generally faster than solving the original problem with the h^m heuristic. This time gain is bought with a higher memory consumption.

The objective of classical planning is to find a sequence of actions which begins in a given initial state and ends in a state that satisfies a given goal condition. A popular approach to solve classical planning problems is based on heuristic forward search algorithms. In contrast, regression search algorithms apply actions “backwards” in order to find a plan from a goal state to the initial state. Currently, regression search algorithms are somewhat unpopular, as the generation of partial states in a basic regression search often leads to a significant growth of the explored search space. To tackle this problem, state subsumption is a pruning technique that additionally discards newly generated partial states for which a more general partial state has already been explored.

In this thesis, we discuss and evaluate techniques of regression and state subsumption. In order to evaluate their performance, we have implemented a regression search algorithm for the planning system Fast Downward, supporting both a simple subsumption technique and a refined subsumption technique using a trie data structure. The experiments show that a basic regression search algorithm generally increases the number of explored states compared to uniform-cost forward search, while regression with pruning based on state subsumption with a trie data structure significantly reduces the number of explored states compared to basic regression.
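
To make the two core operations concrete, here is a sketch of regression and subsumption over partial states represented as variable-value dictionaries (a simplified SAS+-style setting, not the Fast Downward implementation):

    def regress(partial_state, preconditions, effects):
        # the action must achieve at least one required fact ...
        if not any(partial_state.get(v) == val for v, val in effects.items()):
            return None                  # irrelevant achiever
        # ... and must not make any required variable wrong
        for v, val in effects.items():
            if v in partial_state and partial_state[v] != val:
                return None              # effect contradicts a condition
        # achieved conditions are dropped, preconditions must hold before
        result = {v: val for v, val in partial_state.items()
                  if v not in effects}
        for v, val in preconditions.items():
            if v in result and result[v] != val:
                return None              # inconsistent regression
            result[v] = val
        return result

    def subsumes(general, specific):
        # a more general partial state lets us prune a more specific one
        return all(specific.get(v) == val for v, val in general.items())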

This thesis discusses the Traveling Tournament Problem and how it can be solved with heuristic search. The Traveling Tournament Problem is a sports scheduling problem in which one tries to find a schedule for a league that meets certain constraints while minimizing the overall distance traveled by the teams in the league. It is hard to solve for leagues with many teams, since its complexity grows exponentially in the number of teams. The largest instances solved to date involve leagues of up to 10 teams.

Previous related work has shown that it is a reasonable approach to solve the Traveling Tournament Problem with an IDA*-based tree search. In this thesis, I implemented such a search and extended it with several enhancements to examine whether they improve the performance of the search. The heuristic I used in my implementation is the Independent Lower Bound heuristic, which tries to find lower bounds on the traveling costs of each team in the considered league. With my implementation I was able to solve problem instances with up to 8 teams. The results of my evaluation have mostly been consistent with the expected impact of the implemented enhancements on the overall performance.

Classical planning is a major topic in Artificial Intelligence. It is the process of finding a plan, that is, a sequence of actions that leads from an initial state to a goal state for a specified problem. In problems with a huge number of states, it is very difficult and time-consuming to find a plan. There are different pruning methods that attempt to lower the amount of time needed to find a plan by reducing the number of states to explore. In this work we take a closer look at two of these pruning methods. Both rely on the last action that led to the current state. The first is so-called tunnel pruning, a generalisation of the tunnel macros used to solve Sokoban problems. The idea is to find actions that allow a tunnel and then prune all actions that are not in the tunnel of this action. The second method is partition-based path pruning, in which all actions are distributed into different partitions. These partitions can then be used to prune actions that do not belong to the current partition.

The evaluation of these two pruning methods shows that they can reduce the number of explored states for some problem domains; however, the difference between pruned search and normal search gets smaller when we use heuristic functions. It also shows that the two pruning rules affect different problem domains.

The goal of classical planning is to solve given planning problems as efficiently as possible. The solution, or plan, of a planning problem is a sequence of operators that leads from an initial state to a goal state. To find a goal state in a more directed way, some search algorithms use additional information about the state space: a heuristic. Starting from a state, it estimates the distance to the goal state. Ideally, every newly visited state would have a smaller heuristic value than the previously visited one. However, there are search scenarios in which the heuristic does not help to get closer to a goal. This is particularly the case when the heuristic value of neighbouring states does not change. For greedy best-first search this means that the search proceeds blindly on such plateaus, because this algorithm relies exclusively on the heuristic. Algorithms that use a heuristic as a guide belong to the class of heuristic search algorithms.

This thesis is about retaining a sense of direction in the state space even in cases such as plateaus, by subjecting states to an additional prioritisation besides the heuristic. The method presented here exploits dependencies between operators and extends greedy best-first search. How strongly operators depend on each other is captured by a distance measure that is computed before the actual search. The basic idea is to prefer states whose operators have previously benefited from each other. The heuristic then only acts as a tie-breaker, so that we can follow a promising path first without the heuristic sending us to search at another, less promising place.

The results show that, depending on the heuristic, our approach can outperform a purely heuristic-driven search in pure search time. With very informative heuristics, however, our approach can actually disturb the search. Moreover, many problems remain unsolved because computing the distances is too time-consuming.

In classical planning, heuristic search is a popular approach to solving problems very efficiently. The objective of planning is to find a sequence of actions that can be applied to a given problem and that leads to a goal state. For this purpose, there are many heuristics. They are often a big help if a problem has a solution, but what happens if a problem does not have one? Which heuristics can help prove unsolvability without exploring the whole state space? How efficient are they? Admissible heuristics can be used for this purpose because they never overestimate the distance to a goal state and are therefore able to safely cut off parts of the search space. This makes it potentially easier to prove unsolvability.

In this project we developed a problem generator to automatically create unsolvable problem instances and used those generated instances to see how different admissible heuristics perform on them. We used the Japanese puzzle game Sokoban as the first problem because it has high complexity but is still easy for humans to understand and visualize. As the second problem, we used a logistics problem called NoMystery because, unlike Sokoban, it is a resource-constrained problem and therefore a good supplement to our experiments. Furthermore, unsolvability occurs rather 'naturally' in these two domains and does not seem forced.

Sokoban is a computer game where each level consists of a two-dimensional grid of fields. There are walls as obstacles, movable boxes and goal fields. The player controls the warehouse keeper (Sokoban in Japanese), who pushes the boxes to the goal fields. The problem is very complex, which is why Sokoban has become a standard domain in planning.

Phase transitions mark a sudden change in solvability when traversing the problem space. They occur in the region of hard instances and have been found for many domains. In this thesis we investigate phase transitions in the Sokoban puzzle. For our investigation we generate and evaluate random instances. We identify the defining parameters of Sokoban and measure their influence on solvability. We show that phase transitions in the solvability of Sokoban can be found, and we measure their occurrence. We attempt to unify the parameters of Sokoban to obtain a prediction of the solvability and hardness of specific instances.

In planning, we address the problem of automatically finding a sequence of actions that leads from a given initial state to a state that satisfies some goal condition. In satisficing planning, our objective is to find plans with preferably low, but not necessarily the lowest possible, costs while keeping in mind our limited resources like time or memory. A prominent approach to satisficing planning is based on heuristic search with inadmissible heuristics. However, depending on the applied heuristic, plans found with heuristic search might be of low quality, and hence improving the quality of such plans is often desirable. In this thesis, we adapt and apply iterative tunneling search with A* (ITSA*) to planning. ITSA* is an algorithm for plan improvement originally proposed by Furcy et al. for search problems. ITSA* searches the local space around a given solution path in order to find "shortcuts" which allow us to improve our solution. In this thesis, we provide an implementation and systematic evaluation of this algorithm on the standard IPC benchmarks. Our results show that ITSA* also works successfully in the planning area.

In action planning, greedy best-first search (GBFS) is one of the standard techniques if suboptimal plans are accepted. GBFS uses a heuristic function to guide the search towards a goal state. To achieve generality, in domain-independent planning the heuristic function is generated automatically. A well-known problem of GBFS is search plateaus, i.e., regions in the search space where all states have equal heuristic values. In such regions, heuristic search can degenerate to uninformed search. Hence, techniques to escape from such plateaus are desirable to improve the efficiency of the search. A recent approach to avoiding plateaus is based on diverse best-first search (DBFS), proposed by Imai and Kishimoto. However, this approach relies on several parameters. This thesis presents an implementation of DBFS in the Fast Downward planner. Furthermore, it presents a systematic evaluation of DBFS for several parameter settings, leading to a better understanding of the impact of the parameter choices on search performance.

Risk is a popular board game where players conquer each other's territories. In this project, I created an AI that plays Risk and is capable of learning. For each decision it makes, it performs a simple one-step lookahead search, examining the outcomes of all possible moves and picking the most beneficial one. It judges the desirability of outcomes by a set of parameters, which are modified after each game using the TD(λ) algorithm, allowing the AI to learn.
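
The TD(λ) update itself is compact. Below is a minimal sketch assuming a linear evaluation function over hand-crafted features; the feature vectors, reward, and step sizes are illustrative assumptions, not the actual parameters used in the project.

```python
import numpy as np

def value(w, x):                         # linear evaluation of a game state
    return w @ x

def td_lambda(w, states, rewards, alpha=0.1, gamma=1.0, lam=0.7):
    """states: feature vectors x_0..x_T; rewards[t] is the reward received
    on the transition from x_t to x_{t+1}."""
    e = np.zeros_like(w)                 # eligibility trace
    for t in range(len(states) - 1):
        delta = rewards[t] + gamma * value(w, states[t + 1]) - value(w, states[t])
        e = gamma * lam * e + states[t]  # gradient of a linear value is x_t
        w = w + alpha * delta * e        # move weights along the trace
    return w

w = np.zeros(2)
states = [np.array([1., 0.]), np.array([0., 1.])]
print(td_lambda(w, states, rewards=[1.0]))   # weights move toward the reward
```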

The Canadian Traveler's Problem (CTP) is a path-finding problem where, due to unfavorable weather, some of the roads are impassable. At the beginning, the agent does not know which roads are traversable and which are not. Instead, it can observe the status of roads adjacent to its current location. We consider the stochastic variant of the problem, where the blocking status of a connection is determined randomly with known probabilities. The goal is to find a policy which minimizes the expected travel costs of the agent.

We discuss several properties of the stochastic CTP and present an efficient way to calculate state probabilities. With the aid of these theoretical results, we introduce an uninformed algorithm for finding optimal policies.

Finding optimal solutions for general search problems is a challenging task. A powerful approach for solving such problems is based on heuristic search with pattern database heuristics. In this thesis, we present a domain-specific solver for the TopSpin Puzzle. This solver is based on the above-mentioned pattern database approach. We investigate several pattern databases and evaluate them on problem instances of different sizes.

Merge-and-shrink abstractions are a popular approach to generating abstraction heuristics for planning. The computation of merge-and-shrink abstractions relies on a merging and a shrinking strategy. A recently investigated shrinking strategy is based on bisimulations, which are guaranteed to produce perfect heuristics. In this thesis, we investigate an efficient algorithm proposed by Dovier et al. for computing coarsest bisimulations. The algorithm, however, cannot directly be applied to planning and needs some adjustments. We show how this algorithm can be adapted to work with planning problems. In particular, we show how an edge-labelled state space can be translated into a state-labelled one, and what other changes are necessary for the algorithm to be usable for planning problems. This includes a custom data structure needed to meet the worst-case complexity bound. Furthermore, the implementation is evaluated on planning problems from the International Planning Competitions. We will see that the resulting algorithm often cannot compete with the algorithm currently implemented in Fast Downward. We discuss the reasons why this is the case and propose possible solutions to resolve this issue.

In order to understand an algorithm, it is always helpful to have a visualization that shows step by step what the algorithm is doing. With this in mind, this Bachelor project explains and visualizes two AI techniques, constraint satisfaction processing and SAT backbones, using the game Gnomine as an example.

CSP techniques build up a network of constraints and infer information by propagating through one or several constraints at a time, reducing the domains of the variables in the constraint(s). SAT backbone computations find literals in a propositional formula that are true in every model of the given formula.

By showing how to apply these algorithms to the problem of solving a Gnomine game, I hope to give better insight into how the chosen algorithms work.

Planning as heuristic search is a powerful approach to solving domain-independent planning problems. An important class of heuristics is based on abstractions of the original planning task. However, abstraction heuristics usually come with a loss in precision. The contribution of this thesis is the investigation of constrained abstraction heuristics in general, and the application of this concept to pattern database and merge-and-shrink abstractions in particular. The idea is to use a subclass of mutexes (sets of variable-value pairs of which only one can be true at any given time) to regain some of the precision that is lost in the abstraction without increasing its size. By removing states and operators in the abstraction which conflict with such a mutex, the abstraction is refined and hence the corresponding abstraction heuristic can become more informed. We have implemented the refinements of these heuristics in the Fast Downward planner and evaluated the different approaches using standard IPC benchmarks. The results show that the concept of constrained abstraction heuristics can improve planning as heuristic search in terms of time and coverage.

A permutation problem considers the task where an initial order of objects (i.e., an initial mapping of objects to locations) must be reordered into a given goal order by using permutation operators. Permutation operators are 1:1 mappings of the objects from their locations to (possibly other) locations. Examples of permutation problems are the well-known Rubik's Cube and the TopSpin Puzzle. Permutation problems have been a research area for a while, and several methods for solving such problems have been proposed over the last two centuries. Most of these methods focused on finding optimal solutions, causing an exponential runtime in the worst case.

In this work, we consider an algorithm for solving permutation problems that was originally proposed by M. Furst, J. Hopcroft and E. Luks in 1980. This algorithm was introduced on a theoretical level within a proof for "Testing Membership and Determining the Order of a Group", but has not been implemented and evaluated on practical problems so far. In contrast to the other above-mentioned solving algorithms, it only finds suboptimal solutions, but is guaranteed to run in polynomial time. The basic idea is to iteratively reach subgoals, and then keep them fixed while moving on to the next goals. We have implemented this algorithm and evaluated it on different models, such as the Pancake Problem and the TopSpin Puzzle.

Pattern databases (Culberson & Schaeffer, 1998), or PDBs, have proven very effective for creating admissible heuristics for single-agent search algorithms such as A*. Haslum et al. proposed that a hill-climbing algorithm can be used to construct the PDBs, combined using the canonical heuristic. A different approach is to change action costs in the pattern-related abstractions in order to obtain an admissible heuristic; this is the so-called cost partitioning.

The aim of this project was to implement cost partitioning inside Haslum's hill-climbing algorithm and to compare the results with the standard approach that uses the canonical heuristic.

UCT ("upper confidence bounds applied to trees") is a state-of-the-art algorithm for acting under uncertainty, e.g. in probabilistic environments. In the last years it has been very successfully applied in numerous contexts, including two-player board games like Go and Mancala and stochastic single-agent optimization problems such as path planning under uncertainty and probabilistic action planning.

In this project the UCT algorithm was implemented, adapted and evaluated for the classical arcade game "Ms Pac-Man". The thesis introduces Ms Pac-Man and the UCT algorithm, discusses some critical design decisions for developing a strong UCT-based algorithm for playing Ms Pac-Man, and experimentally evaluates the implementation.
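
At the heart of UCT is the UCB1 selection rule, which trades off the average reward of a move against an exploration bonus for rarely tried moves. A minimal sketch, assuming a hypothetical tree node with visits, total_reward and children fields:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:                              # a hypothetical stand-in node layout
    visits: int = 0
    total_reward: float = 0.0
    children: list = field(default_factory=list)

def uct_select(node, c=math.sqrt(2)):
    """Pick the child maximizing average reward plus an exploration bonus."""
    def ucb1(child):
        if child.visits == 0:
            return float("inf")          # always try unvisited moves first
        exploit = child.total_reward / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(node.children, key=ucb1)

root = Node(visits=10, children=[Node(5, 3.0), Node(5, 4.0)])
print(uct_select(root).total_reward)     # -> 4.0 (better average, same bonus)
```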

8 Best Topics for Research and Thesis in Artificial Intelligence

Imagine a future in which intelligence is not restricted to humans! A future where machines can think as well as humans and work with them to create an even more exciting universe. While this future is still far away, Artificial Intelligence has already made a lot of progress. There is a lot of research being conducted in almost all fields of AI, like Quantum Computing, Healthcare, Autonomous Vehicles, Internet of Things, Robotics, etc. So much so that the number of annually published research papers on Artificial Intelligence has grown by 90% since 1996.

Keeping this in mind, if you want to research and write a thesis based on Artificial Intelligence, there are many sub-topics that you can focus on. Some of these topics along with a brief introduction are provided in this article. We have also mentioned some published research papers related to each of these topics so that you can better understand the research process.

Table of Contents

1. Machine Learning
2. Deep Learning
3. Reinforcement Learning
4. Robotics
5. Natural Language Processing (NLP)
6. Computer Vision
7. Recommender Systems
8. Internet of Things

So without further ado, let’s see the different Topics for Research and Thesis in Artificial Intelligence!

Machine Learning involves the use of Artificial Intelligence to enable machines to learn a task from experience without being programmed specifically for that task. (In short, machines learn automatically without human hand-holding!) This process starts with feeding them good-quality data and then training the machines by building various machine learning models using the data and different algorithms. The choice of algorithms depends on what type of data we have and what kind of task we are trying to automate.

Generally speaking, Machine Learning algorithms are divided into three types: Supervised Machine Learning Algorithms, Unsupervised Machine Learning Algorithms, and Reinforcement Machine Learning Algorithms.
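
As a small illustration of the supervised case, the following sketch trains a decision tree on labelled data with scikit-learn; the dataset and model choices are just illustrative.

```python
# A minimal supervised-learning sketch: the model is trained from
# labelled examples rather than hand-coded rules.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier().fit(X_train, y_train)   # learn from data
print("test accuracy:", clf.score(X_test, y_test))
```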

Deep Learning is a subset of Machine Learning that learns by imitating the inner workings of the human brain in order to process data and make decisions based on that data. Basically, Deep Learning uses artificial neural networks to implement machine learning. These neural networks are connected in a web-like structure like the networks in the human brain (basically a simplified version of our brain!).

This web-like structure of artificial neural networks means that they are able to process data in a non-linear way, which is a significant advantage over traditional algorithms that can only process data linearly. An example of a deep neural network is RankBrain, which is one of the factors in the Google Search algorithm.
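
The non-linearity comes from the activation functions between the layers. Here is a toy forward pass through a two-layer network in plain NumPy; the weights are random placeholders, just to make the structure visible.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer parameters
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # output layer parameters

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)            # ReLU makes it non-linear
    return W2 @ h + b2

print(forward(np.array([1.0, 2.0, 3.0])))
```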

Reinforcement Learning is a part of Artificial Intelligence in which the machine learns in a way that is similar to how humans learn. As an example, assume the machine is a student. Here the hypothetical student learns from its own mistakes over time (like we had to!). So reinforcement learning algorithms learn optimal actions through trial and error.

This means that the algorithm decides the next action by learning behaviors that are based on its current state and that will maximize the reward in the future. And like humans, this works for machines as well! For example, Google’s AlphaGo computer program was able to beat the world champion in the game of Go (that’s a human!) in 2017 using Reinforcement Learning.
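
As a toy illustration of learning from trial and error, here is a minimal tabular Q-learning update; the state and action counts, reward, and learning rate are made-up examples (AlphaGo itself combines far more sophisticated methods).

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))     # action values, learned from experience
alpha, gamma = 0.1, 0.9                 # learning rate and discount factor

def q_update(s, a, reward, s_next):
    # Move Q(s, a) toward the reward plus the best value of the next state.
    target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

q_update(s=0, a=1, reward=1.0, s_next=2)
print(Q[0])                             # -> [0.  0.1]
```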

Robotics is a field that deals with creating humanoid machines that can behave like humans and perform some actions like human beings. Now, robots can act like humans in certain situations, but can they think like humans as well? This is where Artificial Intelligence comes in! AI allows robots to act intelligently in certain situations. These robots may be able to solve problems in a limited sphere or even learn in controlled environments.

An example of this is Kismet, a social interaction robot developed at M.I.T's Artificial Intelligence Lab. It recognizes human body language and voice and interacts with humans accordingly. Another example is Robonaut, which was developed by NASA to work alongside astronauts in space.

It's obvious that humans can converse with each other using speech, but now machines can too! This is known as Natural Language Processing, where machines analyze and understand language and speech as it is spoken (now if you talk to a machine, it may just talk back!). There are many subfields of NLP that deal with language, such as speech recognition, natural language generation, natural language translation, etc. NLP is currently extremely popular in customer support applications, particularly chatbots. These chatbots use ML and NLP to interact with users in textual form and solve their queries. So you get the human touch in your customer support interactions without ever directly interacting with a human.

Some Research Papers published in the field of Natural Language Processing are provided here. You can study them to get more ideas about research and thesis on this topic.
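
For a flavour of the kind of component such chatbots build on, here is a minimal bag-of-words text classifier with scikit-learn; the example queries and intent labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["reset my password", "cancel my order",
         "change my password", "track my order"]
labels = ["account", "orders", "account", "orders"]

# Bag-of-words features feeding a simple linear classifier.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["how do I reset the password"]))   # likely ['account']
```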

The internet is full of images! This is the selfie age, where taking an image and sharing it has never been easier. In fact, millions of images are uploaded and viewed every day on the internet. To make the most of this huge number of images online, it's important that computers can see and understand them. And while humans can do this easily without a thought, it's not so easy for computers! This is where Computer Vision comes in.

Computer Vision uses Artificial Intelligence to extract information from images. This information can include detecting objects in an image, identifying image content to group similar images together, and more. An application of computer vision is navigation for autonomous vehicles through the analysis of images of the surroundings, such as AutoNav used in the Spirit and Opportunity rovers which landed on Mars.
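
As a tiny taste of how information is extracted from pixels, the following sketch convolves a made-up grayscale image with a hand-written Sobel-style filter to highlight a vertical edge.

```python
import numpy as np

image = np.zeros((5, 6))
image[:, 3:] = 1.0                      # bright region: a vertical edge at col 3

kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])         # responds to left-to-right changes

edges = np.zeros((3, 4))                # valid convolution output
for i in range(3):
    for j in range(4):
        edges[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
print(edges)                            # large values where the edge lies
```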

When you are using Netflix, do you get recommendations of movies and series based on your past choices or the genres you like? This is done by Recommender Systems, which provide guidance on what to choose next among the vast choices available online. A Recommender System can be based on content-based recommendation or on collaborative filtering.

Content-based recommendation works by analyzing the content of all the items. For example, you can be recommended books you might like based on Natural Language Processing done on the books. Collaborative filtering, on the other hand, works by analyzing the past behavior of many users and recommending items to you that similar users have enjoyed.
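
A minimal sketch of the collaborative idea: compare a user's rating vector against those of other users and look at the most similar one; the ratings matrix below is invented for illustration.

```python
import numpy as np

ratings = np.array([[5, 4, 0, 1],       # rows: users, cols: items, 0 = unseen
                    [4, 5, 0, 1],
                    [1, 0, 5, 4]], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

target = 0
sims = [cosine(ratings[target], r) for r in ratings]
sims[target] = -1.0                      # ignore self-similarity
neighbour = int(np.argmax(sims))
print("most similar user:", neighbour)   # -> user 1; recommend their items
```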

Artificial Intelligence deals with the creation of systems that can learn to emulate human tasks using their prior experience and without manual intervention. The Internet of Things, on the other hand, is a network of various devices that are connected over the internet and can collect and exchange data with each other.

Now, all these IoT devices generate a lot of data that needs to be collected and mined for actionable results. This is where Artificial Intelligence comes into the picture. Internet of Things is used to collect and handle the huge amount of data that is required by the Artificial Intelligence algorithms. In turn, these algorithms convert the data into useful actionable results that can be implemented by the IoT devices.

Thesis Guidelines

Presentation of Prof. Jesse Davis about academic writing and presenting.

academic writing and presenting

The readers of your thesis:

Every thesis is usually evaluated by three people: either the promotor, an internal reader (same research unit), and an external reader (other research unit), or, in the case of two promotors, the promotors and an external reader. Sometimes there are four readers: two promotors and two readers. The names of the promotor(s) and reader(s) will need to appear in the preface of your thesis. This information will be made available to you in the second half of May.

Handing in the thesis:

There's no need to provide a hard copy.

You only need to submit your thesis electronically via KU Loket and send it via e-mail to the jury (i.e. the promotor(s), the daily advisor(s) and the assessor(s), with [email protected] in cc).

Thesis format:

All students need to complete a thesis of approximately 50 pages of thesis text (not including title, foreword, ...).

If the promotor deems it appropriate, it may also be possible to write a 20-page paper of publishable quality instead. The standard, however, is the 50-page format.

The Faculty of Engineering has decided that the cover and preface pages of the thesis must have a fixed format. The guidelines for this format can be found at https://eng.kuleuven.be/en/study/masters-theses/facultary-template

Electronically uploading the master's thesis:

All master's students in the Faculty of Engineering are required to upload their thesis electronically in KU Loket. Information on uploading the thesis can be found at https://www.kuleuven.be/english/education/student/examinations/submitting-the-electronic-copy-of-your-masters-thesis#guidelines

Master of Artificial Intelligence

What is the true nature of intelligence? Is it simply learning processes, cognition models, natural language representation and reasoning? Or is it more? The Master of Artificial Intelligence programme trains a wide variety of students in all areas of knowledge-based technology and the wider application of that technology across multiple fields, including in the development of intelligent robots.

  • About the programme

  • Admission and application
  • Tuition fees
  • After graduation
  • Why KU Leuven
  • More information

About this programme

The Master of Artificial Intelligence programme at KU Leuven explores and builds on the fascinating challenges of cognitive processes and models, natural language and perception. For many years, it has provided an internationally acclaimed advanced study programme in artificial intelligence. The multidisciplinary programme trains students from a variety of backgrounds - including engineering, sciences, economics, management, psychology, and linguistics - in all areas of knowledge-based technology, cognitive science, and their applications. The one-year programme, taught entirely in English, is the result of a collaboration between many internationally prominent research units from seven different faculties of the university. It allows you to focus on engineering and computer science, cognitive science, or speech and language technology.

The option Big Data Analytics is intended for graduates in Computer Science and trains them to become specialists in the analysis of Big Data. It instructs students in statistics, machine learning, data mining and advanced programming techniques for dealing with Big Data. It offers a range of application domains in information retrieval, bio-informatics, computer vision and others.

Understanding the principles of intelligence, developing artificial intelligence and applying it in different areas requires a multi-disciplinary approach. The Master of Artificial Intelligence programme is internationally oriented and interdisciplinary, and aims at educating students from a wide variety of backgrounds. A basic introductory part of the programme consists of the fundamentals of artificial intelligence and a broadening course that the student selects from either cognitive science, philosophy of mind and artificial intelligence, or privacy and big data. Specific to the Big Data Analytics option, it also contains the course Data and Statistical Modelling. Within the programme, students can choose between three options.

Programme Master of Artificial Intelligence

Curious about which courses you will follow or which options you will have? Do you want to know what a typical week will look like?

Your programme

Rewatch the webinar about the Faculty of Engineering Science programmes given on 29 November 2023. 

Engineering and computer science option (ECS)

The ECS option is intended for students with a background in engineering or the exact sciences. After an introduction to the basic AI concepts, tools and application areas, topics of focus include advanced programming languages, knowledge-based systems, artificial neural networks and deep learning, robotics, computer vision, machine learning, data mining, support vector machines, bioinformatics, genetic algorithms and evolutionary computation, multi-agent systems, and others.

Speech and language technology option (SLT)

The central focus of the  SLT option  is the processing of natural language, both in its written and spoken form. The programme offers a solid linguistic basis, covering the fields of syntax, semantics, morphology, phonetics and lexicography, and continues with advanced courses in natural language processing, speech recognition, speech synthesis and language engineering. 

Big Data Analytics option (BDA)

The BDA option trains students in state-of-the-art data analysis techniques, programming techniques and applications that deal with very large data collections.

The option is oriented towards graduates of Computer Science programmes. It instructs students in the central concepts of statistical data analysis, machine learning and data mining. It trains students to program learning and data mining techniques that need to cope with big data collections. It includes deeper studies into a number of applications regarding big data and advanced analysis techniques.

Admission criteria

The admission policy is intended to ensure equal opportunity of access to higher education for qualified European and third-country students. There are two ways to be admitted to the programme:

  • Direct admission, on the basis of a relevant degree obtained in the Flemish Community. You can check the list of relevant programmes here.
  • After an admission process, which is meant for students who obtained a degree outside the Flemish Community.

All applications from students who have obtained a degree outside the Flemish Community are evaluated by both the KU Leuven Admissions Office and the relevant master's programme director. Final admission decisions will be made at the discretion of the Faculty.

Official and current admission requirements . 

Student profile

As a student you should be able to formulate research goals, determine trajectories that achieve such goals, collect and select information relevant to achieve research goals and interpret collected information on the basis of a critical research attitude. When selecting the ECS option you are expected to be familiar with basic undergraduate level mathematics. When selecting the BDA option, you are expected to already have a degree in Computer Science.

Upon successfully completing the programme, you should be able to understand the concepts, methods, and applicability of the fundamentals of artificial intelligence, together with a broadening view of either cognitive science, philosophy of mind and artificial intelligence, or privacy and big data. You will become familiar with AI programming languages and several advanced areas in AI, including current research directions.

Application deadline

For most recent - and only official - information on application deadlines, check  KU Leuven - Application Deadlines .

Application procedure

Check the  application instructions  for Bachelor, Master, Postgraduate programmes and Credit Contracts. There is also a  video  that explains the application procedure from start to finish. 

It is worth noting that our tuition fees are lower than in many other European countries, thanks to the generous financing of the higher education system by our government. Although all of our programmes are very affordable compared to equivalent universities around the world, the fees for any individual student depend on their choice of academic programme and their nationality.

For most recent and only official information on the tuition fees, check the  KU Leuven - Tuition Fees .

Scholarships

Our aim is to offer affordable tuition fees for all students, which means we only have a limited number of  scholarships  available for students from particular backgrounds or studying in particular fields.

Excellent students who are eligible for a Master Mind Scholarship are invited to submit their application before 1 February.

Career perspectives

With a Master of Artificial Intelligence degree, you will be welcomed by companies in the areas of information technology, data mining, speech and language technology, intelligent systems, diagnosis and quality control, fraud detection, and biometric systems. You can also work in banking, or provide support in the process industry, biomedicine and bioinformatics, robotics, and traffic systems. Furthermore, this degree offers opportunities to start a PhD programme.

Career support

Our Student Career Center and our student organisation VTK are both happy to put you on the right track towards your first work experience. They also coach you in the search for an interesting job and help you with job interviews.

  • Student Career Center
  • KU Leuven Career Zone
  • VTK Student Career support
  • VTK Student Corporate relations

Entrepreneurship

For students wanting to start their own business, KU Leuven also has KICK, the KU Leuven community that encourages students with innovative and entrepreneurial ideas.

KU Leuven Kick

Our alumni network 

The Faculty of Engineering Science constantly aims to strengthen contacts between alumni, as well as between alumni and current students.

  • Engineering alumni network
  • KU Leuven alumni

KU Leuven is one of Europe’s highest-ranked and most renowned universities. It boasts a long tradition of pioneering research and high-quality education. But KU Leuven has quite a few other strengths as well.

  • Discover our strengths
  • A virtual tour of our faculties
  • Why you should come to Belgium

Our campus in Leuven 

The city of Leuven is home to the main and largest KU Leuven campus.

  • More on the Leuven campus
  • KU Leuven walking tours

Life at KU Leuven

  • Meeting people
  • Immigration and residence
  • Welcome activities for new students

Faculty of Engineering Science

This programme is organised by the Faculty of Engineering Science, located at our beautiful and green campus in Heverlee, south of Leuven.

Website of the Faculty of Engineering Science

Publications

Find out more about studying at KU Leuven and quickly find the practical information you need.  

  • Download the International Programmes brochure
  • All publications

Chat with our students

Do you have any questions about student life in Belgium, life at KU Leuven or do you want more information about a specific course or programme? You can ask our students directly.

Chat with our student ambassadors

Stay informed

Want to stay informed on KU Leuven, our programmes, the deadlines when applying, ...? Leave your details to receive regular updates.

Keep me informed

Questions about the programme?

Ask about the classes you will take, the subjects you will study, or the campus where you will be spending your time.

Questions about studying at KU Leuven?

For those wondering about how KU Leuven can help you feel at home whilst studying in Belgium.

Questions about admissions?

Where to start, what to include, and more support on the road to enrolling at KU Leuven.

Available Master's thesis topics in machine learning

Here we list topics that are available. You may also be interested in our list of completed Master's theses.

Approximation algorithms for learning Bayesian networks

Bayesian networks are probabilistic models that are used to represent multivariate distributions. The core of a Bayesian network is its structure, a directed acyclic graph (DAG), that expresses conditional independencies between variables.

Typically, the structure is learned from data. The problem is NP-hard, so exact algorithms do not scale up, and one often resorts to heuristics that give no quality guarantees.

A recent paper presented a moderately exponential time approximation algorithm that can be used to trade between running time and quality of the approximation. However, the paper is fully theoretical and we do not know whether the proposed algorithm is useful in practice.

Task: Implement the algorithm and do experiments to assess its practical performance.

Advisor: Pekka Parviainen

Learning and inference with large Bayesian networks

Most learning and inference tasks with Bayesian networks are NP-hard. Therefore, one often resorts to using different heuristics that do not give any quality guarantees.

Task: Evaluate the quality of large-scale learning or inference algorithms empirically.

Sum-product networks

Traditionally, probabilistic graphical models use a graph structure to represent dependencies and independencies between random variables. Sum-product networks are a relatively new type of graphical model in which the graph structure models computations rather than relationships between variables. The benefit of this representation is that inference (computing conditional probabilities) can be done in linear time with respect to the size of the network.

Potential thesis topics in this area: a) Compare inference speed with sum-product networks and Bayesian networks. Characterize situations when one model is better than the other. b) Learning the sum-product networks is done using heuristic algorithms. What is the effect of approximation in practice?
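
To see why inference is linear in the network size, consider this minimal sketch of bottom-up evaluation of a sum-product network; the node representation and the tiny example network are illustrative assumptions.

```python
# Each internal node is a sum (weighted mixture) or a product of its
# children; leaves hold distributions over single variables.

def evaluate(node, x):
    if node["type"] == "leaf":
        return node["dist"][x[node["var"]]]
    values = [evaluate(child, x) for child in node["children"]]
    if node["type"] == "product":
        result = 1.0
        for v in values:
            result *= v
        return result
    return sum(w * v for w, v in zip(node["weights"], values))  # sum node

leaf0 = {"type": "leaf", "var": 0, "dist": {0: 0.3, 1: 0.7}}
leaf1 = {"type": "leaf", "var": 1, "dist": {0: 0.6, 1: 0.4}}
prod = {"type": "product", "children": [leaf0, leaf1]}
spn = {"type": "sum", "weights": [1.0], "children": [prod]}
print(evaluate(spn, {0: 1, 1: 0}))      # -> 0.42, one pass over the network
```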

Bayesian Bayesian networks

The naming of Bayesian networks is somewhat misleading because there is nothing Bayesian in them per se; a Bayesian network is just a representation of a joint probability distribution. One can, of course, use a Bayesian network while doing Bayesian inference. One can also learn Bayesian networks in a Bayesian way: that is, instead of finding an optimal network, one computes the posterior distribution over networks.

Task: Develop algorithms for Bayesian learning of Bayesian networks (e.g., MCMC, variational inference, EM)

Large-scale (probabilistic) matrix factorization

The idea behind matrix factorization is to represent a large data matrix as a product of two or more smaller matrices. Such factorizations are often used in, for example, dimensionality reduction and recommendation systems. Probabilistic matrix factorization methods can be used to quantify uncertainty in recommendations. However, large-scale (probabilistic) matrix factorization is computationally challenging.

Potential thesis topics in this area: a) Develop scalable methods for large-scale matrix factorization (non-probabilistic or probabilistic), b) Develop probabilistic methods for implicit feedback (e.g., a recommendation engine where there are no ratings, only knowledge of whether a customer has bought an item).
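
For orientation, here is a minimal non-probabilistic matrix-factorization sketch trained by stochastic gradient descent on the observed entries only; the matrix, rank, and learning rate are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5., 3., 0.], [4., 0., 1.], [0., 1., 5.]])   # 0 = missing
k, lr, reg = 2, 0.05, 0.01
U = rng.normal(scale=0.1, size=(R.shape[0], k))            # row factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))            # column factors

for _ in range(500):
    for i, j in zip(*np.nonzero(R)):    # loop over observed entries only
        err = R[i, j] - U[i] @ V[j]
        U[i] += lr * (err * V[j] - reg * U[i])
        V[j] += lr * (err * U[i] - reg * V[j])

print(np.round(U @ V.T, 1))             # low-rank reconstruction of R
```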

Bayesian deep learning

Standard deep neural networks do not quantify uncertainty in predictions. On the other hand, Bayesian methods provide a principled way to handle uncertainty. Combining these approaches leads to Bayesian neural networks. The challenge is that Bayesian neural networks can be cumbersome to use and difficult to learn.

The task is to analyze Bayesian neural networks and different inference algorithms in some simple setting.

Deep learning for combinatorial problems

Deep learning is usually applied in regression or classification problems. However, there has been some recent work on using deep learning to develop heuristics for combinatorial optimization problems; see, e.g., [1] and [2].

Task: Choose a combinatorial problem (or several related problems) and develop deep learning methods to solve them.

References: [1] Vinyals, Fortunato and Jaitly: Pointer networks. NIPS 2015. [2] Dai, Khalil, Zhang, Dilkina and Song: Learning Combinatorial Optimization Algorithms over Graphs. NIPS 2017.

Advisors: Pekka Parviainen, Ahmad Hemmati

Estimating the number of modes of an unknown function

Mode seeking considers estimating the number of local maxima of a function f. Sometimes one can find modes by, e.g., looking for points where the derivative of the function is zero. However, often the function is unknown and we only have access to some (possibly noisy) values of the function.

In topological data analysis, we can analyze topological structures using persistent homology. For 1-dimensional signals, this can translate into looking at the birth/death persistence diagram, i.e. the birth and death of connected topological components as we expand the space around each point where we have observed our function. These observations turn out to be closely related to the modes (local maxima) of the function. A recent paper [1] proposed an efficient method for mode seeking.

In this project, the task is to extend the ideas from [1] to get a probabilistic estimate on the number of modes. To this end, one has to use probabilistic methods such as Gaussian processes.

[1] U. Bauer, A. Munk, H. Sieling, and M. Wardetzky. Persistence barcodes versus Kolmogorov signatures: Detecting modes of one-dimensional signals. Foundations of Computational Mathematics 17:1-33, 2017.

Advisors:  Pekka Parviainen ,  Nello Blaser

Causal Abstraction Learning

We naturally make sense of the world around us by working out causal relationships between objects and by representing these objects in our minds with different degrees of approximation and detail. Both processes are essential to our understanding of reality, and likely fundamental for developing artificial intelligence. The first process may be expressed using the formalism of structural causal models, while the second can be grounded in the theory of causal abstraction [1].

This project will consider the problem of learning an abstraction between two given structural causal models. The primary goal will be the development of efficient algorithms able to learn a meaningful abstraction between the given causal models.

[1] Rubenstein, Paul K., et al. "Causal consistency of structural equation models." arXiv preprint arXiv:1707.00819 (2017).

Advisor: Fabio Massimo Zennaro

Causal Bandits

"Multi-armed bandit" is an informal name for slot machines, and the formal name of a large class of problems where an agent has to choose an action among a range of possibilities without knowing the ensuing rewards. Multi-armed bandit problems are one of the most essential reinforcement learning problems where an agent is directly faced with an exploitation-exploration trade-off.       This project will consider a class of multi-armed bandits where an agent, upon taking an action, interacts with a causal system [1]. The primary goal will be the development of learning strategies that takes advantage of the underlying causal system in order to learn optimal policies in a shortest amount of time.      [1] Lattimore, Finnian, Tor Lattimore, and Mark D. Reid. "Causal bandits: Learning good interventions via causal inference." Advances in neural information processing systems 29 (2016).

Causal Modelling for Battery Manufacturing

Lithium-ion batteries are poised to be one of the most important sources of energy in the near future. Yet the process of manufacturing these batteries is very hard to model and control. Optimizing the different phases of production to maximize the lifetime of the batteries is a non-trivial challenge, since physical models are limited in scope and collecting experimental data is extremely expensive and time-consuming [1].

This project will consider the problem of aggregating and analyzing data regarding a few stages in the process of battery manufacturing. The primary goal will be the development of algorithms for transporting and integrating data collected in different contexts, as well as the use of explainable algorithms to interpret them.

[1] Niri, Mona Faraji, et al. "Quantifying key factors for optimised manufacturing of Li-ion battery anode and cathode via artificial intelligence." Energy and AI 7 (2022): 100129.

Advisor: Fabio Massimo Zennaro ,  Mona Faraji Niri

Reinforcement Learning for Computer Security

The field of computer security presents a wide variety of challenging problems for artificial intelligence and autonomous agents. Guaranteeing the security of a system against attacks and penetrations by malicious hackers has always been a central concern of this field, and machine learning could now offer a substantial contribution. Security capture-the-flag simulations are particularly well-suited as a testbed for the application and development of reinforcement learning algorithms [1].

This project will consider the use of reinforcement learning for the preventive purpose of testing systems and discovering vulnerabilities before they can be exploited. The primary goal will be the modelling of capture-the-flag challenges of interest and the development of reinforcement learning algorithms that can solve them.

[1] Erdodi, Laszlo, and Fabio Massimo Zennaro. "The Agent Web Model--Modelling web hacking for reinforcement learning." arXiv preprint arXiv:2009.11274 (2020).

Advisor: Fabio Massimo Zennaro ,  Laszlo Tibor Erdodi

Approaches to AI Safety

The world and the Internet are more and more populated by artificial autonomous agents carrying out tasks on our behalf. Many of these agents are provided with an objective and learn their behaviour by trying to achieve that objective as best they can. However, this approach cannot guarantee that an agent, while learning its behaviour, will not undertake actions with unforeseen and undesirable effects. Research in AI safety tries to design autonomous agents that will behave in a predictable and safe way [1].

This project will consider specific problems and novel solutions in the domain of AI safety and reinforcement learning. The primary goal will be the development of innovative algorithms and their implementation within established frameworks.

[1] Amodei, Dario, et al. "Concrete problems in AI safety." arXiv preprint arXiv:1606.06565 (2016).

Reinforcement Learning for Super-modelling

Super-modelling [1] is a technique designed for combining complex dynamical models: pre-trained models are aggregated, with messages and information being exchanged in order to synchronize the behavior of the different models and produce more accurate and reliable predictions. Super-models are used, for instance, in weather or climate science, where pre-existing models are ensembled together and their states dynamically aggregated to generate more realistic simulations.

This project will consider how reinforcement learning algorithms may be used to solve the coordination problem among the individual models forming a super-model. The primary goal will be the formulation of the super-modelling problem within the reinforcement learning framework and the study of custom RL algorithms to improve the overall performance of super-models.

[1] Schevenhoven, Francine, et al. "Supermodeling: improving predictions with an ensemble of interacting models." Bulletin of the American Meteorological Society 104.9 (2023): E1670-E1686.

Advisor: Fabio Massimo Zennaro ,  Francine Janneke Schevenhoven

Multilevel Causal Discovery

Modelling causal relationships between variables of interest is a crucial step in understanding and controlling a system. A common approach is to represent such relations using graphs with directed arrows discriminating causes from effects.

While causal graphs are often built relying on expert knowledge, a more interesting challenge is to learn them from data. In particular, we want to consider the case where data might have been collected at multiple levels, for instance with sensors of different resolutions. In this project we want to explore how such heterogeneous data can help the process of inferring causal structures.

[1] Anand, Tara V., et al. "Effect identification in cluster causal diagrams." Proceedings of the 37th AAAI Conference on Artificial Intelligence. Vol. 82. 2023.

Advisor: Fabio Massimo Zennaro ,  Pekka Parviainen

Manifolds of Causal Models

Modelling causal relationships is fundamental in order to understand real-world systems. A common formalism is offered by structural causal models (SCMs), which represent these relationships graphically. However, SCMs are complex mathematical objects entailing collections of different probability distributions.

In this project we want to explore a differential geometric perspective on structural causal models [1]. We will model an SCM and the probability distributions it generates in terms of manifolds, and we will study how this modelling encodes causal properties of interest and how relevant quantities may be computed in this framework.

[1] Dominguez-Olmedo, Ricardo, et al. "On data manifolds entailed by structural causal models." International Conference on Machine Learning. PMLR, 2023.

Advisor: Fabio Massimo Zennaro ,  Nello Blaser

Topological Data Analysis on Simulations

Complex systems and dynamics may be hard to formalize in a closed form, and they can often be better studied through simulations. Social systems, for instance, may be reproduced by instantiating simple agents whose interactions generate complex and emergent dynamics. Still, analyzing the behaviours arising from these interactions is not trivial.

In this project we will consider the use of topological data analysis for categorizing and understanding the behaviour of agents in agent-based models [1]. We will analyze the insights and the limitations of existing algorithms, as well as consider what dynamical information may be glimpsed through such an analysis.

[1] Swarup, Samarth, and Reza Rezazadegan. "Constructing an Agent Taxonomy from a Simulation Through Topological Data Analysis." Multi-Agent-Based Simulation XX: 20th International Workshop, MABS 2019, Montreal, QC, Canada, May 13, 2019, Revised Selected Papers 20. Springer International Publishing, 2020.

Abstraction for Epistemic Logic

Weighted Kripke models constitute a powerful formalism for expressing the evolving knowledge of an agent: they allow one to express known facts and beliefs, and to recursively model the knowledge of one agent about another. Moreover, such relations of knowledge can be given a graphical expression using suitable diagrams on which to perform reasoning. Unfortunately, such graphs can quickly become very large and inefficient to process.

This project considers the reduction of epistemic logic graphs using ideas from causal abstraction [1]. The primary goal will be the development of ML models that can learn to output small epistemic logic graphs that still satisfy logical and consistency constraints.

[1] Zennaro, Fabio Massimo, et al. "Jointly learning consistent causal abstractions over multiple interventional distributions." Conference on Causal Learning and Reasoning. PMLR, 2023

Advisor: Fabio Massimo Zennaro ,  Rustam Galimullin

Optimal Transport for Public Transportation

Modelling public transportation across cities is critical in order to improve viability, provide reliable services and increase reliance on greener forms of mass transport. Yet cities and transportation networks are complex systems, and modelling often has to rely on incomplete and uncertain data.

This project will start from considering a concrete challenge in modelling commuter flows across the city of Bergen. In particular, it will consider the application of the mathematical framework of optimal transport [1] to recover statistical patterns in the usage of the main transportation lines across different periods.

[1] Peyré, Gabriel, and Marco Cuturi. "Computational optimal transport: With applications to data science." Foundations and Trends in Machine Learning 11.5-6 (2019): 355-607.
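
For intuition, entropy-regularized optimal transport between two histograms can be computed with a few Sinkhorn iterations; the marginals and cost matrix below are toy stand-ins for, say, passenger distributions across stops.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=200):
    K = np.exp(-C / eps)                 # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(iters):               # alternate the two scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # the resulting transport plan

a = np.array([0.5, 0.5])                 # source distribution
b = np.array([0.25, 0.75])               # target distribution
C = np.array([[0.0, 1.0], [1.0, 0.0]])   # cost of moving mass between bins
print(sinkhorn(a, b, C).round(3))
```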

Finalistic Models

The behaviour of an agent may be explained either in causal terms (what has caused a certain behaviour) or in finalistic terms (what aim justifies a certain behaviour). While causal reasoning is well captured by different mathematical formalisms (e.g., structural causal models), finalistic reasoning is still an object of research.

In this project we want to explore how a recently-proposed framework for finalistic reasoning [1] may be used to model intentions and counterfactuals in a causal bandit setting, or how it could be used to enhance inverse reinforcement learning.

[1] Compagno, Dario. "Final models: A finalistic interpretation of statistical correlation." arXiv preprint arXiv:2310.02272 (2023).

Advisor: Fabio Massimo Zennaro , Dario Compagno

Automatic hyperparameter selection for isomap

Isomap is a non-linear dimensionality reduction method with two free hyperparameters (number of nearest neighbors and neighborhood radius). Different hyperparameters result in dramatically different embeddings. Previous methods for selecting hyperparameters focused on choosing one optimal hyperparameter. In this project, you will explore the use of persistent homology to find parameter ranges that result in stable embeddings. The project has theoretical and computational aspects.
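
A minimal sketch of the phenomenon, using scikit-learn's Isomap on a synthetic manifold and sweeping the n_neighbors hyperparameter; the dataset and the crude embedding summary are only illustrative.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=500, random_state=0)

for k in (5, 10, 30):                    # different neighborhood sizes
    emb = Isomap(n_neighbors=k, n_components=2).fit_transform(X)
    print(k, emb.std(axis=0))            # embeddings change with k
```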

Advisor: Nello Blaser

Topological Anscombe's quartet

This topic is based on the classical Anscombe's quartet and families of point sets with identical 1D persistence ( https://arxiv.org/abs/2202.00577 ). The goal is to generate more interesting datasets using the simulated annealing methods presented in ( http://library.usc.edu.ph/ACM/CHI%202017/1proc/p1290.pdf ). This project is mostly computational.

Persistent homology vectorization with cycle location

There are many methods of vectorizing persistence diagrams, such as persistence landscapes, persistence images, PersLay and statistical summaries. Recently, we have designed algorithms that can, in some cases, efficiently detect the location of persistence cycles. In this project, you will vectorize not just the persistence diagram, but also additional information such as the location of these cycles. This project is mostly computational with some theoretical aspects.

Divisive covers

Divisive covers are a divisive technique for generating filtered simplicial complexes. They originally used a naive way of dividing data into a cover. In this project, you will explore different methods of dividing space, based on principal component analysis, support vector machines and k-means clustering. In addition, you will explore methods of using divisive covers for classification. This project will be mostly computational.

Learning Acquisition Functions for Cost-aware Bayesian Optimization

This is a follow-up project of an earlier Master thesis that developed a novel method for learning Acquisition Functions in Bayesian Optimization through the use of Reinforcement Learning. The goal of this project is to further generalize this method (more general input, learned cost-functions) and apply it to hyperparameter optimization for neural networks.

Advisors: Nello Blaser , Audun Ljone Henriksen

Stable updates

This is a follow-up project of an earlier Master thesis that introduced and studied empirical stability in the context of tree-based models. The goal of this project is to develop stable update methods for deep learning models. You will design several stable methods and empirically compare them (in terms of loss and stability) with a baseline and with one another.

Advisors:  Morten Blørstad , Nello Blaser

Multimodality in Bayesian neural network ensembles

One method to assess uncertainty in neural network predictions is to use dropout or noise generators at prediction time and to run every prediction many times. This leads to a distribution of predictions. Informatively summarizing such probability distributions is a non-trivial task, and the commonly used means and standard deviations result in the loss of crucial information, especially in the case of multimodal distributions with distinct likely outcomes. In this project, you will analyze such multimodal distributions with mixture models and develop ways to exploit such multimodality to improve training. This project can have theoretical, computational and applied aspects. A minimal sketch of the prediction pipeline follows.
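
A minimal sketch, assuming a PyTorch model with dropout layers, of generating such a prediction distribution and summarizing it with a two-component mixture instead of a single mean and standard deviation:

```python
# Monte Carlo dropout: keep dropout active at prediction time, sample many
# predictions, and summarize the resulting distribution with a mixture model.
import numpy as np
import torch
from sklearn.mixture import GaussianMixture

model = torch.nn.Sequential(          # toy regressor with dropout
    torch.nn.Linear(4, 64), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.5), torch.nn.Linear(64, 1),
)
x = torch.randn(1, 4)

model.train()                         # .train() keeps dropout stochastic
with torch.no_grad():
    preds = np.array([model(x).item() for _ in range(500)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(preds.reshape(-1, 1))
print("component means:", gmm.means_.ravel(), "weights:", gmm.weights_)
```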

Wet area segmentation for rivers

NORCE LFI is working on digitizing wetted areas in rivers. You will apply different machine learning techniques for distinguishing water bodies (rivers) from land based on aerial drone (RGB) pictures. This is important for water management and for assessing the effects of hydropower on river ecosystems (residual flow, stranding of fish, and spawning areas). We have a database of approximately 100 rivers (aerial pictures created from a total of ca. 120,000 single pictures with Structure from Motion; the single pictures are available as well), and several of these rivers were flown at 2-4 different discharges, in different seasons and under different weather conditions. For ca. 50% of the pictures the wetted area is digitized for training (GIS shapefile), and most single pictures (>90%) cover both water surface and land. Possible challenges include shading, reflectance from the water surface, different water/ground colours and wet surfaces on land. This is an applied topic, where you will try many different machine learning techniques to find the best solution for the mapping tasks of NORCE LFI.

Advisors: Nello Blaser , Sebastian Franz Stranzl

Optimizing Jet Reconstruction with Quantum-Based Clustering Techniques

QCD jets are collimated sprays of energy and particles frequently observed at collider experiments, signaling the occurrence of high-energy processes. These jets are pivotal for understanding quantum chromodynamics at high energies and for exploring physics beyond the Standard Model. The definition of a jet typically arises from an agreement between experimentalists and theorists, formalized in jet algorithms that help make sense of the large number of particles produced in collisions.

This project focuses on jet reconstruction using data-driven clustering techniques. Specifically, we aim to apply fast clustering algorithms, optimized through quantum methods, to identify the optimal distribution of jets on an event-by-event basis. This approach allows us to refine jet definitions and enhance the accuracy of jet reconstruction. Key objectives include:

  • Introducing a purely data-driven clustering process using standard techniques.
  • Optimizing the clustering process using quantum-inspired techniques.
  • Benchmarking the performance of these algorithms against existing frameworks and comparing the extracted jet populations.

By focusing on clustering methods and quantum optimization, this project aims to provide a novel perspective on jet reconstruction, improving the precision and reliability of high-energy physics analyses. A toy clustering baseline is sketched below.
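
A toy sketch (not the project's quantum-optimized method) of the purely data-driven baseline in the first objective: clustering particles in the (eta, phi) plane with k-means, weighted by transverse momentum. All data here are synthetic placeholders:

```python
# Baseline data-driven "jet" clustering on toy particles: k-means in the
# (eta, phi) plane with pt as sample weight. Quantum-inspired optimization
# and standard jet algorithms (e.g. anti-kt) would refine and benchmark this.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
eta_phi = rng.normal(size=(300, 2))          # toy particle coordinates
pt = rng.exponential(scale=5.0, size=300)    # toy transverse momenta

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    eta_phi, sample_weight=pt)
for j in range(4):
    print(f"jet {j}: {np.sum(labels == j)} particles, "
          f"total pt {pt[labels == j].sum():.1f}")
```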

Advisors: Nello Blaser , Konrad Tywoniuk

Learning a hierarchical metric

Often, labels have defined relationships to each other, for instance in a hierarchical taxonomy. For example, ImageNet labels are derived from the WordNet graph, and biological species are taxonomically related and can have similarities depending on life stage, sex, or other properties.

ArcFace is an alternative loss function that aims for an embedding that is more generally useful than one trained with a plain softmax. It is commonly used in metric learning and few-shot learning settings.

Here, we will develop a metric learning method that learns from data with hierarchical labels. Using multiple ArcFace heads, we will simultaneously learn to place representations so as to optimize the leaf label as well as the intermediate labels on the path from leaf to root of the label tree. Using taxonomically classified plankton image data, we will measure performance as a function of the ArcFace parameters (sharpness/temperature and margins, class-wise or level-wise), and compare the results to existing methods. A sketch of a single ArcFace head is given below.
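
A minimal PyTorch sketch of one ArcFace head (Deng et al., 2019); the hierarchical method would attach one such head per level of the label tree, with level-specific scale s and margin m:

```python
# One ArcFace head: cosine logits with an additive angular margin on the
# true class. The hierarchical method would use several heads (leaf to root).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    def __init__(self, emb_dim, n_classes, s=30.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, emb_dim))
        self.s, self.m = s, m                    # sharpness (scale) and margin

    def forward(self, embeddings, labels):
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target_cos = torch.cos(theta + self.m)   # penalize the true class angle
        onehot = F.one_hot(labels, cos.size(1)).float()
        logits = self.s * (onehot * target_cos + (1 - onehot) * cos)
        return F.cross_entropy(logits, labels)

loss = ArcFaceHead(128, 10)(torch.randn(4, 128), torch.randint(0, 10, (4,)))
```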

Advisor: Ketil Malde ( [email protected] )

Self-supervised object detection in video

One challenge with learning object detection is that in many scenes that stretch off into the distance, annotating small, far-off, or blurred objects is difficult. It is therefore desirable to learn from incompletely annotated scenes, and one-shot object detectors may suffer from incompletely annotated training data.

To address this, we will use a region-proposal algorithm (e.g. SelectiveSearch) to extract potential crops from each frame. Classification will be based on two approaches: a) training based on annotated fish vs. random similarly-sized crops without annotations, and b) using a self-supervised method to build a representation for crops, and building a classifier for the extracted regions. The method will be evaluated against one-shot detectors and other training regimes.

If successful, the method will be applied to fish detection and tracking in videos from baited and unbaited underwater traps, and used to estimate abundance of various fish species.

See also: Benettino (2016): https://link.springer.com/chapter/10.1007/978-3-319-48881-3_56

Representation learning for object detection

While traditional classifiers work well with data that is labeled with disjoint classes and reasonably balanced class abundances, reality is often less clean. An alternative is to learn a vector space embedding that reflects semantic relationships between objects, and to derive classes from this representation. This is especially useful for few-shot classification (i.e., very few examples in the training data).

The task here is to extend a modern object detector (e.g. YOLOv8) to output an embedding of the identified object. Instead of a softmax classifier, we can learn the embedding in a supervised manner (using annotations on frames) by attaching an ArcFace or other supervised metric learning head. Alternatively, the representation can be learned from tracked detections over time, using e.g. a contrastive loss function to keep the representation for an object (approximately) constant over time. The performance of the resulting object detector will be measured on underwater videos, targeting species detection and/or individual recognition (re-ID).

Time-domain object detection

Object detectors for video are normally trained on still frames, but it is evident (from human experience) that using time-domain information is more effective. That is, it can be hard to identify far-off or occluded objects in still images, but movement over time often reveals them.

Here we will extend a state-of-the-art object detector (e.g. YOLOv8) with time-domain data. Instead of using a single frame as input, the model will be modified to take a set of frames surrounding the annotated frame as input. Performance will be compared to single-frame detection.

Large-scale visualization of acoustic data

The Institute of Marine Research has decades of acoustic data collected in various surveys. These data are in the process of being converted to data formats that can be processed and analyzed more easily using packages like Xarray and Dask.

The objective is to make these data more accessible to regular users by providing a visual front end. The user should be able to quickly zoom in and out, perform selections, export subsets, apply various filters and classifiers, and overlay annotations and other relevant auxiliary data. A minimal lazy-loading sketch is given below.
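
A minimal sketch, assuming a hypothetical Zarr store ("survey.zarr") and variable name ("sv"), of how such a front end could open the data lazily with Xarray/Dask and render a coarse overview for zooming:

```python
# Lazily open a (hypothetical) Zarr store of echosounder data with xarray and
# Dask, and build a coarsened overview that a zoomable front end could render.
# "survey.zarr" and the dimension/variable names are illustrative assumptions.
import xarray as xr

ds = xr.open_zarr("survey.zarr")        # lazy, Dask-backed arrays
sv = ds["sv"]                           # e.g. volume backscattering strength

overview = sv.coarsen(ping_time=100, range=10, boundary="trim").mean()
overview.plot()                         # computed lazily, only when rendered
```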

Learning acoustic target classification from simulation

Broadband echosounders emit a complex signal that spans a large frequency band. Different targets will reflect, absorb, and generate resonance at different amplitudes and frequencies, and it is therefore possible to classify targets at much higher resolution and accuracy than before. Due to the complexity of the received signals, deriving effective profiles that can be used to identify targets is difficult.

Here we will use simulated frequency spectra from geometric objects with various shapes, orientations, and other properties. We will train ML models to estimate (recover) the geometric and material properties of objects based on these spectra. The resulting model will be applied to real broadband data and compared to traditional classification methods.

Online learning in real-time systems

Build a model for the drilling process by using the virtual simulator OpenLab ( https://openlab.app/ ) for real-time data generation and online learning techniques. The student will also do a short survey of existing online learning techniques and learn how to cope with errors and delays in the data. A minimal online-learning loop is sketched below.
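
A minimal sketch, assuming the `river` online-learning library and a toy placeholder stream standing in for OpenLab readings, of the predict-then-learn loop such a model would run:

```python
# Online learning loop: predict first, then learn from each new observation.
# The toy stream stands in for real-time data from the OpenLab simulator.
from river import linear_model, metrics, preprocessing

stream = [({"wob": float(i), "rpm": 2.0 * i}, 3.0 * i + 1.0) for i in range(100)]

model = preprocessing.StandardScaler() | linear_model.LinearRegression()
mae = metrics.MAE()

for x, y in stream:
    y_pred = model.predict_one(x)   # predict before the label is revealed
    mae.update(y, y_pred)
    model.learn_one(x, y)           # then update the model incrementally

print(mae)
```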

Advisor: Rodica Mihai

Building a finite state automaton for the drilling process by using queries and counterexamples

Datasets will be generated by using the virtual simulator OpenLab ( https://openlab.app/ ). The student will study the datasets and decide upon a good setting to extract a finite state automaton for the drilling process. The student will also do a short survey of existing techniques for extracting finite state automata from process data. A relevant starting point is the approach of Weiss et al. (arxiv.org), which uses exact learning and abstraction to extract a deterministic finite automaton describing the state dynamics of a trained RNN, with Angluin's L* algorithm as the learner and the trained RNN as the oracle; the technique efficiently extracts accurate automata even when the state vectors are large and require fine differentiation.

Machine Learning for Drug Repositioning in Parkinson’s Disease

Background : Parkinson’s Disease (PD) is a major neurological condition with a complex etiology that tends to affect the elderly population. Understanding the risk factors associated with PD, including drug usage patterns across different demographics, can provide insights into its management and prevention. The Norwegian Prescribed Drug Registry (NorPD) provides comprehensive data on prescriptions dispensed from 2004, making it an excellent resource for such an analysis.

Objective : This project seeks to investigate how well machine learning techniques can predict PD risk, using the individual histories of drug usage along with demographic variables like gender and age.

Methodology :

  • Exploratory Data Analysis and Data Preprocessing: Although the dataset is clean and structured, specific preprocessing steps will be required to tailor the data for the chosen methods.
  • Predictive Modeling: Apply standard machine learning models, such as random forests, that can handle large, imbalanced, and sparse datasets, and find the best single model or ensemble of models for robust prediction (see the baseline sketch after this list). The predictive model will be employed to discern patterns in drug usage and demographic factors that correlate with PD risk.
  • Feature Analysis: Conduct a detailed analysis to understand the importance of different features, such as specific drugs, gender, and age, in predicting PD risk and explore complex dependencies between features.
  • Evaluation Metrics: Explore different metrics, such as the F1-score and AUC-ROC, to evaluate the performance of the predictive models.
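
A minimal baseline sketch on synthetic data (not the NorPD data, which stays on the SAFE server) combining a class-weighted random forest with the metrics named above:

```python
# Class-weighted random forest on a synthetic imbalanced dataset, evaluated
# with F1 and AUC-ROC, as a baseline for the PD-risk prediction task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=50,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print("F1:     ", f1_score(y_te, clf.predict(X_te)))
print("AUC-ROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```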

Expected Outcomes : The project aims to study and develop predictive models that can accurately identify individuals at increased risk of developing PD based on their prescription history and demographic data.

Ethical Considerations : Data privacy and confidentiality will be strictly maintained by conducting all analyses on the SAFE server, following ethical guidelines for handling sensitive health data. The approval from regional ethics committee (REK) is already in place, as the project will be part of DRONE ( https://www.uib.no/en/epistat/139849/drone-drug-repurposing-neurological-diseases ).

Project Benefits :

  • The student practices working with a large and rich set of real data, together with experts from the epidemiology group at the Faculty of Medicine.
  • Experience applying different ML methods to real data.
  • The possibility of publication if the results are promising.

Advisors :  Asieh Abolpour Mofrad , Samaneh Abolpour Mofrad , Julia Romanowska , Jannicke Igland

Exploring Graph Neural Networks for Analyzing Prescription Data to Predict Parkinson’s Disease Risk

Background : Parkinson’s Disease (PD) significantly impacts the elderly, necessitating advanced computational approaches to better predict and understand its risk factors. The Norwegian Prescribed Drug Registry (NorPD), which provides comprehensive data on prescriptions dispensed since 2004, presents an excellent opportunity to employ graph neural networks (GNNs), especially for analyzing the temporal dynamics of prescription data.

Objective : The project aims to investigate the effectiveness of GNNs in analyzing time-dependent prescription data, focusing on various graph structures to understand how drug interactions and patient demographics influence PD risk over time.

  • Exploratory Data Analysis and Data Preprocessing: Prepare the prescription data for GNN analysis by investigating different structures to represent the data as a graph. This is a challenging step; we must investigate the best graph structure given existing GNN and temporal GNN methods. For instance, one might assign a graph to each individual and consider classification approaches, or define one graph for all participants and investigate GNN methods for clustering or for predicting nodes and edges.

Incorporate demographic features, such as age, gender, and education, into the graph. Additionally, explore how to integrate time-dependent features to reflect the dynamic nature of the prescription data effectively.

  • Graph Neural Network Implementation: Apply graph neural network models, such as Graph Convolutional Networks (GCNs) or Graph Attention Networks (GATs), that can process temporal graph data, based on the structure of our defined graph (see the sketch after this list).
  • Feature Analysis: Perform an in-depth analysis of the learned embeddings and node features to identify significant patterns and influential factors related to increased or decreased PD risk.
  • Evaluation Metrics: Explore different metrics to evaluate the performance of the predictive models.
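
A minimal sketch, assuming PyTorch Geometric and random placeholder data, of a two-layer GCN for node-level risk classification; the real graph construction from NorPD is the open design question described above:

```python
# Two-layer GCN for node classification on a toy graph; nodes could represent
# patients and edges shared prescriptions (one of several candidate designs).
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

x = torch.randn(100, 16)                      # toy node features
edge_index = torch.randint(0, 100, (2, 400))  # toy edges
y = torch.randint(0, 2, (100,))               # toy node labels (PD risk)
data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(16, 32)
        self.conv2 = GCNConv(32, 2)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(50):
    opt.zero_grad()
    loss = F.cross_entropy(model(data), data.y)
    loss.backward()
    opt.step()
```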

Expected Outcomes :

The project aims to study how graph neural networks (GNNs) can be utilized to analyze complex, time-dependent prescription data.

Ethical Considerations : All analyses will adhere to strict privacy protocols by conducting research on the SAFE server, ensuring that all individual data remains confidential and secure in compliance with ethical healthcare data management practices. The approval from the regional ethics committee (REK) is already in place, as the project will be part of DRONE ( https://www.uib.no/en/epistat/139849/drone-drug-repurposing-neurological-diseases ).

Project Benefits :

  • Get familiar with GNNs as advanced ML methods and apply them to real data.

Advisors :  Samaneh Abolpour Mofrad , Asieh Abolpour Mofrad , Julia Romanowska , Jannicke Igland

Scaling Laws for Language Models in Generative AI

Large Language Models (LLM) power today's most prominent language technologies in Generative AI like ChatGPT, which, in turn, are changing the way that people access information and solve tasks of many kinds.

A recent line of work on scaling laws for LLMs studies how well they perform as a function of factors such as the amount of training data, the model size, and the computational budget. (See, for example, Kaplan et al., "Scaling Laws for Neural Language Models", 2020.)

In this project, the task is to study scaling laws for different language models, with respect to one or more of these modeling factors. A minimal fitting sketch is given below.
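
A minimal sketch of the kind of analysis involved: fitting the power-law form L(N) = (N_c / N)^alpha from Kaplan et al. to hypothetical loss-versus-model-size measurements (all numbers below are illustrative placeholders):

```python
# Fit the power law L(N) = (N_c / N) ** alpha to (hypothetical) evaluation
# losses measured at different model sizes N (number of parameters).
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n, n_c, alpha):
    return (n_c / n) ** alpha

n_params = np.array([1e6, 1e7, 1e8, 1e9])
losses = np.array([5.0, 4.1, 3.4, 2.8])        # hypothetical measurements

(n_c, alpha), _ = curve_fit(scaling_law, n_params, losses, p0=(1e13, 0.07))
print(f"fitted N_c = {n_c:.3g}, alpha = {alpha:.3f}")
```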

Advisor: Dario Garigliotti

Applications of causal inference methods to omics data

Many hard problems in machine learning are directly linked to causality [1]. The graphical causal inference framework developed by Judea Pearl can be traced back to pioneering work by Sewall Wright on path analysis in genetics and has inspired research in artificial intelligence (AI) [1].

The Michoel group has developed the open-source tool Findr [2] which provides efficient implementations of mediation and instrumental variable methods for applications to large sets of omics data (genomics, transcriptomics, etc.). Findr works well on a recent data set for yeast [3].

We encourage students to explore promising connections between the fields of causal inference and machine learning. Feel free to contact us to discuss projects related to causal inference. Possible topics include: a) improving methods based on structural causal models, b) evaluating causal inference methods on data for model organisms, c) comparing methods based on causal models and neural network approaches.

References:

1. Schölkopf B, Causality for Machine Learning, arXiv (2019):  https://arxiv.org/abs/1911.10500

2. Wang L and Michoel T. Efficient and accurate causal inference with hidden confounders from genome-transcriptome variation data. PLoS Computational Biology 13:e1005703 (2017).  https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005703

3. Ludl A and Michoel T. Comparison between instrumental variable and mediation-based methods for reconstructing causal gene networks in yeast. arXiv:2010.07417  https://arxiv.org/abs/2010.07417

Advisors: Adriaan Ludl ,  Tom Michoel

Space-Time Linkage of Fish Distribution to Environmental Conditions

Conditions in the marine environment, such as temperature and currents, influence the spatial distribution and migration patterns of marine species. Hence, understanding the link between environmental factors and fish behavior is crucial for predicting, e.g., how fish populations may respond to climate change. Deriving this link is challenging because it requires the analysis of two types of datasets: (i) large environmental datasets (currents, temperature) that vary in space and time, and (ii) sparse and sporadic spatial observations of fish populations.

Project goal   

The primary goal of the project is to develop a methodology that helps predict how the spatial distributions of two fish stocks (capelin and mackerel) change in response to variability in the physical marine environment (ocean currents and temperature). The information can also be used to optimize data collection by minimizing the time spent on spatial sampling of the populations.

The project will focus on the use of machine learning and/or causal inference algorithms.  As a first step, we use synthetic (fish and environmental) data from analytic models that couple the two data sources.  Because the ‘truth’ is known, we can judge the efficiency and error margins of the methodologies. We then apply the methodologies to real world (empirical) observations.

Advisors:  Tom Michoel , Sam Subbey . 

Towards precision medicine for cancer patient stratification

On average, a drug or a treatment is effective in only about half of patients who take it. This means patients need to try several until they find one that is effective at the cost of side effects associated with every treatment. The ultimate goal of precision medicine is to provide a treatment best suited for every individual. Sequencing technologies have now made genomics data available in abundance to be used towards this goal.

In this project we will focus specifically on cancer. Most cancer patients receive a particular treatment based on the cancer type and stage, though different individuals react differently to the same treatment. It is now well established that genetic mutations cause cancer growth and spread and, importantly, these mutations differ between individual patients. The aim of this project is to use genomic data for better stratification of cancer patients, in order to predict the treatment most likely to work. Specifically, the project will use machine learning approaches to integrate genomic data and build a classifier for the stratification of cancer patients.

Advisor: Anagha Joshi

Unraveling gene regulation from single cell data

Multi-cellularity is achieved by precise control of gene expression during development and differentiation, and aberrations of this process lead to disease. A key regulatory process in gene regulation occurs at the transcriptional level, where epigenetic and transcriptional regulators control the spatial and temporal expression of the target genes in response to environmental, developmental, and physiological cues obtained from a signalling cascade. The rapid advances in sequencing technology have now made it feasible to study this process by characterizing the genome-wide patterns of diverse epigenetic and transcription factors, including at the single-cell level.

Single-cell RNA sequencing is highly important, particularly in cancer, as it allows exploration of heterogeneous tumor samples; this heterogeneity obstructs therapeutic targeting and leads to poor survival. Despite its huge clinical relevance and potential, the analysis of single-cell RNA-seq data is challenging. In this project, we will develop strategies to infer gene regulatory networks using network inference approaches (both supervised and unsupervised). These will be tested primarily on single-cell datasets in the context of cancer.

Developing a Stress Granule Classifier

To carry out the multitude of functions 'expected' from a human cell, the cell employs a strategy of division of labour, whereby sub-cellular organelles carry out distinct functions. Thus we traditionally understand organelles as distinct units defined both functionally and physically with a distinct shape and size range. More recently a new class of organelles have been discovered that are assembled and dissolved on demand and are composed of liquid droplets or 'granules'. Granules show many properties characteristic of liquids, such as flow and wetting, but they can also assume many shapes and indeed also fluctuate in shape. One such liquid organelle is a stress granule (SG). 

Stress granules are pro-survival organelles that assemble in response to cellular stress and are important in cancer and in neurodegenerative diseases like Alzheimer's. They are liquid or gel-like and can assume varying sizes and shapes depending on their cellular composition.

In a given experiment we are able to image the entire cell over a time series of 1000 frames; from which we extract a rough estimation of the size and shape of each granule. Our current method is susceptible to noise and a granule may be falsely rejected if the boundary is drawn poorly in a small majority of frames. Ideally, we would also like to identify potentially interesting features, such as voids, in the accepted granules.

We are interested in applying a machine learning approach to develop a descriptor for a 'classic' granule and furthermore classify them into different functional groups based on disease status of the cell. This method would be applied across thousands of granules imaged from control and disease cells. We are a multi-disciplinary group consisting of biologists, computational scientists and physicists. 

Advisors: Sushma Grellscheid , Carl Jones

Machine Learning based Hyperheuristic algorithm

Develop a machine-learning-based hyper-heuristic algorithm to solve a pickup and delivery problem. A hyper-heuristic is a heuristic that chooses heuristics automatically. Hyper-heuristics seek to automate the process of selecting, combining, generating or adapting several simpler heuristics to efficiently solve computational search problems [Handbook of Metaheuristics]. There may be multiple heuristics for solving a problem, each with its own strengths and weaknesses. In this project, we want to use machine-learning techniques to learn the strengths and weaknesses of each heuristic while using them in an iterative search for high-quality solutions, and then to use them intelligently for the rest of the search. As new information is gathered during the search, the hyper-heuristic algorithm automatically adjusts its choice of heuristics. A minimal selection scheme is sketched below.
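
A minimal sketch of one possible selection scheme (an epsilon-greedy scoring of low-level heuristics); the heuristics and the cost function are placeholders for the pickup-and-delivery setting:

```python
# Score-based hyper-heuristic: pick a low-level heuristic epsilon-greedily
# and update its score from the observed improvement in solution cost.
import random

def hyper_heuristic_search(initial, heuristics, cost, iters=1000, eps=0.2):
    scores = [1.0] * len(heuristics)
    best, best_cost = initial, cost(initial)
    for _ in range(iters):
        if random.random() < eps:
            i = random.randrange(len(heuristics))                     # explore
        else:
            i = max(range(len(heuristics)), key=scores.__getitem__)   # exploit
        candidate = heuristics[i](best)
        improvement = best_cost - cost(candidate)
        scores[i] = 0.9 * scores[i] + 0.1 * max(improvement, 0.0)
        if improvement > 0:
            best, best_cost = candidate, best_cost - improvement
    return best, best_cost

# Toy usage: minimize x**2 with two "move" heuristics.
best, c = hyper_heuristic_search(10.0, [lambda x: x - 1, lambda x: x / 2],
                                 cost=lambda x: x * x)
```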

Advisor: Ahmad Hemmati

Machine learning for solving satisfiability problems and applications in cryptanalysis

Advisor: Igor Semaev

Hybrid modeling approaches for well drilling with Sintef

Several topics are available.

"Flow models" are first-principles models simulating the flow, temperature and pressure in a well being drilled. Our project is exploring "hybrid approaches" where these models are combined with machine learning models that either learn from time series data from flow model runs or from real-world measurements during drilling. The goal is to better detect drilling problems such as hole cleaning, make more accurate predictions and correctly learn from and interpret real-word data.

The "surrogate model" refers to  a ML model which learns to mimic the flow model by learning from the model inputs and outputs. Use cases for surrogate models include model predictions where speed is favoured over accuracy and exploration of parameter space.

Surrogate models with active Learning

While it is possible to produce a nearly unlimited amount of training data by running the flow model, the surrogate model may still perform poorly if it lacks training data in the part of the parameter space it operates in, or if it "forgets" areas of the parameter space by being fed too much data from a narrow range of parameters.

The goal of this thesis is to build a surrogate model (with any architecture) for some restricted parameter range and to implement an active learning approach where the ML model requests more runs from the flow model in the parts of the parameter space where they are needed the most. The end result should be a surrogate model that is quick and performs acceptably well over the whole defined parameter range. A minimal active-learning loop is sketched below.
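
A minimal sketch, with a toy stand-in for the flow model, of an uncertainty-driven loop in which the surrogate requests new simulator runs where its ensemble members disagree most:

```python
# Active learning for a surrogate: query the (placeholder) flow model where
# the random-forest ensemble's per-tree predictions disagree the most.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def flow_model(x):                         # toy stand-in for the simulator
    return np.sin(x[:, 0]) + 0.1 * x[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 2))       # small initial design
y = flow_model(X)

for _ in range(10):
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    candidates = rng.uniform(-3, 3, size=(1000, 2))
    spread = np.stack([t.predict(candidates)
                       for t in surrogate.estimators_]).std(axis=0)
    pick = spread.argmax()                 # most uncertain candidate point
    X = np.vstack([X, candidates[pick]])
    y = np.append(y, flow_model(candidates[pick:pick + 1]))
```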

Surrogate models trained via adversarial learning

How best to train surrogate models from runs of the flow model is an open question. This master thesis would use an adversarial learning approach to build a surrogate model whose outputs become indistinguishable, to an "adversary" network, from the output of an actual flow model run.

GPU-based Surrogate models for parameter search

While CPU speed, in terms of the working frequency of single cores, largely stalled 20 years ago, multi-core CPUs and especially GPUs took off and delivered increases in computational power by parallelizing computations.

Modern machine learning, such as deep learning, takes advantage of this boom in computing power by running on GPUs.

The SINTEF flow models, in contrast, are software programs that run on a CPU and do not utilize multi-core functionality. The model runs advance time-step by time-step, and each time step relies on the results from the previous one. The flow models are therefore fundamentally sequential and not well suited to massive parallelization.

It is, however, of interest to run different model runs in parallel to explore parameter spaces. Use cases for this include model calibration, problem detection, and hypothesis generation and testing.

The task of this thesis is to implement an ML-based surrogate model in such a way that many surrogate model outputs can be produced at the same time using a single GPU (see the sketch below). This will likely entail some trade-off with model size and maybe some coding tricks.
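
A minimal PyTorch sketch of the core idea: one surrogate network evaluating thousands of parameter sets in a single batched GPU forward pass (the network and the sizes are illustrative placeholders):

```python
# Evaluate many surrogate parameter sets in one batched forward pass on the
# GPU; the sequential flow model would need one full run per parameter set.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
surrogate = torch.nn.Sequential(               # placeholder for a trained model
    torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1),
).to(device)

params = torch.rand(10_000, 8, device=device)  # 10,000 parameter sets at once
with torch.no_grad():
    outputs = surrogate(params)                # one parallel GPU evaluation
print(outputs.shape)
```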

Uncertainty estimates of hybrid predictions

When using predictions from an ML model trained on time series data, it is useful to know whether they are accurate and should be trusted. The student is challenged to develop hybrid approaches that incorporate estimates of uncertainty. Components could include reporting the variance of ML ensembles trained on a diversity of time series data, an implementation of conformal prediction, an analysis of training-data parameter ranges vs. the current input, etc. The output should be a "traffic light" signal roughly indicating the accuracy of the predictions. A conformal baseline is sketched below.
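
A minimal sketch of one possible component, split conformal prediction on toy data (ignoring the finite-sample correction), whose interval width could be thresholded into the traffic-light signal:

```python
# Split conformal prediction: calibrate a residual quantile on held-out data
# and report predictions with a +/- band whose width can be thresholded into
# a red/yellow/green trust signal.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 0] + rng.normal(scale=0.3, size=2000)
X_train, X_cal, y_train, y_cal = X[:1000], X[1000:], y[:1000], y[1000:]

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
residuals = np.abs(y_cal - model.predict(X_cal))
half_width = np.quantile(residuals, 0.9)   # ~90% coverage band

x_new = rng.normal(size=(1, 5))
pred = model.predict(x_new)[0]
print(f"prediction {pred:.2f} +/- {half_width:.2f}")
```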

Transfer learning approaches

We assume an ML model is to be used for time series prediction.

It is possible to train an ML model on a wide range of scenarios in the flow models, but we expect that, to perform well, the model also needs to see model runs representative of the type of well and drilling operation it will be used in. In this thesis the student implements a transfer learning approach, where the model is trained on general model runs and fine-tuned on a most representative data set.

(Bonus1: implementing one-shot learning, Bonus2: Using real-world data in the fine-tuning stage)

ML capable of reframing situations

When a human oversees an operation like well drilling, she has a mental model of the situation, and new data, such as pressure readings from the well, is interpreted in light of this model. This is referred to as "framing" and is the normal mode of work. However, when a problem occurs, it becomes harder to reconcile the data with the mental model. The human then goes into "reframing", building a new mental model that includes the ongoing problem. This can be seen as a process of hypothesis generation and testing.

A computer model, however, lacks reframing. A flow model will keep making predictions under the assumption of no problems, and a separate alarm system will use the deviation between the model predictions and reality to raise an alarm. This is in a sense how all alarm systems work, but it means that the human must discard the computer model as a tool at the very moment she is handling a crisis.

The student is given access to a flow model and a surrogate model which can learn from model runs both with and without hole-cleaning problems, and is challenged to develop a hybrid approach where the ML+flow model continuously performs hypothesis generation and testing and is able to "switch" into predicting a hole-cleaning problem and different remediations of it.

Advisor: Philippe Nivlet at SINTEF, together with an advisor from UiB

Explainable AI at Equinor

Within the project Machine Teaching for XAI (see  https://xai.w.uib.no ), a master thesis is offered in collaboration between UiB and Equinor.

Advisor: One of Pekka Parviainen/Jan Arne Telle/Emmanuel Arrighi + Bjarte Johansen from Equinor.

Explainable AI at Eviny

Within the project Machine Teaching for XAI (see  https://xai.w.uib.no ), a master thesis is offered in collaboration between UiB and Eviny.

Advisor: One of Pekka Parviainen/Jan Arne Telle/Emmanuel Arrighi + Kristian Flikka from Eviny.

If you want to suggest your own topic, please contact Pekka Parviainen ,  Fabio Massimo Zennaro or Nello Blaser .

Master Thesis

The thesis project is the ultimate step in the master’s programme, where you apply and extend your knowledge as you conduct research and formulate a DSAIT solution. You can find information about thesis topics and their connection with the DSAIT themes on an online platform, and you can organise a meeting with potential thesis supervisors. At least one scientific staff member from the Software Technology (ST) or Intelligent Systems (INSY) departments will supervise and guide you. For projects carried out in a company, the supervisory team is expanded with a daily co-supervisor from the company.

The thesis project process is embedded in the faculty graduation procedures, consisting of regulated project planning, feedback, and progress monitoring during the thesis project. At the start of the thesis project, you and your supervisor develop a project plan. Ten weeks into the project, the supervisor carries out the first-stage review, followed by greenlight review 14 weeks later. A formal thesis defence in front of the audience and a thesis committee concludes the thesis project six weeks later.

Credits: the thesis project is 45 EC.


Topics for Master Theses at the Chair for Artificial Intelligence

Smart City / Smart Mobility

  • Traffic Forecasting with Graph Attention Networks
  • Learning Traffic Simulation Parameters with Reinforcement Learning
  • Extending the Mannheim Mobility Model with Individual Bike Traffic

AI for Business Process Management

  • Probabilistic Online Conformance Checking with  Declarative Process Monitoring and Deep Neural Networks
  • Continual Learning for Predictive Process Monitoring with Event Streams
  • Explainable predictive process monitoring with Graph Neural Networks
  • New Learning Strategies for Predictive Process Monitoring: Addressing Imbalanced and Redundant Data Challenges
  • Predictive Process Monitoring with Mixed Effects Neural Networks
  • New Evaluation Approaches for Predictive Process Monitoring: Tackling Data Leakage and Concept Drift

Explainable and Fair Machine Learning

  • Extracting Causal Models from Module Handbooks for Explainable Student Success Prediction
  • Detecting Differences in Course Difficulty for Different Demographic Groups
  • Investigating Different Techniques to Improve Fairness for Tabular Data
  • Data-induced Bias in Social Simulations
  • Learning Causal Models from Tabular Data

Human Activity and Goal Recognition

  • Reinforcement Learning for Goal Recognition
  • Investigating the Difficulty of Goal Recognition Problems
  • Enhancing Audio-Based Activity Recognition through Autoencoder Encoded Representations
  • Activity Recognition from Audio Data in a Kitchen Scenario
  • Speaker Diarization and Identification in a Meeting Scenario

Machine Learning for Supply Chain Optimization

  • Time Series Analysis & Forecasting of Events (Sales, Demand, etc.)
  • Integrated vs. separated optimization: theory and practice
  • Leveraging deep learning to build a versatile end-to-end inventory management model
  • Reinforcement learning for the vehicle routing problem
  • Metaheuristics in SCM: Overview and benchmark study
  • Finetuning parametrized inventory management system

Anomaly Detection on Server Logs

  • Analyse real-life server logs stored in an existing OpenSearch library (Graylog)
  • Learn values describing the normal behavior of servers and detect anomalies in logged messages
  • Implement a simple alert system (existing systems like Icinga can be used)
  • Present the results in a (web) GUI

Further Topics

  • Creating eLearning Recommender Systems using NLP
  • Hyperparameter Optimization for Symbolic Knowledge Graph Completion
  • Applying Symbolic Knowledge Graph Completion to Inductive Link Prediction
  • Data Augmentation via Generative Adversarial Networks (GANs)
  • Autoencoders for Sparse, Irregularly Spaced Time Series Sequences


Artificial Intelligence Modeling of Materials’ Bulk Chemical and Physical Properties

Article outline: 1. Introduction; 2. Artificial Intelligence; Atomic Systems; 5.1. Atomic Systems; 5.2. Molecular Systems; Data Availability Statement; Conflicts of Interest.



No. | Compound Name | Mol. Weight (U)
1 | Dichloro-fluoro-methane | 102.92
2 | Dibromo-methane | 173.85
3 | Nitro-methane | 61.04
4 | Pentachloro-ethane | 202.3
5 | Chloro-ethylene | 62.5
6 | Ethanal | 44.05
7 | Chloro-ethane | 64.52
8 | Fluoro-ethane | 48.06
9 | Iodo-ethane | 155.97
10 | Acetyl amine | 59.07
11 | Dimethyl sulfoxide | 78.13
12 | Dimethyl-amine | 45.09
13 | Propyne | 40.07
14 | 2-chloro-propene | 76.53
15 | Propene | 42.08
16 | 2,2-dichloro-propane | 112.99
17 | 1-propanol | 60.11
18 | Trimethyl-amine | 59.11
19 | Furan | 68.08
20 | Thiophene | 84.14
21 | 1,2-butadiene | 54.09
22 | Butanal | 72.12
23 | Cyclopentene | 68.13
24 | Pyridine | 79.10
25 | Bromo-benzene | 157.02
26 | Nitro-benzene | 123.11
27 | Phenol | 94.11
28 | p-chloro-toluene | 126.59
29 | Toluene | 92.15
30 | o-xylene | 106.17
31 | Dibutyl-ether | 130.23
32 | Quinoline | 129.16
33 | Isoquinoline | 129.16
34 | Phenyl-benzene | 154.21
35 | Tribromo-methane | 252.75
36 | Iodo-methane | 141.94
37 | Ethanethiol | 62.13
38 | Propanone | 58.08
39 | Butane | 58.13
40 | Dipropyl-ether | 102.18
41 | Fluoro-methane | 34.03
42 | 1,1-dichloro-ethane | 98.96
43 | 1,1-difluoro-ethane | 66.05
44 | 2-propanol | 60.11
45 | 1-nitro-propane | 89.09
46 | 2-chloro-propane | 78.54
47 | Aniline | 93.13
48 | Butanal | 72.12
49 | m-dichloro-benzene | 147.01
50 | m-fluoro-toluene | 110.13
51 | Ethane | 30.07
52 | Propadiene | 40.07
53 | Propene | 42.08
54 | Acetylene | 26.04
55 | 2-chloro-ethanol | 80.52
56 | 1,3-cyclohexadiene | 80.14
57 | 1-Hexyne | 82.15
58 | 1,4-dichloro-butane | 127.03
59 | Ethanoic acid | 60.05
60 | 1,3-dichloro-propane | 112.99
61 | 2-chloro-2-methyl-propane | 92.57
62 | m-chloro-nitrobenzene | 157.56
63 | p-chloro-nitrobenzene | 157.56
64 | 1,3-cyclopentadiene | 66.10
65 | 1,3-butadiene | 54.09
66 | 4-chloro-phenol | 128.56
67 | 1,3-cyclohexadiene | 80.14
68 | Phenyl-methanol | 108.15
69 | Acetophenone | 120.16
70 | p-fluoro-nitrobenzene | 141.10
Compound Name | Experimental M.W. (U) | Predicted M.W. (U)
Propene | 42.08 | 40.833
Pyridine | 79.10 | 81.36
Butanal | 72.12 | 73.10
1,3-cyclopentadiene | 66.10 | 62.32
Propyne | 40.07 | 40.85
p-chloro-toluene | 126.59 | 127.3
1,3-cyclohexadiene | 80.14 | 85.82
p-fluoro-nitrobenzene | 141.10 | 133.59
Chloro-ethane | 64.52 | 63.20
Furan | 68.08 | 73.50
Propadiene | 40.07 | 38.70
Phenyl-methanol | 108.15 | 112.88
Fluoro-ethane | 48.06 | 47.90
Cyclopentene | 68.13 | 71.11
Isoquinoline | 129.16 | 135.72
1-Hexyne | 82.15 | 78.29
o-xylene | 106.17 | 113.8
1-nitro-propane | 89.09 | 99.35
1,3-dichloro-propane | 112.99 | 96.06
p-fluoro-nitro-benzene | 141.10 | 128.5
Ethanal | 44.05 | 42.10
Butanal | 72.12 | 72.70
m-fluoro-toluene | 110.13 | 103.73
2-chloro-ethanol | 80.52 | 83.60
No. | Compound Name | M.P. (°C) | B.P. (°C) | D (g/cc) | R.I. | D.M. (Debyes)
1 | Dichloro-fluoro-methane | −135.0 | 9.0 | 1.405 | 1.3724 | 1.29
2 | Dibromo-methane | −52.55 | 97.0 | 2.4970 | 1.5420 | 1.43
3 | Nitro-methane | −28.50 | 100.8 | 1.1371 | 1.3817 | 3.46
4 | Pentachloro-ethane | −29.00 | 162.0 | 1.6796 | 1.5025 | 0.92
5 | Chloro-ethylene | −153.8 | −13.4 | 0.1906 | 1.3700 | 1.45
6 | Ethanal | −121.0 | 20.80 | 0.78 | 1.3316 | 2.69
7 | Chloro-ethane | −136.4 | 12.27 | 0.8978 | 1.3676 | 2.05
8 | Fluoro-ethane | −143.2 | −37.7 | 0.0022 | 1.2656 | 1.94
9 | Iodo-ethane | −108.0 | 72.30 | 1.9358 | 1.5133 | 1.91
10 | Acetyl amine | 82.30 | 221.2 | 0.99 | 1.4278 | 3.76 i
11 | Dimethyl sulfoxide | 18.45 | 189.0 | 1.1014 | 1.4770 | 3.96
12 | Dimethyl-amine | −93.00 | 7.40 | 0.680 | 1.3500 | 1.03
13 | Propyne | −101.5 | −23.2 | 0.7 | 1.386 | 0.78
14 | 2-chloro-propene | −137.4 | 22.65 | 0.9017 | 1.3973 | 1.66
15 | Propene | −185.2 | −47.4 | 0.5193 | 1.357 | 0.37
16 | 2,2-dichloro-propane | −33.80 | 69.30 | 1.1120 | 1.4148 | 2.27
17 | 1-propanol | −126.5 | 97.40 | 0.8035 | 1.3850 | 1.68 i
18 | Trimethyl-amine | −117.2 | 2.87 | 0.671 | 1.3631 | 0.61
19 | Furan | −85.65 | 31.36 | 0.9514 | 1.4214 | 0.66
20 | Thiophene | −38.25 | 84.16 | 1.0649 | 1.5289 | 0.55
21 | 1,2-butadiene | −136.2 | 10.85 | 0.676 | 1.421 | 0.40
22 | Butanal | −99.00 | 75.70 | 0.8170 | 1.3843 | 2.72 i
23 | Cyclopentene | −135.1 | 44.24 | 0.7720 | 1.4225 | 0.20
24 | Pyridine | −42.00 | 115.5 | 0.9819 | 1.5095 | 2.19
25 | Bromo-benzene | −30.82 | 156.0 | 1.4950 | 1.5597 | 1.70
26 | Nitro-benzene | 5.7 | 210.8 | 1.2037 | 1.5562 | 4.22
27 | Phenol | 43.0 | 70.86 | 1.0576 | 1.5418 | 1.45
28 | p-chloro-toluene | 7.5 | 162.0 | 1.0697 | 1.5150 | 2.21
29 | Toluene | −95.0 | 110.6 | 0.8669 | 1.4961 | 0.36
30 | o-xylene | −25.18 | 144.4 | 0.8802 | 1.5055 | 0.62
31 | Dibutyl-ether | −95.30 | 142.0 | 0.7689 | 1.3992 | 1.17 i
32 | Quinoline | −15.60 | 238.1 | 1.0929 | 1.6268 | 2.29
33 | Isoquinoline | 26.50 | 243.3 | 1.0986 | 1.6148 | 2.73
34 | Phenyl-benzene | 71.00 | 255.9 | 0.8660 | 1.5880 | 0.00
35 | Tribromo-methane | 8.30 | 149.5 | 2.8899 | 1.5976 | 0.99
36 | Iodo-methane | −66.45 | 42.40 | 2.2790 | 1.5382 | 1.62
37 | Ethanethiol | −144.4 | 35.00 | 0.8391 | 1.4310 | 1.58 i
38 | Propanone | −95.35 | 56.20 | 0.7899 | 1.3588 | 2.88
39 | Butane | −138.4 | −0.50 | 0.601 | 1.354 | <0.05
40 | Dipropyl-ether | −122 | 91.00 | 0.7360 | 1.3809 | 1.21 i
41 | Fluoro-methane | −141.8 | −78.4 | 0.8 | 1.1727 | 1.85
42 | 1,1-dichloro-ethane | −16.98 | 57.28 | 1.1757 | 1.4164 | 2.06
43 | 1,1-difluoro-ethane | −117.0 | −24.7 | 0.9500 | 1.301 | 2.07
44 | 2-propanol | −89.50 | 82.40 | 0.7855 | 1.3776 | 1.66 i
45 | 1-nitro-propane | −108.0 | 130.5 | 1.01 | 1.4016 | 3.56 i
46 | 2-chloro-propane | −117.2 | 35.74 | 0.8617 | 1.3777 | 2.17
47 | Aniline | −6.30 | 184.1 | 1.0217 | 1.5863 | 1.53
48 | Butanal | −99.0 | 75.7 | 0.8170 | 1.3843 | 2.72 i
49 | m-dichloro-benzene | −24.7 | 173.0 | 1.2884 | 1.5459 | 1.72
50 | m-fluoro-toluene | −87.7 | 116.0 | 0.9986 | 1.4691 | 1.86
51 | Ethane | −183.3 | −88.6 | 0.5720 | 1.0377 | 0.00
52 | Propadiene | −136.0 | −34.5 | 1.7870 | 1.4168 | 0.00
53 | Propene | −185.3 | −47.4 | 0.5193 | 1.357 | 0.37
54 | Acetylene | −80.8 | −84.0 | 0.6 | 1.0005 | 0.00
55 | 2-chloro-ethanol | −67.5 | 128.0 | 1.2002 | 1.4419 | 1.78 i
56 | 1,3-cyclohexadiene | −89.0 | 80.50 | 0.8405 | 1.4755 | 0.44
57 | 1-Hexyne | −131.9 | 71.30 | 0.7155 | 1.3989 | 0.83 i
58 | 1,4-dichloro-butane | −37.3 | 153.9 | 1.1408 | 1.4542 | 2.22 i
59 | Ethanoic acid | 16.604 | 117.9 | 1.0492 | 1.3716 | 1.74
60 | 1,3-dichloro-propane | −99.5 | 120.4 | 1.1878 | 1.4487 | 2.1 i
61 | 2-chloro-2-methyl-propane | −25.4 | 52.0 | 0.8420 | 1.3857 | 2.13
62 | m-chloro-nitrobenzene | 24.00 | 235.0 | 1.34 | 1.5374 | 3.73
63 | p-chloro-nitrobenzene | 83.6 | 242.0 | 1.3 | 1.538 | 2.83
64 | 1,3-cyclopentadiene | −97.2 | 40.00 | 0.8021 | 1.4440 | 0.42
65 | 1,3-butadiene | −108.91 | −4.41 | 0.6211 | 1.429 | 0.00
66 | 4-chloro-phenol | 43.20 | 219.8 | 1.27 | 1.5579 | 2.11
67 | 1,3-cyclohexadiene | −89.0 | 80.5 | 0.8405 | 1.4755 | 0.44
68 | Phenyl-methanol | −15.3 | 205.3 | 1.0419 | 1.5396 | 1.71
69 | Acetophenone | 20.5 | 202.0 | 1.0281 | 1.5372 | 3.02
70 | p-fluoro-nitrobenzene | 27.0 | 206.0 | 1.3300 | 1.5316 | 2.87

Share and Cite

Darsey, J.A. Artificial Intelligence Modeling of Materials’ Bulk Chemical and Physical Properties. Crystals 2024, 14, 866. https://doi.org/10.3390/cryst14100866


IMAGES

  1. SOLUTION: Master's thesis on demystifying artificial intelligence

  2. Master Thesis Ideas in Artificial Intelligence

  3. MASTER THESIS IN ARTIFICIAL INTELLIGENCE

  4. Best AI for Writing Thesis 2024

  5. Artificial Intelligence Master Thesis Ideas

VIDEO

  1. PhD Thesis: On Cognitive Machines (AI) in Organizations

  2. Master Thesis Defense

  3. How do I write my PhD thesis about Artificial Intelligence, Machine Learning and Robust Clustering?

  4. The Birth of AI: Unveiling the Minds Behind the Dartmouth Summer Research Project

  5. Artirev_web: a tool to help find scientific documents / A Scientific Search Engine

  6. ai curious

COMMENTS

  1. PDF Artificial Intelligence and Machine Learning Capabilities and

    that a machine can be made to simulate it." [3] In the AI field, there are several terms. Artificial intelligence is the largest collection, machine learning is a subset of artificial intelligence, and deep learning is a subset of machine learning, as shown in Exhibit 2.3 [4]. This thesis mainly

  2. FIU Libraries: Artificial Intelligence: Dissertations & Theses


  3. Master's in Artificial Intelligence

    The online master's in Artificial Intelligence program balances theoretical concepts with the practical knowledge you can apply to real-world systems and processes. Courses deeply explore areas of AI, including robotics, natural language processing, image processing, and more—fully online. At the program's completion, you will:

  4. Master of Science in Artificial Intelligence (AI)

    Curriculum. Curriculum for Master of Science in Artificial Intelligence. Students must complete at least 30 credit hours of study in the MS program, which is equivalent to a minimum of 10 three-credit graduate courses. As part of these 30 credits, you may select the MS thesis-option, which requires a 9-credit master's thesis, or the project ...

  5. 12 Best Artificial Intelligence Topics for Thesis and Research

    1) Top Artificial Intelligence Topics for Research. a) Natural Language Processing. b) Computer vision. c) Reinforcement Learning. d) Explainable AI (XAI) e) Generative Adversarial Networks (GANs) f) Robotics and AI. g) AI in healthcare. h) AI for social good.

  6. M.S. in Artificial Intelligence

    Master's Project and Thesis Policies. The contents of this section apply only to students who elect to do a DS 700B Master's Project or a DS 701B Master's Thesis in topics related to Artificial Intelligence. Students must first find a research advisor who must be a tenure-track faculty of the DS department, including faculty with a joint ...

  7. PDF Master in Artificial Intelligence Master Thesis

Master in Artificial Intelligence, Master Thesis: Analysis of Explainable Artificial Intelligence on Time Series Data. Author: Natalia Jakubiak. Supervisors: Miquel Sànchez-Marrè, Cristian Barrué. Department of Computer Science, Facultat d'Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC) - BarcelonaTech. October 2022.

  8. Master's Thesis

Together, the Campuslibrary Arenberg, ILT and the Faculty of Engineering Science organise three master's thesis information sessions in order to support you during this process. The first session, about Information Literacy, takes place during the third week of the academic year. You can find all the information about these sessions ...

  9. Thesis Topic Proposals 2022-2023

    Once you have reached an agreement with your promotor, fill in the digital form for your thesis topic. The deadline for submitting this form is 30th of October, 2022. (!!) Below you can see the thesis topics for 2022-2023. We offer 3 different thesis formats: - Format 1 : Regular thesis (fully supervised by KU Leuven) - Format 2 : Thesis in ...

  10. Thesis project

    Content of the thesis. In addition to the main text describing the research, the master thesis should at least contain: a front page, containing: name of the student, name of the supervisors, student number, date, name of the program (master Artificial Intelligence, Utrecht University); an abstract; an introduction and a conclusion;

  11. Artificial Intelligence · University of Basel · Completed Theses

Master's thesis, December 2022. Download: (PDF) (slides; PDF) (sources; ZIP) A Digital Microfluidic Biochip (DMFB) is a digitally controllable lab-on-a-chip. Droplets of fluids are moved, merged and mixed on a grid. Routing these droplets efficiently has been tackled by various different approaches.

  12. PDF Project Management through the lens of Artificial Intelligence

Master's thesis in the Master's Program International Project Management, E2018:066, Annaam Butt. ... Artificial Intelligence (AI) has become one of the most deeply researched and developed technologies in recent years. From smart personal assistants to self-driving vehicles, ...

  13. 8 Best Topics for Research and Thesis in Artificial Intelligence

    So without further ado, let's see the different Topics for Research and Thesis in Artificial Intelligence! 1. Machine Learning. Machine Learning involves the use of Artificial Intelligence to enable machines to learn a task from experience without programming them specifically about that task. (In short, Machines learn automatically without ...

  14. PDF The impact of artificial intelligence amongst higher ...

Artificial intelligence has developed a lot in the past years; each day loads of new tools and software are released. It has also been taken into use among teachers and students and can offer great advantages in education. The idea of the topic came from articles and TikTok videos on students using ChatGPT.

  15. PDF The use of artificial intelligence (AI) in thesis writing

    Text generator (chatbot) based on artificial intelligence and developed by the company OpenAI. Aims to generate conversations that are as human-like as possible. Transforms input into output by "language modeling" technique. Output texts are generated as the result of a probability calculation.

  16. Thesis Guidelines

    Thesis format: All students need to complete a thesis of approximately 50 pages for the thesis text (so not including title, foreword,...) . If the promotor deems it appropriate it may also be possible to write a 20 page paper of publishable quality instead. Standard is the 50-pages format. The faculty of Engineering has decided that the cover ...

  17. Master of Artificial Intelligence

    The Master of Artificial Intelligence programme trains a wide variety of students in all areas of knowledge-based technology and the wider application of that technology across multiple fields, including in the development of intelligent robots. About the programme. Admission and application. Tuition fees. After graduation.

  18. Available Master's thesis topics in machine learning

    The field of computer security presents a wide variety of challenging problems for artificial intelligence and autonomous agents. Guaranteeing the security of a system against attacks and penetrations by malicious hackers has always been a central concern of this field, and machine learning could now offer a substantial contribution.

  19. Master Thesis


  20. PDF Artificial Intelligence in Cybersecurity and Network

    These factors combine to create problems for security teams to keep up with the pace, and smarter solutions are required. This thesis gives an overview of how artificial intelligence (AI) approaches, and sub-domains such as machine learning and deep learning, can be applied to cybersecurity issues.

  21. PDF ARTIFICIAL INTELLIGENCE IN FINANCE

artificial intelligence along with the focus on its benefits and challenges. The researcher likewise investigated the global adoption of artificial intelligence when studying the artificial intelligence investment and start-ups in Europe. The method of data collection used for this thesis was document analysis, a qualitative research method.

  22. Master Thesis Topics in Artificial Intelligence


  23. Masters of Science in Artificial Intelligence

    AUM's Master of Science in Artificial Intelligence program is a 30-hour STEM program taught in a cohort model. Candidates can choose from a thesis option (for those wanting to pursue research or teaching careers) or a non-thesis option, for practitioners. ... Artificial Intelligence Thesis II: The second semester of a two-semester course ...

  24. Artificial Intelligence Modeling of Materials' Bulk Chemical and

    Energies of the atomic and molecular orbitals belonging to one and two atom systems from the fourth and fifth periods of the periodic table have been calculated using ab initio quantum mechanical calculations. The energies of selected occupied and unoccupied orbitals surrounding the highest occupied and lowest unoccupied orbitals (HOMOs and LUMOs) of each system were selected and used as input ...