Biography: Nikolaos Mavridis, PhD from the Massachusetts
Institute of Technology (MIT), is an academic and consultant specialising in
Artificial Intelligence and Robotics. He is the founder and director of the
Interactive Robots and Media Lab (IRML), and has served as faculty at numerous
institutions worldwide, including New York University (NYU) in Abu Dhabi and
NYU Poly, the United Arab Emirates University, the Hellenic National Center for
Title: "Data & Intelligence: The new Oil for the 4th Industrial Revolution"
Although Artificial Intelligence and Robotics arguably started in the 1960s, their roots are much older: Aristotle's early formalization of parts of human thought, the automata of Ancient Egyptian temples, Al-Jazari's inventions, and beyond. Nowadays, through the internet and through the increasingly large numbers of sensors monitoring various aspects of the physical world, immense amounts of useful data have become available. Through the latest advances in AI and Machine Learning techniques, such data are being used to analyze, predict, and recently also to modify different aspects of our world. Most importantly, Data & Intelligence will play a fundamental role in the so-called "Fourth Industrial Revolution", which we have just started to enter. With the Internet of Things providing not only data but also intelligence potentially in any object of the physical world, with the advent of Collaborative Robots that work together with humans on production lines (not just alone in cages), and with Additive Manufacturing techniques such as 3D printing, we are arguably now entering this new revolution: one which will blur the lines between the physical, digital, and biological spheres, and which promises to radically change our lives. We will look in more detail not only at the developments that have enabled humanity to start entering this novel phase, but also at how it could change ourselves, our homes, and our cities. Most importantly, we will also touch upon important open issues that have arisen, as well as glimpses of a future that holds the power to create better lives for an ever increasing part of the world's population.
Darwich holds a BSc and MSc from Cairo University, and a DIC and PhD from
Imperial College London, all in petroleum engineering. He is a petroleum
engineering consultant and a director of the UK-based consulting and
engineering consultant and a director of the UK-based consulting and
training company Zeal Energy Ltd. He is also a Visiting Professor at
Effat University, Jeddah, Saudi Arabia. Dr Darwich has more than 30
years of industrial and academic experience in oil & gas field
development, assets evaluation, reserves & resources reporting,
production optimisation, economic evaluation, and digital applications
in the oil and gas industry. His experience covers conventional and
unconventional oil and gas resources as well as CO2 storage. He also has
published several papers on multiphase flow metering, oil and gas
production optimisation and expert systems for fluids metering and
selection of optimal enhanced oil recovery techniques.
Title: Artificial Intelligence Applications in the Oil and Gas Industry
The oil industry has never been technology-averse, especially regarding digital technology. Advances in digital technology in the 1980s were behind the emergence of many new applications in the industry. In the current challenging low-oil-price environment, the oil industry is looking to implement more digital solutions to minimise capital and operating expenditures and maximise oil and gas production and reserves. With more and more oil companies capturing comprehensive real-time data, the amount of data is increasing exponentially. This requires advanced tools and techniques to extract meaningful and useful information. Oil and gas organisations have re-engineered their production strategies and operational models to include artificial intelligence (AI) as an important element in business transformation. Therefore, the AI market in oil and gas has a huge scope for rapid growth and is estimated to reach US$2.85 billion by 2022.

The talk will present a quick overview of the history of AI implementation in the upstream oil and gas industry. Some of the current successful applications of AI in the industry will be presented, and future potential applications will be highlighted. The talk will bring examples of the implementation of AI in the Middle Eastern oil and gas industry in general, and in Saudi Arabia in particular. It will also examine the challenges facing further implementation of AI technology in the oil and gas industry and present recommendations for accelerating and widening its applications.
Miltiadis D. Lytras
Miltiadis D. Lytras is an expert in advanced computer science and
management, editor, lecturer, and research consultant, with extensive
experience in academia and the business sector in Europe and Asia. Dr.
Lytras is Research Professor at Deree College – The American College of
Greece. Dr. Lytras is a world-class expert in the fields of cognitive
computing, information systems, technology enabled innovation, social
networks, computers in human behavior, and knowledge management.
Abstract: Title: "Artificial Intelligence Research for Sustainable Economic Growth in KSA: A learning and technology challenge for Vision 2030"
Cognitive computing has received increasing attention from academia and industry, as it brings cognitive science and computing together for the development of new computational platforms, infrastructures, systems, and algorithms. Artificial intelligence and computational intelligence are key elements of success in cognitive computing. The evolution of Artificial Intelligence, Data Science, and Cognitive Computing sets various challenges for learning and education. In this speech, we elaborate on the main aspects of Cognitive Computing and its potential impact on learning, education, and technology-enhanced learning. A theoretical meta-model for the adoption of Cognitive Computing in KSA is also promoted. The key contribution of this speech is twofold: it sets the underpinnings for a new thread of research on Artificial Intelligence in education, and it determines the value-adding context for applied research on skills and competencies management and the enhancement of innovation and entrepreneurship in KSA. It also promotes a new data-driven decision-making framework in education linked to Vision 2030, which is analyzed as a learning and technology challenge.
Biography: Dr. Manal is an assistant professor at Umm Al-Qura University in the College of Computer and Information Systems. She earned her master's and PhD degrees from the University of Sheffield in Britain. Her research interest lies in the alignment of nearly-duplicate content in video streams with manifold embedding, and embedding manifold matching with application to instance search based on video queries.
Abstract: Title: "Will Artificial Intelligence Beat Human Performance in Glaucoma Detection?"
Glaucoma is an eye disease that damages the optic nerve head (ONH), causing vision loss. Early diagnosis and treatment are therefore important in preventing possible blindness caused by glaucoma. Current identification is based on manual segmentation of the optic cup and disc to examine the cup-to-disc ratio (CDR). However, experts' annotation of these regions is a rather difficult and tedious task. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. However, their performance depends on the availability of a large number of labeled samples for the training phase. Usually, unlabeled images are more readily available than labeled ones.
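As an aside not drawn from the talk itself, the cup-to-disc ratio mentioned above can be illustrated concretely: given binary segmentation masks for the cup and the disc, the vertical CDR is the ratio of their vertical extents. A minimal Python sketch (the mask sizes and shapes below are hypothetical toy data):

```python
import numpy as np

def vertical_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary segmentation masks.

    The vertical extent of a mask is the number of image rows that
    contain at least one foreground pixel.
    """
    cup_h = int(np.count_nonzero(cup_mask.any(axis=1)))
    disc_h = int(np.count_nonzero(disc_mask.any(axis=1)))
    if disc_h == 0:
        raise ValueError("disc mask is empty")
    return cup_h / disc_h

# Hypothetical 10x10 masks: the disc spans 7 rows, the cup 3 of them.
disc = np.zeros((10, 10), dtype=bool)
disc[2:9, 3:8] = True
cup = np.zeros((10, 10), dtype=bool)
cup[4:7, 4:7] = True

print(round(vertical_cdr(cup, disc), 3))  # 0.429
```

A larger vertical CDR (often above roughly 0.6 in clinical practice) is one of the signs examiners look for, which is why automating this measurement matters.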
Here we demonstrate the performance of glaucoma detection using three CNN models with different learning methods, and also compare the performance of these models with that of ophthalmologists. We use transfer and semi-supervised learning methods based on both labeled and unlabeled data. First, a transfer learning model is defined based on a CNN pre-trained on non-medical data and fine-tuned using labeled data. Second, a semi-supervised framework is constructed and trained using both labeled and unlabeled data, based on two different approaches. The experimental results on a small public database demonstrate the efficacy of deep learning models when applied to glaucoma, which is a promising step towards providing a pre-screener to identify individuals with early-stage glaucoma. Compared with two ophthalmologists, all the presented models achieve better performance, demonstrating the capability of artificial intelligence to detect glaucoma with a high level of reliability.
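The abstract does not specify which semi-supervised approaches are used; one common illustration of the general idea is self-training (pseudo-labeling), where a classifier fit on the labeled set labels its most confident unlabeled points and is then refit. A minimal NumPy sketch under that assumption (toy two-blob data and a plain logistic classifier, not the speaker's method or dataset):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_logistic(X, y, lr=0.5, steps=500):
    """Plain logistic regression via gradient descent (bias folded in)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    return sigmoid(np.hstack([X, np.ones((len(X), 1))]) @ w)

def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=0.9):
    """Self-training: repeatedly pseudo-label confident unlabeled points."""
    X, y = X_lab.copy(), y_lab.copy()
    w = fit_logistic(X, y)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        p = predict_proba(w, X_unlab)
        confident = (p > threshold) | (p < 1 - threshold)
        if not confident.any():
            break
        # Absorb confident pseudo-labels into the training set and refit.
        X = np.vstack([X, X_unlab[confident]])
        y = np.concatenate([y, (p[confident] > 0.5).astype(float)])
        X_unlab = X_unlab[~confident]
        w = fit_logistic(X, y)
    return w

rng = np.random.default_rng(0)
# Two labeled points and 100 unlabeled points from two Gaussian blobs.
X_lab = np.array([[-2.0, -2.0], [2.0, 2.0]])
y_lab = np.array([0.0, 1.0])
X_unlab = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])

w = self_train(X_lab, y_lab, X_unlab)
print(predict_proba(w, np.array([[-2.0, -2.0]]))[0] < 0.5)  # True
```

The appeal, as in the abstract, is that the unlabeled pool is far larger than the labeled one, so each round effectively enlarges the training set for free.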
Richard M. Satava
Biography: Richard M. Satava, MD, FACS, is Professor Emeritus of Surgery at the University of Washington Medical Center (Seattle). He has served as Professor of Surgery at Walter Reed Army Medical Center, Program Manager at the Defense Advanced Research Projects Agency (DARPA), and Senior Science Advisor to the US Army Medical Research Command. He trained at Johns Hopkins University; Medical School at Hahnemann University of Philadelphia; internship at the Cleveland Clinic; and surgical residency at the Mayo Clinic, with a Master of Surgical Research.
Title: "Surgical Robotics and The Next Generation of Non-invasive Procedures Require the next Generation of Simulation and Training."
Even as this fourth revolution in surgery in 25 years (robotic surgery) gains in popularity, a much more disruptive change is beginning with the next revolution: non-invasive Directed Energy Surgery for Diagnosis and Therapy (DEDAT). While surgeons have been investigating different types of energy for decades, including success with some forms such as lithotripsy, photo-biomodulation using photonics, and high-intensity focused ultrasound (HIFU), these pioneering techniques are but the tip of the iceberg heralding the transition to non-invasive surgery. When energy is combined with robotics, information technologies, molecular and genetic manipulation, imaging systems, and nanotechnology (to name a few), diseases will begin to be cured at the cellular and molecular level, and non-invasively. Such systems are based upon the premise of what robotics and automation can bring: precision, speed, and reliability, especially as surgery 'descends' into operating at the cellular and molecular level. Nobel Laureate Richard Feynman was right: there is "room at the bottom"!
Clinical practice of surgery now requires evidence-based medicine to determine best practices, and for the technical skills of a surgeon, the use of simulation provides an opportunity to create technical-skills curricula (courses) in which the performance of the learner is quantitatively measurable. Because we can accurately measure skill performance using simulation, it is possible to create a benchmark (standard) which every learner must achieve before being allowed to operate on a patient. This new methodology, called Proficiency-based Progression (PbP), allows not only technical skills but also non-technical skills to be objectively measured, and therefore to be required to be performed without errors before conducting surgical (or other) procedures. The Fundamentals of Robotic Surgery (FRS) will be presented as an exemplar of this new power of simulation.
Biography: Dr. Sabbagh, MBBS, FRCSC, is a Consultant Neurosurgeon, Pediatric Neurosurgeon, and Epilepsy Surgeon. He is an avid researcher in the field of virtual reality simulation for skill training and assessment. He also supervised a team that built three neurosurgical simulation scenarios on the NRC NeuroTouch platform.
He is the chairman of the National Scientific Council on Neurosurgery at the Saudi Commission for Health Specialties. He is the head of the research and development unit at the Clinical Skill Simulation Center, assistant chairman of the department of surgery for research and higher education, and an assistant professor at King Abdulaziz University, Jeddah, Saudi Arabia. He currently holds the position of fellowship director of the Pediatric Neurosurgery training program at King Abdulaziz University Hospital.
Title: "Virtual Reality Simulation (VRS) is one of the most promising new technologies available to neurosurgeons"
This technology is expected to be a powerful adjunct to neurosurgical training and, in the near future, to surgical rehearsal. Due to the restriction of duty hours in some countries, and because of the high risks involved with neurosurgical procedures in general, the real-time hands-on exposure of neurosurgical trainees to actual patients is becoming more and more limited. Training in neurosurgery through virtual reality simulation (VRS) has gone through significant advancements over the past decade, mainly through technological enhancements, the development of new and complex scenarios, and improvements in validation.
Most available neurosurgical VRS scenarios have only reached face, content, and construct validation. Only a few complex neurosurgical procedures have been developed, and these are not yet published. We developed the "Complex Neurosurgical VRS Development Roadmap". Through this roadmap we developed three complex scenarios and are currently working on their publication and validation.
The author led and collaborated with a group of engineers, computer scientists, and neurosurgeons to develop three complex neurosurgical scenarios on a computer-based haptic feedback platform. These are currently under validation.
Conclusions: Neurosurgical VRS is expected to be a powerful aid to neurosurgical training. More work is needed in terms of validation and in terms of development of new complex neurosurgical VRS.
Areej is an assistant professor in the Faculty of Computer Science and
Information Technology at King Abdul-Aziz University. She earned her
master's and PhD degrees in computer science from the University of Waterloo.
Her research interest lies in the area of artificial intelligence, and
her areas of expertise include machine learning, natural language
processing, affective computing, and sentiment analysis. She has
collaborated actively with researchers in computer science and other
disciplines, particularly in sociology, on the problem of building
affectively intelligent systems.
Abstract: Title: "Modeling Sentiment Dynamics in Event-Based Text"
Sentiment analysis has received greater attention in the last few years, as it plays a significant role in finding solutions for numerous real-world problems in a variety of contexts.
Researchers have made great strides in the analysis of user sentiments regarding textual information, focusing in general on opinionated and review documents. We propose a contextual machine learning model that performs fine-grained sentiment analysis of event-based text and models the emotional responses arising from this textual information.
Analyzing the sentiments that arise from non-review or event-based documents is a challenging task, as it requires a deep understanding of the syntactic structure of the sentences and the meaning of the words that compose them. Event-based texts also do not express an explicit opinion about a product or service, and do not contain the highly opinionated objects that sentiment analysis models might rely on. The proposed model was evaluated on a fairytale dataset and a news-articles dataset, and despite the challenges associated with working on unstructured real-world text, the proposed approach achieved high agreement with the user annotations.
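The contextual model itself is not detailed in the abstract; as a deliberately simple illustration of what "sentiment dynamics" means, one can score each sentence of a narrative with a toy lexicon and track the resulting trajectory. The mini-lexicon and story below are invented for illustration and are not the speaker's approach:

```python
# Toy sentence-level sentiment trajectory for event-based text.
# The lexicon and story are illustrative assumptions, not real resources.
LEXICON = {"joy": 1.0, "happy": 1.0, "won": 0.8, "lost": -0.8,
           "sad": -1.0, "fear": -0.9, "celebrated": 0.9, "died": -1.0}

def sentence_score(sentence: str) -> float:
    """Average lexicon polarity of the words in one sentence."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def trajectory(text: str) -> list:
    """Score each sentence in order; the sequence is the sentiment dynamics."""
    sentences = [s for s in text.split(".") if s.strip()]
    return [round(sentence_score(s), 2) for s in sentences]

story = "The kingdom celebrated. Then the old king died. The people were sad."
print(trajectory(story))  # [0.9, -1.0, -1.0]
```

The abstract's point is precisely that such lexicon lookups fail on event-based text without explicit opinion words, which is what motivates the contextual model presented in the talk.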
Biography: Director of the Center for Artificial Intelligence and Chief Information and Security Officer at King Khalid University. Dr. Alelyani obtained his PhD from Arizona State University in 2013 in Artificial Intelligence and Machine Learning. He serves as the Director of the Center for Artificial Intelligence, a research center at KKU. Dr. Alelyani also has several publications in the field, has served as an editorial board member of JISCR, as a program committee member of several international conferences, and as a reviewer for several venues including ICTAI, KACSTIT, AAAI, ICMLA, ICML, IRI, JMLR, AJSE, and others.
Title: "Artificial Intelligence as an Education Enabler"
Quality Education is one of the 17 Sustainable Development Goals (SDGs), adopted by the UN General Assembly in September 2015 under the principle of leaving no one behind. It aims to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all. These SDGs are meant to be achieved by 2030. However, quality education is not an easy task, due to several obstacles: more than 72 million children around the world remain unschooled; in some developing countries, many children remain in education for less than four years; there is inequality between girls and boys; and the lack of qualified teachers in the most targeted countries is another challenge. We believe that Artificial Intelligence can be an education enabler to help the world achieve the SDGs, and quality education in particular. In this keynote, we will present how AI is implemented and adopted to do so.
Imtiaz Hussain Khan
Imtiaz is an associate professor in the Department of Computer Science at
King Abdul-Aziz University, Jeddah, Kingdom of Saudi Arabia. He
received his MS in Computer Science (Evolutionary Computation) from the
University of Essex UK in 2005 and PhD in Artificial Intelligence
(Natural Language Processing) from the University of Aberdeen UK in
2010. He has more than twenty scholarly papers published in well-reputed
conferences and journals in the area of artificial intelligence. His
research interests include Natural Language Processing, Evolutionary
Computation and building cognitive models.
Abstract: Title: "AI as a Tool to Support Outcome Based Learning"
A large collection of documents is freely available on the Web, and modern search engines, for example Google, enable authors to easily find texts on different topics. With the help of sophisticated word-processing tools, authors can easily reuse such existing texts in their own work. The diverse sources of information and the ease of text reuse make it difficult to detect manually whether the author of a document has plagiarized or not. In this talk, a plagiarism detection system for the Arabic language will be described that combines different similarity-measure techniques, aided by modern natural language processing techniques, to uncover plagiarism in Arabic documents. The proposed system consists of two main components: document retrieval and detailed similarity analysis. The document-retrieval component generates queries from a given suspicious document and makes use of the Google search API to retrieve candidate source documents from the Web. The similarity-analysis component takes each source document in turn and attempts to identify the plagiarized parts in the suspicious document. The proposed system is thoroughly evaluated using an indigenous corpus. At the document-retrieval level, the system achieved above 75% accuracy in terms of F-score, whereas at the detailed similarity-computation level, the F-score is above 70%. Finally, in this talk, I also intend to highlight the challenges and further directions for research in this ever-demanding area.
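The talk summary does not spell out which similarity measures are combined; as a hedged illustration of the detailed-analysis stage, Jaccard overlap of word 3-grams is one standard way to flag reused passages. The texts and threshold below are illustrative assumptions, not the system's actual parameters:

```python
def ngrams(text: str, n: int = 3) -> set:
    """Set of word n-grams in a text (lowercased, whitespace-tokenized)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def flag_reuse(suspicious: str, source: str, threshold: float = 0.2) -> bool:
    """Flag likely reuse when word-3-gram overlap crosses the threshold."""
    return jaccard(ngrams(suspicious), ngrams(source)) >= threshold

src = "the quick brown fox jumps over the lazy dog near the river bank"
sus = "the quick brown fox jumps over the sleepy dog near the river bank"
print(flag_reuse(sus, src))  # True: most 3-grams are shared
```

A real Arabic system layers morphological normalization and semantic similarity on top of such surface overlap, since paraphrased reuse defeats exact n-gram matching.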
Md. Abdur Rahman is an Associate Professor and Chairman of the
Department of Cyber Security and Forensic Computing, College of
Computing and Cyber Sciences, University of Prince Muqrin (UPM), Madinah
Al Munawwarah, Kingdom of Saudi Arabia. In 2018, Dr. Rahman
received the Best Researcher Award from UPM. His research interests include
Blockchain and off-chain solutions, AI for Cyber Security, AI for Health
Applications, Cyber Security for Cyber Physical Multimedia Systems,
Secure Serious games, Security in Cloud and fog, Multimedia security for
healthcare Applications, IoT and 5G security, secure smart city
services, secure ambient intelligent systems, spatio-temporal multimedia
big data security, and next generation media security. He has authored
more than 100 publications. He has 1 US patent granted and several are
pending. Dr. A. Rahman has received more than 12 million SAR as research
grant. Dr. A. Rahman is the founding director of Smart City Research
Title: "Artificial Intelligence as an Education Enabler"
More than 7 million cyber-attacks occur every day. Intercepting, handling, and mitigating those attacks requires AI-based threat intelligence. This multitude of sophisticated attack vectors is crippling our businesses and critical infrastructure. Hence, it is becoming a business need to have AI for threat intelligence. However, AI-based attack and threat tools are also leveraged by the black market, and countering the bad AI used by threat agents with better AI is a challenging task. AI will lead to the autonomous stockpiling of software vulnerabilities (e.g. zero-day attacks). Using machine learning approaches, these systems will automatically decide on the most effective attack and defense vectors. In the future, "data warfare" will include a virtual battle between artificial intelligences seeking to disable one another and infect command and control systems with disinformation or malicious code. AI can be used to create intelligent malware, which needs to be handled by better AI. AI can also be used to target IoT infrastructure, which underpins Industry 4.0 and the control of a nation's critical infrastructure; rogue elements with access to the power of AI can attack this infrastructure, so proper counter-AI will be needed. In this talk, we will present different metaphors of AI to identify next-generation threats and their counter-strategies.
Hoffman, Ph.D. from the University of Washington (UW) in Seattle, USA
is a research scientist specializing in using virtual reality to
distract children during painful medical procedures, and in helping
therapists use virtual reality to treat patients for severe psychological
problems. Dr. Hoffman is the Director of the Virtual Reality Research
Center at the University of Washington in Seattle. He conducted
pre-doctoral research at Princeton University, was visiting faculty at
Stanford University in 2017, and is currently collaborating on VR
research projects with Dr. Wadee Alhalabi and students at Effat
University in Jeddah, Kingdom of Saudi Arabia.
Title: "In the Future: Virtual Reality Therapy Avatars Enhanced by Artificial Intelligence"
Empirical evidence from clinical research studies and fMRI brain scans shows that immersive virtual reality can help reduce the acute procedural pain experienced by patients during painful medical procedures where the patients remain awake, such as burn wound care, dental procedures, needle injections, and endoscopic surgery. In addition, new technologies are quickly becoming commercially available that can potentially further enhance the effectiveness of virtual reality pain control. For example, according to one study recently conducted at Effat University in the Kingdom of Saudi Arabia, VR pain reduction was significantly more effective when participants used their eye movements to interact with objects in virtual reality during brief painful stimuli (throwing snowballs in SnowCanyon). In the future, artificial intelligence may help make virtual reality pain control even more effective.
Another area where AI and VR can be combined is clinical therapy for treating people with psychological problems. In SpiderWorld, patients with spider phobia put on a VR helmet and "go into" a computer-generated kitchen that has animated spiders. Starting off very slowly, the therapist helps the patient gradually become more comfortable being around (virtual) spiders, until after several sessions the patients typically are much less afraid of the spiders and have learned skills that help them maintain their improvements long term. Currently, the number of patients needing therapy far exceeds the supply of good therapists. In the future, expert therapists may train an AI program (e.g., a virtual human) in how to treat patients. The algorithms for treating phobias would be relatively simple. For more serious/severe psychological disorders, such as treating suicidal women, the AI computer program must be a lot more sophisticated, but AI-agent advice could influence and inform therapy-team decisions about how to maximize symptom reduction in individual patients. Although currently in the early stages, there is enormous potential for using AI to enhance the effectiveness of virtual reality therapy.
Badar K. Khan
Biography: As the head of Development at the MindSphere Application Center, he brings many years of project management and technology expertise within Siemens. He is a Siemens-certified R&D Project Manager and has run many research projects covering cloud technologies, artificial intelligence, and embedded systems. His goal is to help customers co-create digital solutions using the Siemens MindSphere IIoT cloud-based platform, delivering software and hardware solutions that address their unique business needs.
Title: "Intelligent and Reliable Software for AI Based Systems"
Advanced manufacturing is bringing new changes to the production floor, with autonomous robots and new technologies paving the way for more intelligent software. This new intelligent software brings new challenges in how software is architected, developed, and validated. In this talk we not only address new methods for engineering systems on the production floor, but also discuss how Siemens is paving the way in answering the challenges brought about by intelligent software and machines.