
Eric Charton

Eric Charton holds a Master's degree in machine learning applied to voice recognition and a Ph.D. in machine learning applied to information extraction and natural language generation. He worked as a scientist and research project coordinator in academic settings in Europe (University of Avignon) and North America (CRIM, École Polytechnique de Montréal) before becoming head of search engine research and development at Yellow Pages Canada. Since March 2018, he has been Senior AI Director at National Bank of Canada.

Hamid Arian

Measuring risk is at the center of modern financial risk management. As the world economy becomes more and more complex, standard modeling assumptions are violated, and advanced artificial intelligence solutions may provide the right tools to analyze the global market. In this talk, we present a novel approach for measuring market risk called Encoded Value-at-Risk (Encoded VaR), based on the variational auto-encoder (VAE), a type of artificial neural network. Encoded VaR is a generative model that can reproduce market scenarios from a range of historical cross-sectional stock returns while increasing the signal-to-noise ratio present in the financial data and learning the dependency structure of the market without any assumptions about the joint distribution of stock returns.
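
A rough sketch of the core idea follows; it is not the paper's architecture, and the `returns` matrix, equal-weight portfolio, network sizes, and training settings are all placeholder assumptions.

```python
# Illustrative sketch only: a small VAE trained on cross-sectional returns, then
# sampled to generate synthetic market scenarios from which a VaR-style quantile
# is read off. Data and hyperparameters below are toy assumptions.
import numpy as np
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_assets, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_assets, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_assets))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

returns = torch.randn(1000, 50) * 0.01       # placeholder for historical daily returns
weights = torch.full((50,), 1 / 50)          # equal-weight portfolio (assumption)
model = VAE(n_assets=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):                         # training loop; epoch count is arbitrary
    recon, mu, logvar = model(returns)
    recon_loss = ((recon - returns) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    loss = recon_loss + kl
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                        # sample synthetic scenarios from the prior
    z = torch.randn(10_000, 8)
    scenarios = model.dec(z)
    pnl = scenarios @ weights
    var_99 = -np.quantile(pnl.numpy(), 0.01)  # 99% VaR of simulated portfolio P&L
print(f"Simulated 99% VaR: {var_99:.4f}")
```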

Hamid Arian is an Associate Director of Research and Financial Engineering at Equitable Bank and an Assistant Professor of Machine Learning and Quantitative Finance at the Sharif University of Technology. His experience in the field of finance spans various investment asset classes in Toronto, New York, and the Middle East. Hamid completed his Ph.D. in Financial Mathematics at the University of Toronto and holds the CFA and FRM charters. In this event, Hamid will talk about a recent work titled Encoded Value-at-Risk: A machine learning approach for portfolio risk measurement, published in Mathematics and Computers in Simulation.

Olga Tsubiks

Olga is a Director of Strategic Analytics and Data Science at RBC, where she applies machine learning and automation for capacity planning and optimization. She has been working in data science for over 10 years. Olga brings data to life through machine learning, analytics, and visualization. Outside of her work at RBC, she has worked directly with global organizations such as the UN Environment World Conservation Monitoring Centre, and World Resources Institute, as well as prominent Canadian non-profits such as War Child Canada and Rainbow Railroad on various data science and analytics challenges.  

Isaac De Souza

Artificial Intelligence (AI) is one of the most powerful technologies being adopted today and will significantly impact the financial industry and society at large. In this session we will demystify the “black box” of AI, discuss the novel risks it brings, how regulators are reacting, and what can be done to ensure we safely and securely use AI.

Isaac is a scientist and entrepreneur with a multi-disciplinary background. His interest in Artificial Intelligence and Emerging Technologies has driven him for the last decade as he has worked in both research and industry, from NASA to Silicon Valley and now the financial industry. Isaac is passionate about communicating the adoption risks introduced by technologies and driving insights into key decision-making. He joined the risk management team at BMO in 2020 to support oversight activities.

 

Talieh Tabatabaei

The advent of AI/ML and big data for decision-making holds promise but also poses risks. Some recent incidents in these areas have generated skepticism: will these models truly benefit individuals, or will the risks inherent in their use overshadow their value? This talk explains the ML model lifecycle in financial institutions and how those institutions practice responsible AI/ML to ensure the risks they introduce are evaluated, managed, and mitigated.

Talieh holds a Bachelor of Engineering in Computer Engineering and a Master of Science in Electrical and Computer Engineering, with several years of post-graduate research in ML/AI and publications in the subject area. Talieh also has several years of industry experience in the ML/AI field. She currently works on the ML/AI model validation team at TD Bank as a senior manager, leading a team of data scientists who validate the ML/AI models used at the Bank.

Meisam Soltani-Koopa

Customer Lifetime Value (CLV) is a popular measure of the future profitability of customers, used to allocate resources more efficiently and keep a company healthy during difficult economic conditions. We use machine learning tools to predict the expected revenue from each customer during one year of their relationship with the institution as that customer's CLV. The approach is implemented on two datasets from two international financial institutions. Different feature engineering techniques were applied to improve the predictive power of the model, and we used two-stage or three-stage prediction models. In the second phase, we train a reinforcement learning algorithm on the history of marketing activities, with CLV as the customer's state, to determine the optimal marketing action for customers in each state and maximize their profitability.
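
The sketch below is only an illustration of the staged setup described above, with made-up customer features, CLV buckets as states, and toy marketing actions; it is not the speaker's implementation.

```python
# Minimal sketch under assumptions: a two-stage CLV model (purchase propensity x
# expected spend) followed by tabular Q-learning over discretized CLV "states".
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                       # customer features (placeholder)
active = rng.integers(0, 2, 5000)                     # bought within the year?
spend = np.where(active, rng.gamma(2.0, 200.0, 5000), 0.0)

stage1 = GradientBoostingClassifier().fit(X, active)  # stage 1: will the customer buy?
stage2 = GradientBoostingRegressor().fit(X[active == 1], spend[active == 1])
clv = stage1.predict_proba(X)[:, 1] * stage2.predict(X)   # expected one-year revenue

# Stage 3: Q-learning where the state is a CLV bucket and actions are campaigns.
n_states, n_actions, alpha, gamma = 4, 3, 0.1, 0.9
Q = np.zeros((n_states, n_actions))
states = np.digitize(clv, np.quantile(clv, [0.25, 0.5, 0.75]))

for s in rng.choice(states, size=5_000):              # replayed pseudo-episodes
    a = rng.integers(n_actions)                       # logged marketing action
    reward = rng.normal(loc=clv[states == s].mean() * (a + 1) / n_actions)
    s_next = min(s + (reward > 0), n_states - 1)      # toy transition (assumption)
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

print("Greedy action per CLV bucket:", Q.argmax(axis=1))
```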

Meisam, with a background in Electrical Engineering and the automotive industry, moved to the data science and analytics world. He graduated from the University of Waterloo Management Sciences program in 2014. While pursuing his Ph.D. in Management Analytics at Queen's University, he has been involved in many data science and analytics projects at Scotiabank.

Chetan Phull

Chetan Phull is a technology and data management lawyer at Deloitte Legal Canada in Ontario. He is in his tenth year of practice and holds two IAPP certifications (CIPP/C/US). He regularly advises on regulatory and contractual aspects of product development, integrated digital services, and cyber incident response. Chetan's subject matter focus is in privacy, artificial intelligence, blockchain, SaaS/PaaS, and IT service contracts. His publications include BIG DATA LAW IN CANADA, and numerous compilations and articles on virtual asset regulation. Chetan is admitted to the Bars of Ontario, New York State, and Massachusetts. 
He is a former law clerk of the Nova Scotia Court of Appeal, with law degrees from Queen’s University and University College London.

Alexey Rubtsov

The last decade has witnessed large-scale adoption of Artificial Intelligence and Machine Learning (AI/ML) models in finance. Although AI/ML can bring many benefits to financial services (e.g., higher accuracy, automation), it can also introduce new risks and amplify existing ones. In this respect, financial regulators around the world are currently working on regulatory requirements that AI/ML models should satisfy when applied by financial institutions. In this presentation we discuss some of the most recent developments in AI/ML model risk management.

Alexey Rubtsov is an Assistant Professor of Mathematical Finance at Ryerson University, a Senior Research Associate at the Global Risk Institute, and an Academic Advisor at Borealis AI. His areas of focus are Systemic Risk, FinTech, and Asset Allocation. His academic research has been published in journals such as Operations Research, Journal of Banking and Finance, Journal of Economic Dynamics and Control, and Annals of Finance, among others. He holds a PhD in Operations Research and an MSc in Financial Mathematics from North Carolina State University.

Kate Goldman

Kate Goldman is the Senior Policy Associate within Elliptic's Global Policy and Research Group. Goldman's work focuses on developing the regulatory framework surrounding cryptocurrencies for US-based regulators and policymakers. Goldman is a board member of the Modern Markets Initiative. Goldman previously led the FinTech program within the Milken Institute's Center for Financial Markets, where her work emphasized responsible and pro-innovation policies for digital assets. While at the Milken Institute, she led many executive groups, roundtables, and panels on matters related to financial technology. Kate Goldman holds a Bachelor's Degree in Public Policy from the University of Delaware. 

Hakimeh Purmehdi

Hakimeh Purmehdi is a project lead and senior data scientist at the Ericsson Global Artificial Intelligence Accelerator, where she leads innovative AI/ML solutions for future wireless communication networks. She received her Ph.D. in electrical engineering from the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada. After completing a postdoc in AI and image processing at the Radiology Department, University of Alberta, she co-founded Corowave, a start-up developing a movement-resilient, non-contact, continuous vital-signs monitoring platform and sensor solution that leverages radio-frequency technology and machine learning. Before joining Ericsson, she was with Microsoft Research (MSR) as a research engineer and collaborated on the development of the TextWorld project, a testbed for reinforcement learning research. Her research focuses on the intersection of wireless communication (5G and beyond, including resource management and edge computing), AI solutions (such as online learning, federated learning, reinforcement learning, deep learning, and transfer learning), optimization, and biotech.

Vinay Narayana

Vinay Narayana is a seasoned data and ML engineering leader at Levi Strauss & Co. He has extensive experience building and managing large-scale AI/ML platforms and batch/real-time data pipelines. He has presented at numerous conferences on topics ranging from storage management software to DevOps to big data and machine learning. Vinay lives in Southborough, MA with his wife and dog.

Rohit Saha

In recent years, we have seen amazing results in artificial intelligence and machine learning owing to the emergence of models such as transformers and pretrained language models. Despite the astounding results published in academic papers, there is still a lot of ambiguity and many challenges when it comes to deploying these models in industry because: 1) troubleshooting, training, and maintaining these models is time-consuming and costly due to their inherent size and complexity; and 2) there is not enough clarity yet about when the advantages of these models outweigh their challenges and when they should be preferred over classical ML models. These challenges are even more severe for small and mid-size companies that do not have access to huge compute resources and infrastructure. In this talk, we discuss these challenges and share our findings and recommendations from working on real-world examples at SPINS, a company that provides industry-leading CPG Product Intelligence. In particular, we describe how we leveraged state-of-the-art language models to seamlessly automate part of SPINS's workflow and drive substantial business outcomes. We share our findings from our experimentation and provide insights on when one should use these gigantic models instead of classical ML models. Considering that our use cases feature all sorts of challenges, from an ill-defined label space to a huge number of classes (~86,000) and massive data imbalance, we believe our findings and recommendations can be applied to most real-world settings. We hope that the learnings from this talk can help you solve your own problems more effectively and efficiently!
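
As a hedged illustration of the trade-off discussed above (not SPINS's pipeline), the snippet contrasts a classical TF-IDF baseline with a fine-tuned pretrained transformer; the checkpoint, toy texts, and label count are assumptions.

```python
# Hedged sketch: classical baseline vs. pretrained transformer for a large
# multi-class product-categorization task. Data and checkpoint are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["organic oat milk 1l", "gluten free pasta", "cold brew coffee"]   # toy data
labels = [0, 1, 2]

# Classical baseline: cheap to train and maintain, often competitive.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
baseline.fit(texts, labels)

# Pretrained language model: heavier, but typically stronger on scarce or imbalanced classes.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)   # in practice num_labels could be ~86,000
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
logits = model(**batch).logits                 # fine-tuning loop omitted for brevity
print(baseline.predict(texts), logits.shape)
```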

Rohit Saha is currently an applied research scientist on Georgian's R&D team, assisting portfolio companies with their research endeavours. Through previous roles, he has gained experience building end-to-end machine learning pipelines. He holds a master's degree from the University of Toronto, and his research interests broadly lie in Computer Vision, NLP, and their intersection, i.e., Multimodal Reasoning.

Rajvinder (Robin) Singh

Robin leads product for the CoreML Group, which builds the ML systems powering Yelp's ML/AI efforts, including Ads, Search, and Recommendations.

Areas / teams he is responsible for:
- ML Signals [Foundation Models / Bandits]
- Model Platform
- Feature Stores
- ML Compute

Sharon Shahrokhi Tehrani

In machine learning and AI, black box models are built from input data by algorithms (e.g., deep learning algorithms). Although the input variables are known, the complexity of the learned functions and the joint relationships between variables make it challenging for data scientists and ML developers to interpret the processes inside the box and explain the ultimate decision. This lack of interpretability makes it hard to trust black box models and creates barriers to adopting ML and AI in numerous domains. The short answer to the question "How do we overcome 'black box' model challenges?" is explainable AI. Designing AI and ML models with explainability techniques improves understanding, increases trust and transparency, and helps avoid possible biases and discrimination arising from data quality issues.
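
A minimal sketch of one such explainability technique, assuming the open-source `shap` library and a gradient-boosted "black box" trained on a public dataset:

```python
# Post-hoc feature attributions make individual model decisions inspectable.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)   # the "black box"

explainer = shap.TreeExplainer(model)            # tree-specific, exact attributions
shap_values = explainer.shap_values(X.iloc[:200])
shap.summary_plot(shap_values, X.iloc[:200])     # which features push predictions up or down
```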

Sharon Shahrokhi Tehrani is a seasoned data and machine learning product leader, dedicated to developing, implementing, and deploying scalable AI & ML algorithms that remarkably impact business. She focuses on data-driven solutions to increase customer engagement and platforms’ performance. 
She serves as the product manager of CBC's machine intelligence retention team, where she leads a project that partners with digital product and content teams to inform business decisions around implementation and data-driven solutions. The project's main objective is to deliver data and an ML platform to empower content teams in short-term and long-term content creation, publishing on digital products, and strategic decisions, so they can better serve Canadians with relevant, diverse, and personalized content.
Prior to product leadership, Sharon spent years in data science and engineering at CBC and EQ Works, applying state-of-the-art machine learning, statistics, and data mining in a variety of areas, including audience behavior, audience journeys, and content strategy data science projects.
Sharon earned her BSc and MSc in Electrical Engineering from IAUCTB and Concordia University and a post-graduate certificate in Data Analytics, Big Data, and Predictive Analytics from TMU.

Jekaterina Novikova

Natural language and speech processing is a thriving area of AI that is becoming increasingly important. Almost everyone has been exposed in one way or another to technology that employs natural language processing, whether a virtual assistant such as Siri or a simple automated phone-answering system. The range of applications that can create value from natural language processing is much broader, however, and includes such seemingly unrelated areas as interaction with humanoid robots and the detection of dementia. In this talk, Jekaterina Novikova, Director of Machine Learning at Winterlight Labs, will discuss how AI researchers use natural language processing in these two fields.

Amir Raza

The increasing size of digital footprints and an ever-growing digital economy have made us more vulnerable to cyber-attacks, and the number of attacks has increased. Often malicious programs are sold as a service, enabling even non-expert parties to launch an attack. Cyber attackers are always finding ways to evade detection, which makes reliance on rule-based systems and humans in the loop risky and hard to scale. Machine learning-based models can step in and provide a smart, evolving defense system: an attacker actively trying to evade detection ends up creating a pattern of abnormal activity different from regular users, and such patterns can be picked up by an ML model, giving us clues to detect malicious activity.

In this talk we shall discuss 1) how we can use some known clues as marks of malicious behavior; 2) what kinds of challenges come up while modeling these problems; and 3) how we can deploy and monitor such models.
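
For point 1), a minimal sketch of the idea, with invented activity features and an off-the-shelf unsupervised detector (an assumption, not the speaker's production system):

```python
# An unsupervised anomaly detector flags behaviour that deviates from normal usage.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Toy per-account features: logins/day, distinct countries, failed-login rate.
normal = rng.normal(loc=[5, 2, 0.1], scale=[2, 1, 0.05], size=(10_000, 3))
suspicious = rng.normal(loc=[40, 8, 0.6], scale=[5, 2, 0.1], size=(20, 3))  # evasive attacker pattern
X = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.005, random_state=0).fit(X)
scores = detector.decision_function(X)        # lower score = more anomalous
flags = detector.predict(X)                   # -1 marks suspected malicious activity
print(f"flagged: {(flags == -1).sum()}, most anomalous score: {scores.min():.3f}")
```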

Amir has worked on applications of machine learning in different fields: EdTech, cybersecurity, and e-commerce. He believes in the potential of ML to create big changes. He has a Master's in Machine Learning from MILA in Montreal, focused on NLP and recommendation systems. In his spare time, he likes to dabble in language learning.

Shreshth Gandhi

Shreshth Gandhi leads the Machine Learning group at Deep Genomics, a biotechnology company that uses ML to program and prioritize transformational RNA therapeutics for genetic diseases. He received his master's degree from the University of Toronto, where his research work focused on developing deep learning predictors for predicting RNA-protein binding. At Deep Genomics he continued this work at the intersection of deep learning and genomics and co-developed the splicing predictor that was used to identify that the ATP7B Variant c.1934T>G p.Met645Arg causes Wilson Disease by altering splicing.

Elizabeth Hunker

Elizabeth is Entrepreneur in Residence and Senior Director of Innovation, Web3 at Northwestern Mutual. With a background in building venture-funded startups and working with the UN, international NGOs, and Fortune 100s, and with a focus on web3, AI, XR, and healthspan extension, Elizabeth works to bridge moonshot technology and legacy institutions with leapfrog adoption strategies.

Eric Lanoix

In spite of some well-publicized early missteps, Machine Learning (ML) in credit underwriting is here to stay. Significant gains in approval speed, operational efficiency, and reduced credit losses can be realized while limiting operational and reputational risk. This presentation discusses aspects of credit underwriting where ML shows the most promise, as well as potential pitfalls. In particular, I promote inherent and continuous explainability as a winning strategy (instead of post-hoc techniques like LIME or SHAP). I also discuss model stability and fairness and propose strategies for quickly gaining acceptance of ML credit underwriting from internal stakeholders and regulators.
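
As a hedged illustration of "inherent explainability" (not Coast Capital's model), the sketch below uses a logistic-regression scorecard whose coefficients are the explanation, with invented features and data:

```python
# Inherently interpretable credit model: additive log-odds contributions per feature,
# readable directly from the coefficients rather than via post-hoc LIME/SHAP.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "utilization": rng.uniform(0, 1, 5000),
    "delinquencies_24m": rng.poisson(0.3, 5000),
    "income_to_debt": rng.lognormal(0.5, 0.4, 5000),
})
# Synthetic default flag for illustration only.
default = (0.8 * df["utilization"] + 0.5 * df["delinquencies_24m"]
           - 0.4 * df["income_to_debt"] + rng.normal(0, 0.5, 5000)) > 0.6

X = StandardScaler().fit_transform(df)
scorecard = LogisticRegression().fit(X, default)

# Every approval/decline decomposes into additive, inspectable contributions.
for name, coef in zip(df.columns, scorecard.coef_[0]):
    print(f"{name:>18}: {coef:+.3f} log-odds per standard deviation")
```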

Eric Lanoix (FRM) is an applied mathematician and a problem solver who is passionate about uncovering business insights and efficiencies through modelling. As Vice-President of Quantitative Risk at Coast Capital Savings, Eric is responsible for a variety of risk management and modelling activities, such as credit scorecard development, machine learning, stress testing and capital attribution, and credit (ECL) and deposit modelling. Prior to joining the finance industry, Eric was a spacecraft engineer working on a variety of spacecraft visiting the International Space Station, such as the Crew Dragon and the HTV. He is learning Japanese from his wife and is still trying to get better at golf.

Mary Jane Dykeman

Mary Jane Dykeman is a managing partner at INQ Law. In addition to data law, she is a long-standing health lawyer. Her data practice focuses on privacy, artificial intelligence (AI), cyber preparedness and response, and data governance. She regularly advises on use and disclosure of identifiable and de-identified data. Mary Jane applies a strategic, risk and innovation lens to data and emerging technologies. She helps clients identify the data they hold, understand how to use it within the law, and how to innovate responsibly to improve patient care and health system efficiencies. In her health law practice, Mary Jane focuses on clinical and enterprise risk, privacy and information management, health research, governance and more. She currently acts as VP Legal, Chief Legal/Risk to the Centre for Addiction and Mental Health, home of the Krembil Centre for Neuroinformatics, and was instrumental in the development of Ontario’s health privacy legislation.

Mary Jane regularly consults on large data initiatives and use of data for health research, quality, and health system planning. Her consulting work extends to modernizing privacy legislation and digital societies, and she works with Boards, CEOs and CISOs, as well as innovation teams on the emerging risks, trends and opportunities in data. Mary Jane regularly speaks on AI, cyber risk and how to better engage and build trust with clients and customers whose data is at play. She is also a frequent speaker and writer on health law and data law. Mary Jane is co-founder of Canari AI, an AI risk impact solution.

Henry Ehrenberg

As Financial Services increasingly embrace digitization, NLP presents many opportunities for efficiency gains and automation across the entirety of a bank's operations. However, many of these efforts to develop and operate NLP applications have been bottlenecked by data that is not AI-ready. Join Henry Ehrenberg to learn how Snorkel AI helps Financial Services companies solve their data challenges and to discuss a few NLP use cases this has unlocked.
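
A minimal sketch of the programmatic-labeling idea using the open-source Snorkel APIs (snorkel 0.9.x is an assumption), with invented labeling functions and toy banking text:

```python
# Weak labeling functions are combined by a label model so NLP training data can be
# produced without hand-labeling every document.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NOT_COMPLAINT, COMPLAINT = -1, 0, 1

@labeling_function()
def lf_keyword_fee(x):
    return COMPLAINT if "unexpected fee" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_keyword_thanks(x):
    return NOT_COMPLAINT if "thank you" in x.text.lower() else ABSTAIN

df_train = pd.DataFrame({"text": [
    "I was charged an unexpected fee on my mortgage statement",
    "Thank you for resolving my card issue so quickly",
]})

applier = PandasLFApplier(lfs=[lf_keyword_fee, lf_keyword_thanks])
L_train = applier.apply(df=df_train)                 # label matrix: one column per LF

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L_train, n_epochs=100, seed=0)
print(label_model.predict_proba(L=L_train))          # probabilistic training labels
```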

 

Pujjit Mathuria

AI touches every aspect of our lives from social media to electric vehicles. One can hardly imagine a sector that has not been influenced by AI. At the heart of successful AI, there is always an AI strategy that is contributing to its success. Corporations are increasingly investing significantly in the building blocks of an AI Strategy.
 
This session will cover what a successful AI strategy looks like: what are the different components, successful implementation and the red flags to avoid. This session will help attendees develop an AI strategy for their organization whether small or large. 

Pujjit Mathuria is a Manager for Data & Analytics at Bank of Montreal (BMO), where he develops strategies for various business units of the Technology and Operations organization, including AI/ML, Cloud, Governance, and Strategy. He holds an MBA in Business Analytics and is certified in SAS Enterprise Miner. Prior to BMO, he worked on Cloud Analytics for AWS & GCP at Softchoice Inc. and on Sales Analytics for FedEx and DHL. He is a Power BI expert and an avid AI/ML enthusiast.

Neil Kostecki

Neil is Sr. Director of Product Management at Coveo and has over a decade of experience across key Canadian tech companies, with a deep understanding of Service solutions and concepts. Since joining Coveo in 2017, his passion for content, search, and personalization has been a key driver in the evolution of Coveo's AI-powered Service products.

Daniel Capriles

In a world where virtually every data point is collected and stored, the challenge of data science has shifted from "not having enough data" to "getting lost in all that data."  Let's discover together what would be "too much data," and how to transform it into powerful data for AI implementation.

Come and learn how transforming credit card transactional data from raw and detailed to processed and summarized helped add value and made it more actionable, and how targeting a specific part of a call transcript helped Natural Language Processing (NLP) models explain client intent.
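
A minimal sketch of both ideas, with made-up column names and a toy transcript; not National Bank's actual pipeline:

```python
# 1) Collapse raw card transactions into per-client summary features.
# 2) Keep only the opening segment of a call transcript before intent classification.
import pandas as pd

tx = pd.DataFrame({
    "client_id": [1, 1, 1, 2, 2],
    "amount": [12.5, 80.0, 33.2, 250.0, 19.9],
    "category": ["grocery", "travel", "grocery", "electronics", "grocery"],
})

# From raw and detailed to processed and summarized: one row per client.
features = tx.groupby("client_id").agg(
    n_tx=("amount", "size"),
    total_spend=("amount", "sum"),
    avg_ticket=("amount", "mean"),
    top_category=("category", lambda s: s.mode()[0]),
)
print(features)

transcript = "Hi, I'm calling because my card was declined abroad. It happened twice. [20 more minutes]"
intent_input = ".".join(transcript.split(".")[:2])   # target only the part that states the intent
print(intent_input)                                  # fed to the NLP intent model downstream
```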

Daniel Capriles is a professional with more than 10 years of experience in advanced analytics across industries such as telecom and finance, and across departments such as marketing, sales, and operations. He currently leads a team of data scientists that helps National Bank's contact center improve its operational efficiency and client satisfaction by leveraging AI, data, and analytics.

Daniel received a Bachelor's in Electronic Engineering, an MBA specialized in Marketing, a Professional Certification in Data Analytics for Business from McGill, and an Organizational Leadership Specialization from Northwestern University.

Abhishek Mathur

Abhishek is the Director of Product Management at H2O.ai, where he leads the MLOps and AI Governance/Responsible AI product groups. He focuses on helping organizations succeed in their AI transformation, all the way through value realization. Abhishek has previously led product teams in developing NLP (conversational AI), computer vision, and internet of things platforms. He enjoys teaching product management foundations to aspiring PMs, and is fascinated by all new technologies.

Stephen O'Farrell

With the abundance of free-form text data available nowadays, topic modelling has become a fundamental tool for understanding the key issues being discussed online. We found the state-of-the-art topic modelling libraries either too naive or too slow for the amount of data a company like Bumble deals with, so we decided to develop our own solution. BuzzWords runs entirely on GPU using BERT-based models, meaning it can perform topic modelling on multilingual datasets of millions of data points and gives us significantly faster training times than other prominent topic modelling libraries.
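
The snippet below is not BuzzWords itself, just a hedged sketch of the same recipe (BERT-style embeddings, clustering, keyword extraction); the model checkpoint and toy documents are assumptions.

```python
# Embed documents with a multilingual sentence encoder (GPU if available), cluster
# the embeddings into topics, then label each topic with its most distinctive terms.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["great first date spot", "app keeps crashing on login",
        "love the new video chat", "crashes every time I open messages"]

embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = embedder.encode(docs, show_progress_bar=False)

topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

# Rough stand-in for class-based TF-IDF keyword extraction.
vec = TfidfVectorizer(stop_words="english").fit(docs)
for t in np.unique(topics):
    joined = " ".join(d for d, k in zip(docs, topics) if k == t)
    scores = vec.transform([joined]).toarray()[0]
    top_terms = [vec.get_feature_names_out()[i] for i in scores.argsort()[-3:][::-1]]
    print(f"topic {t}: {top_terms}")
```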

Stephen O'Farrell is a machine learning scientist at Bumble, where, as a member of the Integrity & Safety team, he works to ensure user safety across all of Bumble's platforms. His work generally deals with NLP and Computer Vision tasks, deploying deep learning models at scale across the organisation. He graduated with an MSc in Data Science and a BSc in Computational Thinking, both from Maynooth University, Ireland.

Akash Shetty

With the boom in AI, deep learning and machine learning implementations are abundant. Nevertheless, there are often cases where a model is biased, which can have a negative impact on an organization or even on society. We will discuss a few such high-profile cases and draw takeaways from them. For AI to be more widely adopted and trusted, we power our model implementations with Explainable AI; this way, one can trust the model and explain its insights to complement business decisions, process optimization, and recommendations. The field of Explainable AI has seen significant developments to assist researchers and AI developers who create AI products and solutions. We will look at Explainable AI solutions that one can apply to their own research or work, and end by discussing current state-of-the-art approaches, challenges, and future directions.

Akash Shetty has 3+ years of industry experience in the field of Artificial Intelligence. He is currently working as a Data Scientist at ApplyBoard and as a course facilitator at McMaster CCE for the Big Data Program. He is completing his Master's in Data Science at the University of Colorado Boulder.

Jason Madge

Jason Madge has been at Snorkel AI for the last year and is responsible for GTM in Canada. 
Prior to that, he was with MuleSoft for 7 years, joining when they were a start-up to support and open the Canadian market. He started his career in FI as a bond broker but has been in tech on the GTM side for 20+ years.
Over his career, he has likely worked in one capacity or another with most of the companies in attendance.

Tiffany Wong

Tiffany Wong (LL.B/J.D, CIPP/E) is a legal, privacy and regulatory compliance professional currently working on AI/data governance, data ethics, digital and fintech initiatives in the financial services industry, with previous experience in journalism and corporate law. She studied at McGill University with a student exchange at Sciences Po Paris and obtained her law degree from Osgoode Hall Law School, York University. She was called to the Ontario bar in 2012.

Ioannis Bakagiannis

I am Ioannis, the Director of Machine Learning for Marketing Science @ RBC, where I lead a team of applied research scientists, data scientists, and machine learning engineers. I am passionate about innovation of any shape or form and guilty of reading far more academic papers than I should. I am looking to make a positive impact on the world through mentoring, teaching, and programming. Other than that, you will find me on tennis courts or ski slopes.

Armando Ordorica

I am currently a Senior Data Scientist at Pinterest (Shopping), where I design algorithms that predict quality for Pinterest Shopping. I specialize in Risk Algorithms and Fraud and teach a graduate course on this subject at the University of Toronto, where I am based. I have two patents in risk scoring: one on Aggregated Risk Scores and the other on Optimization of API Workflows.

In the past, I worked at Capital One (Toronto) for a number of years, and I have also worked remotely at Jumio (Palo Alto), where I built fraud models for companies like JP Morgan Chase, Coinbase, and Stripe.

Serena McDonnell

Serena is a Lead Data Scientist and quant researcher at Delphia, where she uses machine learning to power the fund's long-short, equity-market-neutral strategy. Passionate about knowledge sharing and continuous learning, Serena co-hosts Deep Random Talks, a podcast that focuses on machine learning, product development, and knowledge management. She is an organizer of AI Socratic Circles (AISC), a highly technical machine learning reading group for industry professionals. As part of AISC, Serena leads a research group that focuses on applying natural language processing and representation learning to recommender systems. Serena holds an M.Sc. in Mathematics from the Hong Kong University of Science and Technology and a B.Sc. in Mathematics and Biology from McGill University.

Sedef Akinli Kocak

The adoption and application of computer vision (CV) is growing in many fields as new deep learning based architectures deliver greater and more efficient performance on complex image-related tasks. New CV theories and approaches are showing the potential to drive transformative innovation in many industries, including health care, banking, security, transportation, retail, and agriculture. From fully autonomous vehicles to automated clinical diagnosis and surgical support systems, CV systems have a major role in technologies that are ushering in an exciting future. This talk highlights a collaborative project in Computer Vision between the Vector Institute and its industry sponsor companies, focusing on the following use cases: anomaly detection in manufacturing, semantic segmentation in aerial and road-obstacle imagery, automated traffic incident detection, identifying clinically relevant features of interest in cholecystectomy procedures, and transfer learning for efficient video classification and detection.
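
As a hedged sketch of the transfer-learning use case mentioned above (assuming torchvision ≥ 0.13 and an invented 4-class incident task, not the Vector project code):

```python
# Transfer learning: reuse a pretrained backbone, retrain only a small task head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                            # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 4)   # new head for 4 incident classes (assumption)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

frames = torch.randn(8, 3, 224, 224)                  # stand-in for a labelled frame batch
labels = torch.randint(0, 4, (8,))
loss = criterion(backbone(frames), labels)
loss.backward(); optimizer.step()
print(f"one fine-tuning step done, loss={loss.item():.3f}")
```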

Sedef Akinli Kocak is Director of Professional Development at the Vector Institute for Artificial Intelligence, building training programs for Vector sponsors and partners. Prior to this role she was a senior Project Manager and led several multi-industry applied AI projects. She holds a PhD from the Data Science Lab at Ryerson University, Canada, and earned master's degrees in both Chemical Engineering and Business Administration. She worked on data-intensive R&D project development and academic-industry partnerships in AI/ML at SOSCIP, the Southern Ontario Smart Computing for Innovation Platform. She is also an experienced and accomplished researcher in ICT for sustainability and sustainability design in software-intensive systems, and has been a part-time Data Science and Analytics lecturer at Ryerson University since 2014. She served as a member of the Compute Ontario Board Advisory Committee and as an AI program development advisor for Continuing Education at the University of Toronto.

Kyryl Truskovskyi

Kyryl has over seven years of experience in the field of Machine Learning. For the bulk of this career, he has helped build machine learning startups, from inception to a product. He has also developed expertise in choosing and implementing state-of-the-art deep learning architectures and large-scale solutions based on them.

Kevin F. Li

Existing models for classifying and interpreting cognitive intelligence based on pediatric brain images are usually derived from low-dimensional statistical analysis. While such models are computationally efficient, they use oversimplified representations of a brain's features. They neglect essential brain structure information, such as regions of interest (ROI) and high-density segmentation features. Therefore, we develop a deep learning framework to understand and model cognitive intelligence using CT brain images. 

Our data pipeline provides over 600 billion parameters. Such high-density data requires a novel parallel computing framework for tuning and training tasks. Our framework can tractably handle these computational requirements by utilizing 1) an extensive grid search fitting-training scheme, 2) automated learning that optimizes deep neural network structure, 3) Bayesian variational inference that captures uncertainty during the learning process, and 4) hardware configurations for both CPU and GPU environments. This framework is adaptable and particularly useful for the high- and low-dimensional datasets found in many cognitive modelling applications.
 
We will demonstrate the predictive and descriptive capability of such a deep learning framework. One highlight of this research is that we have successfully modelled the uncertainty of the latent intelligence features using ELBO optimization, transforming an integral over a joint distribution into an expectation.
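
For reference, a standard form of the bound being referred to (a textbook ELBO, not the authors' exact derivation):

```latex
% The intractable integral over the joint distribution is rewritten as an expectation
% under the variational posterior q(z|x), which can be optimized by sampling.
\log p(x) = \log \int p(x, z)\, dz
          = \log \mathbb{E}_{q(z \mid x)}\!\left[ \frac{p(x, z)}{q(z \mid x)} \right]
          \ge \mathbb{E}_{q(z \mid x)}\big[ \log p(x \mid z) \big]
             - \mathrm{KL}\big( q(z \mid x) \,\|\, p(z) \big) \;=\; \mathrm{ELBO}.
```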

Kevin is a machine learning engineer with 6+ years of experience in ML inference modelling and data mining. He specializes in developing models for large-scale datasets, particularly in variational inference, GPT, Neural ODE and deep Bayesian neural net. His recent work at the Princess Margaret Cancer Centre models cognitive intelligence and draws probabilistic inferences from brain image data.

Zia Babar

Enterprises are increasingly under pressure to continually evolve, respond to fast-moving consumer trends, and adopt emerging digital technologies for business execution. It is no longer sufficient for them to design and set up their technological infrastructure once and assume that it will suffice for extended periods of time. By leveraging cloud computing and modern data infrastructure, enterprises can rapidly design and modify solutions to deal with evolving business and technological requirements. For this, we propose and discuss cloud-native and data-centric architectural design patterns that enterprises can effectively utilize to respond to fast-moving internal and external factors.

Jerry Zikun Chen

Jerry Zikun Chen is a senior research scientist at Vanguard AI Research. He holds a Master of Science in Applied Computing from the University of Toronto, and has worked in the financial industry since undergrad. Jerry is passionate about the future of AI technologies and its use cases in the industry. He has been driving machine learning publications and applied projects in reinforcement learning, propensity modeling, and generative AI.

Rishab Goel

In graph learning, there have been two main inductive biases regarding graph-inspired architectures: On the one hand, higher-order interactions and message passing work well on homophilous graphs and are leveraged by GCNs and GATs. Such architectures, however, cannot easily scale to large real-world graphs. On the other hand, shallow (or node-level) models using ego features and adjacency embeddings work well in heterophilous graphs. In this work, we propose a novel scalable shallow method -- GLINKX -- that can work both on homophilous and heterophilous graphs. GLINKX leverages (i) novel monophilous label propagations (ii) ego/node features, (iii) knowledge graph embeddings as positional embeddings, (iv) node-level training, and (v) low-dimensional message passing, to achieve scaling in large graphs. We show the effectiveness of GLINKX on several homophilous and heterophilous datasets.
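
The snippet below is not the GLINKX implementation; it is only a hedged sketch of the shallow, node-level recipe listed above, with random toy data and a one-hop propagation standing in for the paper's monophilous label propagation.

```python
# Node-level model over concatenated inputs: propagated label information,
# ego/node features, and positional embeddings (stand-in for KG embeddings).
import numpy as np
from sklearn.neural_network import MLPClassifier

n_nodes, n_feat, n_pos, n_classes = 1000, 16, 8, 5
rng = np.random.default_rng(0)
A = (rng.random((n_nodes, n_nodes)) < 0.01).astype(float)   # toy adjacency matrix
ego = rng.normal(size=(n_nodes, n_feat))                    # ego/node features
pos = rng.normal(size=(n_nodes, n_pos))                     # positional embeddings (assumption)
y = rng.integers(0, n_classes, n_nodes)

# One hop of label propagation from the training subset (a crude stand-in).
train = rng.random(n_nodes) < 0.5
Y_onehot = np.zeros((n_nodes, n_classes)); Y_onehot[train, y[train]] = 1.0
deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
propagated = (A @ Y_onehot) / deg

X = np.hstack([ego, pos, propagated])                       # node-level feature vector
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X[train], y[train])
print(f"held-out accuracy: {clf.score(X[~train], y[~train]):.3f}")
```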

Rishab Goel is a Machine Learning Engineer. He has done a range of works in AI, from Reinforcement Learning, Graph Learning, Self-Supervised Learning, ML for Software Engineering, Large-Scale Recommendation systems, Large Scale Ads targeting, and NLP consulting. Recently, Rishab focussed on building scalable graph-based representations. Rishab received his Masters from the Indian Institute of Technology, (IIT) Delhi.

Harsha Aduri

In Amazon's Partner Support organization, we build the technology to provide customer support to 1.5M third-party merchants who sell through our platform (selling partners; SPs) around the world. Each week, around 1.4M contacts (by email, phone, and chat) are serviced by more than 15k of our in-house support agents. At several points in an SP's troubleshooting journey, automated classification of their problem can serve to streamline the resolution process. First, we can recommend self-service solutions; if those are insufficient, we can quickly engage an appropriately trained and equipped agent to help with the problem, and we can help that agent by providing quick access to tools and standard procedures for that specific type of issue. The construction of an issue classification system begins with defining the taxonomy of issue types, which comes with the complexities of managing updates and reconciling across versions when extracting data sets for analysis and machine learning. Next, we need to consider how to collect human-annotated data to build a supervised training set, which comes with the complexities of dealing with sparse categories, issues that do not cleanly fit the taxonomy, and labelling errors. Once a labelled training set is available, an ML classifier can be built, but it must be iterated and updated on a regular cadence to track the evolution of the business. Additionally, there is different information available at each point in the issue lifecycle, so the ML system must be built to accommodate multiple feature sets. This presentation will discuss the complexities our team has confronted as we have developed this ML system over the last 7 years, and will cover the evolution of our deep learning models for issue classification, from convolutional neural networks to ResNets to transformers, with consideration of both text and non-text signals.
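
A hedged sketch of that final ingredient (a transformer text encoder combined with non-text signals); the checkpoint, feature names, and issue count are placeholders, not Amazon's production system:

```python
# Combine a transformer encoding of the contact text with tabular signals,
# then classify into issue types via a small feed-forward head.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class IssueClassifier(nn.Module):
    def __init__(self, n_tabular, n_issue_types, checkpoint="distilbert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(checkpoint)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden + n_tabular, 256), nn.ReLU(),
                                  nn.Linear(256, n_issue_types))

    def forward(self, input_ids, attention_mask, tabular):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        text_vec = out.last_hidden_state[:, 0]          # first-token representation
        return self.head(torch.cat([text_vec, tabular], dim=1))

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
batch = tok(["My shipment was lost", "Fee charged twice"], padding=True, return_tensors="pt")
tabular = torch.tensor([[1.0, 0.0, 3.0], [0.0, 1.0, 1.0]])   # e.g., channel, tenure, prior contacts
model = IssueClassifier(n_tabular=3, n_issue_types=50)
logits = model(batch["input_ids"], batch["attention_mask"], tabular)
print(logits.shape)   # (2, 50) issue-type scores
```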

Richard Boire

Richard Boire's experience in predictive analytics and data science began several decades ago when he received an MBA from Concordia University in Finance and Statistics.
His initial experience at organizations such as Reader’s Digest and American Express allowed him to become a pioneer in the application of predictive modelling technology for all database and CRM type marketing programs. This extended to the introduction of models which targeted the acquisition of new customers based on return on investment. Further experience led to the pioneering development of insurance risk models for more effective pricing in the automobile and property sector.
With this experience, Richard formed his own consulting company back in 1994 which was called the Boire Filler Group, a Canadian leader in offering analytical and database services to companies seeking solutions to their existing predictive analytics or database marketing challenges.
Richard is a recognized authority on predictive analytics and is one of the most experienced practitioners in Canada, with expertise and knowledge that is difficult, if not impossible to replicate in Canada. This expertise has evolved into international speaking assignments and workshop seminars in the U.S., England, Eastern Europe, and Southeast Asia. 
Within Canada, he gives seminars on segmentation and predictive analytics for such organizations as the Canadian Marketing Association (CMA), Direct Marketing News, the Direct Marketing Association Toronto, the Association for Advanced Relationship Marketing (AARM), and Predictive Analytics World (PAW). His written articles have appeared in numerous publications such as Direct Marketing News, Strategy Magazine, Marketing Magazine, Predictive Analytics Times, and the Canadian Marketing Association. He has taught applied statistics, data mining, and database marketing at a variety of institutions across Canada, including the University of Toronto, Concordia University, George Brown College, Seneca College, and Centennial College. Richard was Chair of the CMA's Customer Insight and Analytics Committee and sat on the CMA's Board of Directors from 2009 to 2012. He has chaired numerous full-day conferences on behalf of the CMA (the 2000 and 2002 Database and Technology Seminars, as well as the first-ever Customer Profitability Conference in 2005), and he chaired the Predictive Analytics World conferences held in Toronto in 2013 and 2014.