MLExperts.ai

Advanced AI & ML Projects

Explore some of our groundbreaking projects where advanced AI meets real-world challenges.

Program analysis and ML/AI

Fine-tune LLMs to reason robustly about computer programs
Code LLMs
Adversarial learning

We worked on making AI models that analyze code more robust to deception. LLMs can generate and reason about programs, but they can be tricked by cleverly disguised code. We developed a method to create these disguises, called "obfuscations," which change how a program looks without altering what it does. By applying these transformations strategically, we produced programs that AI struggles to analyze, exposing weaknesses in the models and guiding fine-tuning so they handle deceptive code more reliably.
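
To make the idea concrete, here is a minimal sketch of one semantics-preserving obfuscation, identifier renaming, using Python's ast module. Our actual transformations go further (e.g., control-flow rewrites); everything in this snippet is illustrative rather than our production pipeline:

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Rewrite every variable name to an opaque identifier (v0, v1, ...)."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if node.id not in self.mapping:
            self.mapping[node.id] = f"v{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node

source = """
def total(values):
    acc = 0
    for x in values:
        acc = acc + x
    return acc
"""

tree = ast.parse(source)
func = tree.body[0]
renamer = RenameVariables()
for stmt in func.body:
    renamer.visit(stmt)
# Rename the parameter too, so it stays consistent with the rewritten body.
for arg in func.args.args:
    arg.arg = renamer.mapping.get(arg.arg, arg.arg)

print(ast.unparse(tree))  # same behavior as `total`, different surface form
```

The obfuscated program computes exactly the same result, yet every identifier the model might key on has changed, which is what makes such transforms useful adversarial probes.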

Analyze large corpora of computer programs for patterns
Vector representation of programs
Code pattern discovery
Code search
Code similarity analysis

We used machine learning to classify code lines as vulnerable or not. To do this, we created a deep learning model that captures control and data dependencies, allowing for better representation of program meaning. Our model outperformed traditional classifiers, demonstrating the effectiveness of deep learning in modeling program structure. This approach can be applied to various tasks like code pattern discovery, search, and similarity analysis.
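
The toy sketch below conveys the core idea: before classification, each line's features are combined with those of the lines it control- or data-depends on. Our real system learned these representations with a deep model; the features, labels, and the with_context helper here are all made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy per-line feature vectors (in practice: learned embeddings).
line_features = np.array([
    [1.0, 0.0, 0.2],   # line 0
    [0.1, 1.0, 0.0],   # line 1, data-depends on line 0
    [0.0, 0.3, 1.0],   # line 2, control-depends on line 1
])
# Dependency edges: (dependent line, line it depends on).
deps = [(1, 0), (2, 1)]

def with_context(features, deps):
    """Concatenate each line's features with the mean of its dependencies'."""
    ctx = np.zeros_like(features)
    for dst, src in deps:
        ctx[dst] += features[src]
    counts = np.maximum(1, np.bincount([d for d, _ in deps],
                                       minlength=len(features)))
    return np.hstack([features, ctx / counts[:, None]])

X = with_context(line_features, deps)
y = np.array([0, 1, 0])  # toy labels: 1 = vulnerable
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```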

Use AI to discover new dynamic race detection heuristics for concurrent programs
Program analysis
ML
Concurrent programs

We developed an SMT-based approach to generate program traces with injected data races for any given concurrent program. Using these traces, we found counterexamples that state-of-the-art race detection algorithms missed. The generated traces can also be used to learn the patterns behind data races, improving on existing methods that rely on hand-engineered heuristics.
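
Below is a minimal sketch of the encoding idea using the Z3 solver (the z3-solver package); the four events and their constraints are a toy example, not our full encoding. Each event gets an integer position in a candidate trace, program order is preserved, and a race is witnessed when two conflicting accesses can appear adjacent in a feasible trace:

```python
from z3 import Ints, Solver, Distinct, Or, sat

# Thread 1: lock(); x = 1; unlock()    Thread 2: x = 2 (no lock held)
acq1, w1, rel1, w2 = Ints("acq1 w1 rel1 w2")  # trace positions of the events
s = Solver()
s.add(Distinct(acq1, w1, rel1, w2))
s.add(acq1 < w1, w1 < rel1)                   # thread 1 program order
# The two writes to x race if some feasible trace puts them side by side.
s.add(Or(w2 == w1 + 1, w1 == w2 + 1))

if s.check() == sat:
    print("trace witnessing the injected race:", s.model())
```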

Predicting the presence of bugs using program analysis metrics
Program analysis
ML
Code review
Software engineering

We analyzed a large number of Java repositories along with code review data on past bugs and fixes. Our goal was to see whether program analysis metrics, particularly those from abstract interpretation, could predict bugs. We found that more complex join operations in abstract states were associated with a higher chance of bugs. We used Facebook's Infer, a static analysis tool, to gather these metrics and built predictive models to improve code quality and reduce bugs in Java projects.
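
As a sketch of the modeling step (with synthetic data standing in for the metrics a static analyzer like Infer reports), one can regress bug presence on per-file features such as the number of abstract-state joins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
joins = rng.poisson(5, n)          # abstract-state joins per file (synthetic)
loc = rng.integers(50, 2000, n)    # lines of code (synthetic)
# Synthetic labels: more joins -> higher bug probability, for illustration only.
p = 1 / (1 + np.exp(-(0.4 * joins - 3)))
bug = rng.random(n) < p

X = np.column_stack([joins, loc])
X_tr, X_te, y_tr, y_te = train_test_split(X, bug, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
print("join-count coefficient:", model.coef_[0][0])
```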

Interpretability of AI models

Explaining LLM behavior
Mechanistic interpretability

We wanted to see whether large language models (LLMs) can learn to process grammar the way humans do. Humans are naturally good at understanding hierarchical structure in language, so we tested whether LLMs could do the same after being trained on large amounts of text. We found that LLMs performed better with hierarchical grammars than with linear ones. Looking closer, we saw that distinct parts of the LLMs were handling each type of grammar, and disabling those parts made the models less accurate. This shows that LLMs can learn specialized grammar processing purely from reading text.
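
The mechanics of the "disabling" step, head ablation, can be shown on a toy attention layer. Real experiments do this inside a trained LLM; everything below (dimensions, the TinyAttention module) is illustrative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, n_heads, head_dim = 16, 4, 4

class TinyAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.ablate_head = None  # index of head to zero out, or None

    def forward(self, x):
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, t, n_heads, head_dim)
        # reshape to (batch, heads, time, head_dim)
        q, k, v = (z.view(shape).transpose(1, 2) for z in (q, k, v))
        att = torch.softmax(q @ k.transpose(-2, -1) / head_dim**0.5, dim=-1)
        z = att @ v                       # per-head outputs
        if self.ablate_head is not None:
            z[:, self.ablate_head] = 0.0  # knock out one head's contribution
        z = z.transpose(1, 2).reshape(b, t, d_model)
        return self.out(z)

layer = TinyAttention()
x = torch.randn(2, 5, d_model)
baseline = layer(x)
layer.ablate_head = 2
ablated = layer(x)
print("output change from ablating head 2:", (baseline - ablated).norm().item())
```

In the actual study, the accuracy drop after ablating the components tied to one grammar type is what localizes the specialized circuitry.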

Analyze biases in large scale automated algorithms
Algorithmic biases

We studied how Facebook's ad delivery algorithms impact climate-related ads and found that the algorithms might influence who sees these ads based on U.S. state, gender, or age. We also discovered that climate contrarians, especially in oil-producing states, receive a cost advantage in ad pricing. This suggests that the algorithms may skew ad delivery, potentially affecting audience targeting and shaping attitudes toward climate action.
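
A simplified version of the pricing comparison looks like the snippet below, with synthetic CPM numbers in place of the observed delivery data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical cost-per-thousand-impressions ($) for two advertiser groups.
cpm_contrarian = rng.normal(6.0, 1.5, 200)
cpm_advocate = rng.normal(7.2, 1.5, 200)

t, p = stats.ttest_ind(cpm_contrarian, cpm_advocate)
print(f"mean CPM contrarian={cpm_contrarian.mean():.2f}, "
      f"advocate={cpm_advocate.mean():.2f}, p-value={p:.4f}")
```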

How do people consume misinformation spread via deepfakes?
Deepfakes
Cognitive science

We studied whether people can tell real political speeches from deepfakes, especially given advances in realistic visual effects. We showed participants speeches in different formats: text, audio, and video. People were better at identifying real speeches when they had both audio and video, relying more on how the speech was delivered than on its actual content. However, participants who scored lower on a reflection test tended to focus too heavily on the speech content, which hurt their accuracy in spotting fakes.

Automated reasoning of documents

AI for finding loopholes in legal contracts
SMT solvers
AI
Legal contracts

We used SMT (Satisfiability Modulo Theories) solvers to find loopholes in legal contracts. First, we converted each contract into a formal program that captures its terms and conditions. Then we defined specific loophole conditions as assertions in the program. The SMT solver identified inputs that violate these assertions, revealing potential loopholes. This method automates contract analysis, helping ensure that legal documents are robust and free from exploitable gaps.
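
Here is a minimal sketch of that pipeline with the Z3 solver: a toy late-delivery clause is encoded as constraints, and the solver is asked for a scenario where the penalty cap makes additional lateness free. The clause and numbers are invented for illustration:

```python
from z3 import Int, Solver, And, Not, Implies, sat

days_late, penalty = Int("days_late"), Int("penalty")
s = Solver()
s.add(days_late >= 0)
# Toy clause: 100 per late day, capped at 30 days.
s.add(Implies(days_late == 0, penalty == 0))
s.add(Implies(And(days_late >= 1, days_late <= 30), penalty == 100 * days_late))
s.add(Implies(days_late > 30, penalty == 3000))
# Loophole query: can a party be more than 30 days late while paying no more
# than the 30-day penalty?
s.add(days_late > 30, Not(penalty > 3000))

if s.check() == sat:
    print("loophole scenario:", s.model())  # e.g. days_late = 31, penalty = 3000
```

A satisfying model is a concrete scenario exploiting the gap, which a lawyer can then close by redrafting the clause.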

Let's Innovate Together

Contact us to transform your AI capabilities

© 2024 Ramailo Tech. All rights reserved.
