Research Snappy

Tackling bias in edge AI systems

researchsnappy by researchsnappy
December 12, 2020
in Healthcare Research

End-to-end machine learning tools such as AI Studio from Blaize are aimed at removing the need for data scientists in developing and deploying edge AI applications, especially in camera systems. But data scientists have also highlighted that AI frameworks are vulnerable to bias in the data used for training.

New tools are emerging this week to analyse, and even correct, bias in such frameworks — a key step for edge AI system developers.

Amazon SageMaker Clarify is a new tool released this week that helps customers detect bias in machine learning (ML) models and increases transparency by helping explain model behaviour. These models are built by training algorithms that learn statistical patterns present in datasets, but questions remain about how a framework makes a prediction and how anomalies can be detected.
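Clarify's pre-training checks include dataset-level metrics such as Class Imbalance (CI) and Difference in Proportions of Labels (DPL). The following stdlib-only sketch — not Clarify itself, just an illustration of the kind of metric it reports — computes both for a hypothetical dataset with a protected "facet" attribute:

```python
# Illustrative sketch of two pre-training bias metrics of the kind
# SageMaker Clarify reports. The facet values ("a" = advantaged group,
# "d" = disadvantaged group) and the toy data are hypothetical.

def class_imbalance(facet):
    """CI = (n_a - n_d) / (n_a + n_d): how over-represented group "a" is."""
    n_a = sum(1 for f in facet if f == "a")
    n_d = sum(1 for f in facet if f == "d")
    return (n_a - n_d) / (n_a + n_d)

def label_proportion_difference(facet, labels):
    """DPL = P(label=1 | facet="a") - P(label=1 | facet="d")."""
    pos_a = sum(1 for f, y in zip(facet, labels) if f == "a" and y == 1)
    pos_d = sum(1 for f, y in zip(facet, labels) if f == "d" and y == 1)
    n_a = sum(1 for f in facet if f == "a")
    n_d = sum(1 for f in facet if f == "d")
    return pos_a / n_a - pos_d / n_d

facet  = ["a", "a", "a", "d", "d", "a"]   # protected attribute per row
labels = [ 1,   1,   0,   0,   1,   1 ]   # training label per row

print(class_imbalance(facet))                       # 0.333... -> group "a" over-represented
print(label_proportion_difference(facet, labels))   # 0.25 -> "a" receives more positive labels
```

Values near zero indicate a balanced dataset on that facet; large positive or negative values are the kind of signal that would prompt rebalancing before training.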

With AI Studio, launched yesterday, Blaize has tried to address some of this with transparent steps and open frameworks such as ONNX and openVL.

Even with the best of intentions, bias issues may exist in datasets and be introduced into models, with business, ethical, and regulatory consequences, says Julien Simon, Artificial Intelligence & Machine Learning Evangelist for EMEA at Amazon Web Services (AWS). This means it is important for model administrators to be aware of potential sources of bias in production systems.

For simple and well-understood algorithms like linear regression or tree-based algorithms, it’s reasonably easy to crack the model open, inspect the parameters that it learned during training, and figure out which features it predominantly uses, he says.
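To make "cracking the model open" concrete, here is a minimal stdlib-only sketch (toy data, not from the article): after fitting ordinary least squares, the model's entire learned state is a slope and an intercept, both directly inspectable.

```python
# Minimal sketch: a linear regression model is fully transparent, since
# its learned parameters are just the slope and intercept.

def fit_ols(xs, ys):
    """Ordinary least squares fit for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.1]               # roughly y = 2x
slope, intercept = fit_ols(xs, ys)
print(slope, intercept)                  # 2.01 0.0 -- the whole model is two numbers
```

A deep network, by contrast, may have millions of parameters with no such direct reading, which is exactly the gap the next paragraph describes.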

However, as models become more and more complex with deep learning, this kind of analysis becomes impossible. Many companies and organizations may need ML models to be explainable before they can be used in production. In addition, some regulations may require explainability when ML models are used as part of consequential decision making; closing the loop, explainability can also help detect bias.
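When a model is too complex to inspect directly, model-agnostic techniques can still probe it from the outside. One of the simplest is permutation importance: shuffle one input feature at a time and measure how much accuracy drops. This hedged sketch uses a hypothetical black-box model (not any vendor's tool) that happens to consult only feature 0:

```python
# Permutation importance: a model-agnostic explainability probe.
# black_box() stands in for an opaque model; here it secretly uses
# only feature 0, which the probe should reveal.

import random

def black_box(row):
    # Hypothetical opaque model: predicts 1 iff feature 0 > 0.5.
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature, rng):
    """Accuracy drop after shuffling one feature column."""
    base = accuracy(rows, labels)
    shuffled = [r[feature] for r in rows]
    rng.shuffle(shuffled)
    permuted = [r[:feature] + [v] + r[feature + 1:]
                for r, v in zip(rows, shuffled)]
    return base - accuracy(permuted, labels)

rng = random.Random(0)
rows   = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.4]]
labels = [1, 1, 0, 0]

print(permutation_importance(rows, labels, 0, rng))  # drop >= 0: feature 0 drives predictions
print(permutation_importance(rows, labels, 1, rng))  # 0.0: feature 1 is ignored by the model
```

If shuffling a protected attribute causes a large accuracy drop, the model is leaning on it — which is one way explainability can surface bias in a production system.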

© 2025 researchsnappy.com
