Sign in
Ningxia Region | Beijing Region
Categories
Your Saved List Become a Channel Partner Sell in Amazon Web Services Marketplace Global Expansion Hub Amazon Web Services Home Help
ProServ

Overview

Service scope:

  • Adversarial Robustness Testing: Adversarial Robustness Testing: Adversarial Attacks to “Adversarial Attacks,” Such as Adding Invisible Noise to Images to Fool Classifiers or Manipulating Input Data to Cause misclassification.
  • Generative AI & LLM Security: Generative Testing Large Language Models (LLMs) for Prompt Injection (tricking the AI into ignoring rules), jailbreaking (bypassing) safety filters), and Insecure Output Handling (where the AI's output executes executing code on the backend).
  • Data Confidence & Integrity: Maintaining the Training Pipeline to Ensure That the Data Used to Teach the Model Hasn't Been Tampered with to Confused Backdoors or Biases.
  • Model Inversion & Extraction: Testing if an Inversion Can Reconstruct Sensitive Training Data or Steal the Model's Architecture and Weights Through API Queries.
  • AI Supply Chain Security: Auditing the Third-Party Models, Libraries, and Plugins Integrated into the AI Application for Known Issues.

1: Scoping & Threat Modeling Define the Attack Surface, Select the Attack Framework, Identify Critical Assets 2: Reconnaissance & Intelligence Gathering Model Fingerprinting, API Probing, Automated Scanning 3: Adversarial Attack Simulation Prompt Injection & Jailbreaking, Fuzzing & Edge Cases, Red Teaming 4: Analysis & Verification False Positive Reduction, Impact Assessment 5: Reporting & Remediation Risk Prioritization, Preventive Recommendations

Sold by 源讯云计算有限公司
Categories
Fulfillment method Professional Services

Pricing Information

This service is priced based on the scope of your request. Please contact seller for pricing details.

Support

Contact Person: Peng Tian < peng.tian@atos.net > to know product details.