Final Assessment

Module 1: AI & Machine Learning Law

Comprehensive assessment covering all 8 parts. Score 70% or above to earn your Module 1 completion certificate.

50 Questions · ~45 minutes · Pass mark: 70% · Certificate on pass

Instructions

  • Answer all 50 questions - there is no negative marking
  • Questions cover: AI Framework, IT Rules 2025, Liability, Sectors, IPR, Ethics, Contracts, Compliance
  • Click on an option to select your answer
  • You can change your answer before submitting
  • After submission, you will see explanations for each question
  • Score 35 or more (70%) to pass and earn your certificate
Q1 Part 1: AI Framework
Which organization developed the AI definition adopted by many jurisdictions including the EU AI Act?
Explanation
The OECD developed the widely adopted AI definition in 2019, which has been incorporated into various regulatory frameworks including the EU AI Act.
Q2 Part 1: AI Framework
The EU AI Act adopts which regulatory approach for AI systems?
Explanation
The EU AI Act adopts a risk-based approach, classifying AI systems as unacceptable risk, high risk, limited risk, or minimal risk, with requirements proportionate to risk level.
Q3 Part 1: AI Framework
How many principles are in NITI Aayog's Responsible AI framework?
Explanation
NITI Aayog's Responsible AI framework includes 7 principles: Safety & Reliability, Equality, Inclusivity, Privacy & Security, Transparency, Accountability, and Positive Human Values.
Q4 Part 1: AI Framework
India currently lacks a statutory definition of AI in which legislation?
Explanation
India currently lacks a statutory definition of AI in any of its major legislation, including the IT Act, the DPDPA, the Consumer Protection Act, and the Copyright Act. This creates regulatory ambiguity.
Q5 Part 2: IT Rules 2025
Under IT Intermediary Rules 2025, "Deepfake" is defined as synthetic media that could deceive:
Explanation
The IT Rules 2025 definition of deepfake uses the "ordinary person" standard - media that could reasonably deceive an ordinary person into believing the depicted content is real.
Q6 Part 2: IT Rules 2025
Which labeling methods are required for AI-generated content under IT Rules 2025?
Explanation
IT Rules 2025 require AI content to be labeled using metadata, watermarks, and/or visible disclosures. Multiple methods may be required depending on content type.
Q7 Part 2: IT Rules 2025
The timeline for deepfake takedown upon complaint under IT Rules 2025 is:
Explanation
IT Rules 2025 require deepfake content removal within 24 hours of receiving a complaint. This is stricter than the general 36-hour timeline for other AI content.
Q8 Part 2: IT Rules 2025
Under Shreya Singhal, "actual knowledge" for Section 79 safe harbour requires:
Explanation
Shreya Singhal clarified that "actual knowledge" under Section 79(3)(b) requires court order or government notification - not mere user complaints or media reports.
Q9 Part 3: AI Liability
The key challenge for AI liability is often referred to as the "liability gap" because:
Explanation
The AI liability gap arises because AI's autonomy, opacity, and adaptability challenge traditional tort frameworks that assume human decision-making and predictable behavior.
Q10 Part 3: AI Liability
Under Consumer Protection Act 2019, AI-powered software is most likely classified as:
Explanation
AI classification as "product" depends on its form: embedded AI in hardware is clearly a product; SaaS AI is traditionally a service; the line is evolving in jurisprudence.
Q11 Part 3: AI Liability
For vicarious liability to apply to AI, the key question is whether:
Explanation
Vicarious liability analysis for AI focuses on whether the deployer (analogous to employer) controls AI operations and benefits from them, similar to employer-employee relationships.
Q12 Part 4: Sector Regulations
Which regulator oversees AI-powered medical devices in India?
Explanation
CDSCO (Central Drugs Standard Control Organisation) regulates AI-powered medical devices under the Medical Devices Rules, 2017.
Q13 Part 4: Sector Regulations
For Class D AI medical devices, what is required before market approval?
Explanation
Class D (high-risk) AI medical devices require full clinical evaluation before market approval, demonstrating safety and efficacy for Indian patients.
Q14 Part 4: Sector Regulations
RBI's Digital Lending Guidelines require AI credit decisions to include:
Explanation
RBI Digital Lending Guidelines require disclosure of key factors in AI credit decisions and a human intervention option for rejected applications.
Q15 Part 4: Sector Regulations
SEBI's algorithmic trading framework mandates which safety feature?
Explanation
SEBI's algorithmic trading framework mandates a kill switch - the ability to halt algorithm operations instantly in case of malfunction or adverse market conditions.
Q16 Part 5: AI & IPR
In Thaler v Comptroller (UK Supreme Court 2023), it was held that:
Explanation
The UK Supreme Court in Thaler held that AI cannot be named as inventor - the Patents Act requires inventors to be natural persons who can hold and transfer rights.
Q17 Part 5: AI & IPR
Under Indian Copyright Act, Section 2(d)(vi), authorship of computer-generated works belongs to:
Explanation
Section 2(d)(vi) of the Copyright Act defines authorship of computer-generated works as belonging to "the person who causes the work to be created" - a unique provision in Indian law.
Q18 Part 5: AI & IPR
Using copyrighted works for AI training in India faces which primary legal challenge?
Explanation
India's fair dealing provisions are narrower than US fair use and lack a "transformative use" doctrine. AI training on copyrighted works without license is legally risky.
Q19 Part 5: AI & IPR
Section 3(k) of the Indian Patent Act excludes which from patentability?
Explanation
Section 3(k) excludes "computer programme per se" from patentability. However, software with technical effect may be patentable - the "per se" limitation is key.
Q20 Part 6: AI Ethics
"Historical bias" in AI refers to:
Explanation
Historical bias occurs when training data reflects past discrimination - for example, hiring AI trained on historical data where women were underrepresented in certain roles.
Q21 Part 6: AI Ethics
Article 14 of the Indian Constitution can apply to AI discrimination because:
Explanation
Article 14 directly binds government AI. For private AI, constitutional principles may apply through statutes like DPDPA, consumer protection laws, and anti-discrimination provisions.
Q22 Part 6: AI Ethics
"Proxy discrimination" in AI occurs when:
Explanation
Proxy discrimination occurs when facially neutral variables (like pin code) correlate with protected characteristics (like caste/religion), causing indirect discrimination.
Q23 Part 6: AI Ethics
Under DPDPA 2023, data principals have which right relevant to AI processing?
Explanation
Under DPDPA Section 11, data principals have the right to access and correct personal data being processed, including by AI systems.
Q24 Part 6: AI Ethics
"Demographic parity" as a fairness metric means:
Explanation
Demographic parity requires equal positive outcome rates across demographic groups - for example, equal approval rates for different communities in lending.
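The metric in the explanation above can be illustrated numerically. A minimal sketch, using hypothetical approval data for two groups (not from any real lender):

```python
# Hypothetical loan decisions: 1 = approved, 0 = rejected.
group_a = [1, 1, 0, 1, 0]
group_b = [1, 0, 0, 1, 0]

# Demographic parity compares positive-outcome (approval) rates across groups.
rate_a = sum(group_a) / len(group_a)  # 0.6
rate_b = sum(group_b) / len(group_b)  # 0.4

# One common screening heuristic (the "four-fifths rule") flags disparity
# when the lower rate is below 80% of the higher rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rates: {rate_a:.0%} vs {rate_b:.0%}, ratio {ratio:.2f}")
```

Here the ratio is about 0.67, below the 0.8 heuristic threshold, so the outcome rates would be flagged for review even though no protected attribute is used directly in the decision.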
Q25 Part 7: AI Contracts
In AI development contracts, "pre-existing IP" typically refers to:
Explanation
Pre-existing IP refers to each party's background technology, tools, and IP that existed before entering the contract - this typically remains with the original owner.
Q26 Part 7: AI Contracts
When drafting AI performance warranties, best practice is to:
Explanation
Best practice is to specify measurable performance metrics (like accuracy percentage on test data) with clear testing methodology, not absolute guarantees given AI's probabilistic nature.
Q27 Part 7: AI Contracts
A controversial AI contract clause often negotiated is:
Explanation
Training rights clauses allowing vendors to use customer data to improve their general AI model are frequently contested. Customers may demand opt-out, anonymization, or exclusion.
Q28 Part 7: AI Contracts
AI liability limitation clauses typically exclude which claims from caps?
Explanation
Liability caps typically exclude IP infringement, gross negligence, willful misconduct, and data protection violations as these represent serious risks requiring uncapped exposure.
Q29 Part 8: Compliance
A multi-level AI governance structure typically includes which bodies?
Explanation
Effective AI governance includes multiple levels: Board (strategy), AI Steering Committee (policy), Ethics Committee (ethical review), and Legal/Compliance (regulatory compliance).
Q30 Part 8: Compliance
AI risk classification should be based primarily on:
Explanation
AI risk classification should primarily consider potential impact on individuals and fundamental rights - life/safety, financial harm, discrimination, privacy intrusion.
Q31 Part 8: Compliance
Under CERT-In directions, AI system logs must be retained for:
Explanation
CERT-In directions require system logs to be maintained for a minimum of 180 days, which applies to AI systems processing user data.
Q32 Part 1: AI Framework
The IndiaAI Mission allocation announced in 2024 is approximately:
Explanation
The IndiaAI Mission approved in March 2024 has an allocation of approximately Rs. 10,372 crore covering compute capacity, innovation centers, datasets, and skills development.
Q33 Part 2: IT Rules 2025
The IT Rules 2025 definition of "Synthetic Media" includes:
Explanation
Synthetic Media under IT Rules 2025 broadly covers any image, audio, video, or text content created, modified, or manipulated using AI/ML technologies.
Q34 Part 3: AI Liability
AI explainability for legal purposes requires decisions to be:
Explanation
Legal explainability requires AI decisions to provide reasons sufficient for judicial review - meeting natural justice requirements, not just technical documentation.
Q35 Part 4: Sector Regulations
IRDAI's key concern about AI in insurance pricing is:
Explanation
IRDAI is concerned about AI using proxy variables that correlate with prohibited factors (like genetic data, HIV status), leading to indirect discrimination in underwriting.
Q36 Part 5: AI & IPR
To patent AI innovations while navigating Section 3(k), claims should focus on:
Explanation
To overcome Section 3(k), AI patent claims should emphasize technical effect achieved by the AI and include hardware integration where possible, not just the algorithm.
Q37 Part 6: AI Ethics
The "impossibility theorem" in AI fairness refers to:
Explanation
The impossibility theorem shows that different fairness metrics are often mutually incompatible - achieving demographic parity may sacrifice calibration, and vice versa. Organizations must choose.
Q38 Part 7: AI Contracts
In AI SaaS agreements, "model isolation" means:
Explanation
Model isolation means each customer's AI instance is logically separated from other customers, ensuring data is not commingled and outputs are not influenced by other customers' data.
Q39 Part 8: Compliance
Documentation in AI compliance serves primarily to:
Explanation
Documentation serves dual purposes: operational effectiveness and legal defence. In litigation, documented processes, testing, and oversight demonstrate reasonable care and good faith compliance.
Q40 Part 2: IT Rules 2025
Algorithm transparency reports under IT Rules 2025 must be published:
Explanation
SSMIs must publish algorithm transparency reports annually, describing primary parameters, user control options, and measures against harmful content amplification.
Q41 Part 3: AI Liability
"Human-in-the-loop" AI oversight means:
Explanation
Human-in-the-loop means a human approves every AI decision before it takes effect. This is the highest level of human oversight, distinct from "on-the-loop" (monitoring with intervention capability).
Q42 Part 4: Sector Regulations
For AI in critical infrastructure, NCIIPC requirements include:
Explanation
Critical infrastructure AI faces stringent requirements: security clearance for vendors, data localization in India, and mandatory manual override capability.
Q43 Part 5: AI & IPR
Trade secret protection for AI is particularly valuable because:
Explanation
Trade secret protection avoids patent disclosure requirements (which reveal the innovation) and Section 3(k) challenges that computer programs "per se" face in patentability.
Q44 Part 6: AI Ethics
Pre-processing approaches to bias mitigation include:
Explanation
Pre-processing approaches address bias before training by resampling training data, removing biased features, or generating synthetic data for underrepresented groups.
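The resampling technique mentioned above can be sketched in a few lines. This is a simplified illustration with made-up rows, not a production method; group labels and counts are assumptions for the example:

```python
import random

random.seed(0)

# Hypothetical training rows tagged with a group label; group "B" is
# underrepresented (8 vs 2).
data = [(f"row{i}", "A") for i in range(8)] + [(f"row{i}", "B") for i in range(2)]

majority = [r for r in data if r[1] == "A"]
minority = [r for r in data if r[1] == "B"]

# Pre-processing mitigation: oversample the minority group (with replacement)
# until both groups contribute equally to the training set.
balanced = majority + [random.choice(minority) for _ in range(len(majority))]

counts = {g: sum(1 for _, grp in balanced if grp == g) for g in ("A", "B")}
print(counts)  # {'A': 8, 'B': 8}
```

Oversampling is only one of the pre-processing options the explanation lists; feature removal and synthetic data generation operate on the same principle of correcting the data before the model ever sees it.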
Q45 Part 7: AI Contracts
For DPDPA compliance, AI vendor contracts must include:
Explanation
DPDPA requires Data Processing Agreements with processing instructions, security measures, sub-processor restrictions, breach notification, audit rights, and data return/deletion provisions.
Q46 Part 8: Compliance
When advising AI clients, the first step should be:
Explanation
Effective AI advisory begins with understanding the AI: what type, what data it uses, which sectors apply, risk level, and geographic scope. This informs all subsequent advice.
Q47 Part 1: AI Framework
Scenario
A startup wants to deploy a generative AI chatbot that creates content for marketing. Which global regulation has extraterritorial effect and may apply even to Indian companies?
The relevant regulation is:
Explanation
The EU AI Act has extraterritorial application similar to GDPR. Indian companies serving EU customers must comply with its requirements for generative AI systems.
Q48 Part 3: AI Liability
Scenario
An AI radiology system fails to detect a tumor. The hospital, AI vendor, and radiologist are sued.
The radiologist's potential defence includes:
Explanation
The radiologist's best defence is demonstrating they followed the standard of care for AI-assisted diagnosis - which includes appropriate human oversight, not blind reliance on AI.
Q49 Part 5: AI & IPR
Scenario
An AI image generator creates artwork similar to a registered artist's work. The AI was trained on scraped images.
The key legal issue regarding training data is:
Explanation
Using copyrighted works for AI training involves reproduction - a right held by copyright owners. Without license or valid fair dealing defence, this may constitute copyright infringement.
Q50 Part 8: Compliance
The overarching principle for AI governance should be:
Explanation
AI governance should be proportionate to risk. Low-risk AI needs minimal oversight; high-risk AI (affecting life, rights, finances) requires comprehensive governance including ethics review and human oversight.