CBSE Class 12 Mathematics

CBSE Class 12 Mathematics NCERT Solutions in PDF, previous years' Board question papers and sample papers with solutions, formula sheets, the latest syllabus, and RD Sharma solutions for all chapters. Access the study material for Maths, download it free in PDF, and practise to score better marks in examinations. All study material has been prepared based on the latest guidelines, term examination pattern and blueprint issued by CBSE and NCERT.

Click on the tabs below for Class 12 Mathematics worksheets, assignments, syllabus, NCERT/CBSE books, NCERT solutions, HOTS, multiple choice questions (MCQs), easy-to-learn concepts and study notes for all Class 12 Mathematics chapters, online tests, sample papers and previous years' solved question papers.

Unit-I: Relations and Functions (08 Marks)

  • Relations and Functions: Types of relations: reflexive, symmetric, transitive and equivalence relations. One to one and onto functions.
  • Inverse Trigonometric Functions: Definition, range, domain, principal value branch. Graphs of inverse trigonometric functions.

Note for Students: This unit focuses on the foundational concepts of set-based relationships and the properties of inverse functions required for advanced calculus.


Unit-II: Algebra (10 Marks)

  • Matrices: Concept, notation, order, equality, types of matrices, zero and identity matrix, transpose of a matrix, symmetric and skew-symmetric matrices. Operations on matrices: addition, multiplication and multiplication with a scalar. Simple properties of addition, multiplication and scalar multiplication. Non-commutativity of multiplication of matrices and existence of non-zero matrices whose product is the zero matrix (restrict to square matrices of order 2). Invertible matrices and proof of the uniqueness of inverse, if it exists. (Here all matrices will have real entries.)
  • Determinants: Determinant of a square matrix (up to 3 x 3 matrices), minors, co-factors and applications of determinants in finding the area of a triangle. Adjoint and inverse of a square matrix. Consistency, inconsistency and number of solutions of system of linear equations by examples, solving system of linear equations in two or three variables (having unique solution) using inverse of a matrix.

Note for Students: Mastering matrices and determinants is essential for solving systems of linear equations and understanding linear transformations.


Unit-III: Calculus (35 Marks)

  • Continuity and Differentiability: Continuity and differentiability, chain rule, derivative of composite functions, derivatives of inverse trigonometric functions like sin⁻¹ x, cos⁻¹ x and tan⁻¹ x, derivative of implicit functions. Concept of exponential and logarithmic functions. Derivatives of logarithmic and exponential functions. Logarithmic differentiation, derivative of functions expressed in parametric forms. Second order derivatives.
  • Applications of Derivatives: Applications of derivatives: rate of change of quantities, increasing/decreasing functions, maxima and minima (first derivative test motivated geometrically and second derivative test given as a provable tool). Simple problems (that illustrate basic principles and understanding of the subject as well as real-life situations).
  • Integrals: Integration as inverse process of differentiation. Integration of a variety of functions by substitution, by partial fractions and by parts, Evaluation of simple integrals of the following types and problems based on them. Fundamental Theorem of Calculus (without proof). Basic properties of definite integrals and evaluation of definite integrals.
  • Application of the Integrals: Applications in finding the area under simple curves, especially lines, circles/ parabolas/ellipses (in standard form only).
  • Differential Equations: Definition, order and degree, general and particular solutions of a differential equation. Solution of differential equations by method of separation of variables, solutions of homogeneous differential equations of first order and first degree. Solutions of linear differential equation of the type: dy/dx + py = q, where p and q are functions of x or constants; dx/dy + px = q, where p and q are functions of y or constants.

Note for Students: Calculus carries the highest weightage in the exam, covering everything from the mechanics of derivatives to the application of integrals in area calculation.


Unit-IV: Vectors and Three-dimensional Geometry (14 Marks)

  • Vectors: Vectors and scalars, magnitude and direction of a vector. Direction cosines and direction ratios of a vector. Types of vectors (equal, unit, zero, parallel and collinear vectors), position vector of a point, negative of a vector, components of a vector, addition of vectors, multiplication of a vector by a scalar, position vector of a point dividing a line segment in a given ratio. Definition, Geometrical Interpretation, properties and application of scalar (dot) product of vectors, vector (cross) product of vectors.
  • Three-dimensional Geometry: Direction cosines and direction ratios of a line joining two points. Cartesian equation and vector equation of a line, skew lines, shortest distance between two lines. Angle between two lines.

Note for Students: This unit bridges the gap between algebraic vectors and spatial geometry, focusing on lines and their relationships in 3D space.


Unit-V: Linear Programming Problem (05 Marks)

  • Linear Programming: Introduction, related terminology such as constraints, objective function, optimization, graphical method of solution for problems in two variables, feasible and infeasible regions (bounded or unbounded), feasible and infeasible solutions, optimal feasible solutions (up to three non-trivial constraints).

Note for Students: This topic provides practical tools for optimization, helping you find the best outcome in various real-world resource constraints.


Unit-VI: Probability (08 Marks)

  • Probability: Conditional probability, multiplication theorem on probability, independent events, total probability, Bayes’ theorem.

Note for Students: Focus on understanding the logic behind Bayes' theorem and conditional events to solve complex probability scenarios.

 

Formulas for Class 12 Mathematics

  • Relations and functions
  • Inverse trigonometric functions
  • Calculus identities
  • Continuity
  • Differentiation
  • Application of derivative
  • Indefinite integral
  • Definite integral
  • Matrices
  • Determinants
  • Solution of system of linear equations

RELATIONS AND FUNCTIONS

I. RELATION

  • Let \( A \) and \( B \) be two sets. A relation between \( A \) and \( B \) is a collection of ordered pairs \( (a, b) \) such that \( a \in A \) and \( b \in B \).
  • If \( R: A \rightarrow B \) is a relation from \( A \) to \( B \), then \( R \subseteq A \times B \).
  • If \( n(A) = m \), \( n(B) = n \), then total number of relations from \( A \) to \( B \) is \( 2^{mn} \).
  • Domain of \( R = \{ a: (a, b) \in R \} \)
  • Range of \( R = \{ b: (a, b) \in R \} \)
  • Co-domain of \( R = B \)

II. Equivalence Relation

Let \( S \) be a set and \( R \) a relation between \( S \) and itself. We call \( R \) an equivalence relation on \( S \) if \( R \) has the following three properties:

  • Reflexivity: Every element of \( S \) is related to itself \( \implies (a, a) \in R, \forall a \in S \).
  • Symmetry: If \( a \) is related to \( b \) then \( b \) is related to \( a \). \( (a, b) \in R \implies (b, a) \in R, \forall a, b \in S \).
  • Transitivity: If \( a \) is related to \( b \) and \( b \) is related to \( c \), then \( a \) is related to \( c \). \( (a, b) \in R, (b, c) \in R \implies (a, c) \in R, \forall a, b, c \in S \).

Antisymmetric - A relation is antisymmetric if \( a R b \) and \( b R a \implies a = b \) for all values \( a \) and \( b \).
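The three defining properties can be checked mechanically. A minimal sketch in Python, using a hypothetical relation R = "a and b leave the same remainder on division by 3" over S = {0, ..., 5}, which is a standard example of an equivalence relation:

```python
# Test reflexivity, symmetry and transitivity of a relation R (a set of pairs) on S.

def is_reflexive(S, R):
    return all((a, a) in R for a in S)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

S = set(range(6))
R = {(a, b) for a in S for b in S if a % 3 == b % 3}   # congruence mod 3

assert is_reflexive(S, R) and is_symmetric(R) and is_transitive(R)
```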

III. FUNCTIONS :

  • Definition - A relation on \( A \times B \) in which
    • no two ordered pairs have the same first element, and
    • every element of \( A \) appears as the first element of some pair, is called a function. It is also called a mapping. A function is said to map an element \( x \) in its domain to an element \( y \) in its range: \( f: A \rightarrow B \) or \( f: x \rightarrow f(x) \), with \( f(x) = y \) where \( y \) is a function of \( x \).
  • DOMAIN - The set of all the first elements of the ordered pairs of a function is called the domain.
  • RANGE - The set of all the second elements of the ordered pairs of a function is called the range.
  • CODOMAIN - If \( (a, b) \) is an ordered pair of the function \( f: A \rightarrow B \) then the set \( B \) is called the Co-Domain. The range is a subset of the co-domain.

IV. Some important facts about a function from A to B:

  • Every element in \( A \) is in the domain of the function; that is, every element of \( A \) is mapped to some element in the range. (If some element of \( A \) has no mapping (arrow), then the relation is not a function!)
  • No element in the domain maps to more than one element in the range.
  • The mapping is not necessarily onto; some elements of \( B \) may not be in the range.
  • The mapping is not necessarily one-one; some elements of \( B \) may have more than one element of \( A \) mapped to them.
  • \( A \) and \( B \) need not be disjoint.

V. Types of functions

  • Injections: A function \( f \) from \( A \) to \( B \) is called one to one (or one-one) if \( f(x_1) = f(x_2) \implies x_1 = x_2 \). For finite sets this forces \( n(A) \leq n(B) \).
  • Surjections: A function \( f \) from \( A \) to \( B \) is called onto if for every \( b \) in \( B \) there is an \( a \) in \( A \) such that \( f(a) = b \), i.e. \( \forall b \in B, \exists a \in A : f(a) = b \). For finite sets this forces \( n(A) \geq n(B) \). Range = Co-domain.
  • Bijections are functions that are both injective and surjective, i.e. a function \( f \) from \( A \) to \( B \) is called a bijection if it is one to one and onto. For finite sets, \( n(A) = n(B) \).
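For finite sets the two properties are easy to test directly. A small sketch, storing a function \( A \rightarrow B \) as a Python dict (the helper names here are hypothetical):

```python
# One-one: no two keys share a value. Onto: every element of B is hit.

def is_injective(f):
    return len(set(f.values())) == len(f)

def is_surjective(f, B):
    return set(f.values()) == set(B)

A, B = {1, 2, 3}, {"x", "y", "z"}
f = {1: "x", 2: "y", 3: "z"}   # a bijection: one-one and onto
g = {1: "x", 2: "x", 3: "y"}   # neither one-one nor onto B

assert is_injective(f) and is_surjective(f, B)
assert not is_injective(g) and not is_surjective(g, B)
```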

VI. Some special functions with their domain, range and nature

  • Polynomial function \( p(x) = a_0 + a_1x + a_2x^2 + \dots + a_nx^n \); domain = \( R \); range = \( R \); continuous
  • Constant Function \( f(x) = k \); domain = \( R \); range = \( \{k\} \); continuous
  • Identity function \( I(x) = x \); domain = \( R \); range = \( R \); continuous
  • Exponential function \( f(x) = e^x \) or \( a^x \); domain = \( R \); range = \( (0, \infty) \); continuous
  • Logarithmic function \( f(x) = \log x \) or \( \ln x \); domain = \( (0, \infty) \); range = \( R \); continuous
  • Square root function \( f(x) = \sqrt{x} \); domain = \( [0, \infty) \); range = \( [0, \infty) \); continuous
  • Sine function - \( \sin: R \rightarrow [-1, 1] \); continuous
  • Cosine function - \( \cos: R \rightarrow [-1, 1] \); continuous
  • Tangent function - \( \tan: R - \{ x: x = \frac{(2n+1)\pi}{2} \} \rightarrow R \); continuous in its domain
  • Secant function - \( \sec: R - \{ x: x = \frac{(2n+1)\pi}{2} \} \rightarrow R - (-1, 1) \); continuous in its domain
  • Cosecant function - \( \csc: R - \{ x: x = n\pi, n \in Z \} \rightarrow R - (-1, 1) \); continuous in its domain
  • Cotangent function - \( \cot: R - \{ x: x = n\pi, n \in Z \} \rightarrow R \); continuous in its domain
  • Floor function \( \lfloor x \rfloor \) = Greatest integer that is less than or equal to \( x \). domain = \( R \), range = \( Z \); discontinuous.
  • Ceiling function \( \lceil x \rceil \) = Least integer that is greater than or equal to \( x \). domain = \( R \), range = \( Z \); discontinuous.
  • Reciprocal function \( f(x) = \frac{1}{x} \); domain = \( R - \{0\} \); range = \( R - \{0\} \); continuous in \( R^+ \) and \( R^- \)
  • Modulus function \( f(x) = |x| = \begin{cases} x, & \text{if } x \geq 0 \\ -x, & \text{if } x < 0 \end{cases} \); Domain = \( R \); Range = \( [0, \infty) \); continuous.
  • Signum function \( f(x) = \begin{cases} \frac{|x|}{x}, & \forall x \neq 0 \\ 0, & x = 0 \end{cases} = \begin{cases} 1, & x > 0 \\ 0, & x = 0 \\ -1, & x < 0 \end{cases} \); domain = \( R \); range = \( \{-1, 0, 1\} \); discontinuous.

VII. COMPOSITION OF FUNCTIONS

Function composition is the application of one function to the results of another. For instance, the functions \( f: X \rightarrow Y \) and \( g: Y \rightarrow Z \) can be composed by computing the output of \( g \) when it has an input of \( f(x) \) instead of \( x \). A function \( g \circ f: X \rightarrow Z \) defined by \( (g \circ f)(x) = g(f(x)) \) for all \( x \) in \( X \).

  • The composition of functions is always associative. That is, if \( f \), \( g \), and \( h \) are three functions with suitably chosen domains and codomains, then \( f \circ (g \circ h) = (f \circ g) \circ h \).
  • The functions \( g \) and \( f \) are said to commute with each other if \( g \circ f = f \circ g \).

VIII. INVERSE OF A FUNCTION

Let \( f \) be a bijective function whose domain is the set \( X \), and whose range is the set \( Y \). Then, if it exists, the inverse of \( f \) is the function \( f^{-1} \) with domain \( Y \) and range \( X \), defined by the following rule: If \( f(x) = y \), then \( f^{-1}(y) = x \).

  • A function is invertible if and only if it is both one-to-one and onto (i.e. a bijection), so that every element \( y \in Y \) corresponds to exactly one element \( x \in X \).
  • Domain \( (f) = \) range \( (f^{-1}) \) and range \( (f) = \) domain \( (f^{-1}) \).
  • Inverses and composition - If \( f \) is an invertible function with domain \( X \) and range \( Y \), then \( f^{-1}(f(x)) = x \) for every \( x \in X \).
  • There is a symmetry between a function and its inverse. Specifically, if the inverse of \( f \) is \( f^{-1} \), then the inverse of \( f^{-1} \) is the original function \( f \); i.e. if \( f^{-1} \circ f = I_X \), then \( f \circ f^{-1} = I_Y \).
  • Only one-to-one functions have a unique inverse.
  • If the function is not one-to-one, the domain of the function must be restricted so that a portion of the graph is one-to-one.
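A quick numeric sketch for the bijection \( f(x) = 2x + 3 \) on \( R \), whose inverse is \( f^{-1}(y) = (y - 3)/2 \): both compositions return the identity.

```python
def f(x):
    return 2 * x + 3

def f_inv(y):
    return (y - 3) / 2

for t in [-2.5, 0.0, 1.0, 7.25]:
    assert f_inv(f(t)) == t   # f⁻¹ ∘ f = identity on the domain
    assert f(f_inv(t)) == t   # f ∘ f⁻¹ = identity on the range
```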

IX. Inverse of a composition

The inverse of a composition of functions is given by \( (g \circ f)^{-1} = f^{-1} \circ g^{-1} \); equivalently, \( (f \circ g)^{-1} = g^{-1} \circ f^{-1} \). The inverses are applied in the reverse order of the composition.

X. BINARY OPERATION on a set

Let \( A \) be a non-empty set. A binary operation \( * \) on the set \( A \) is a function \( *: A \times A \rightarrow A \) such that \( a*b \in A, \forall (a, b) \in A \times A \).

  • Commutative property - A binary operation \( * \) on the set \( A \) is said to be commutative if \( a*b = b*a, \forall a, b \in A \).
  • Associative property - A binary operation \( * \) on the set \( A \) is said to be associative if \( a*(b*c) = (a*b)*c, \forall a, b, c \in A \).
  • Identity element of a binary operation – Given a binary operation \( *: A \times A \rightarrow A \), a unique element \( e \in A \), if it exists, is called the identity element for \( * \) if \( a*e = a = e*a, \forall a \in A \).
  • Inverse of an element - Given a binary operation \( *: A \times A \rightarrow A \), the identity element \( e \in A \), an element \( a \) is called invertible w.r.t. \( * \) if \( \exists b \in A \) such that \( a*b = e = b*a \). Then \( b \) is called the inverse of \( a \) and is denoted by \( a^{-1} \) i.e. \( a * a^{-1} = e = a^{-1} * a \).
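These properties can be verified on a sample of values. A sketch with the hypothetical operation \( a*b = a + b + ab \) on the integers, which is commutative and associative with identity \( e = 0 \):

```python
def op(a, b):
    return a + b + a * b

sample = range(-3, 4)
assert all(op(a, b) == op(b, a) for a in sample for b in sample)        # commutative
assert all(op(a, op(b, c)) == op(op(a, b), c)
           for a in sample for b in sample for c in sample)             # associative
assert all(op(a, 0) == a == op(0, a) for a in sample)                   # identity e = 0
```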

INVERSE TRIGONOMETRIC FUNCTIONS

INVERSE TRIGONOMETRIC FUNCTIONS (or cyclometric functions) are the inverse functions of the trigonometric functions, obtained by restricting their domains to principal value branches so that the trigonometric functions become bijective. The principal inverses are listed below:

  • arcsine: \( y = \sin^{-1} x \), Definition: \( x = \sin y \), Domain: \( -1 \leq x \leq 1 \), Range (radians): \( -\pi/2 \leq y \leq \pi/2 \), Range (degrees): \( -90^\circ \leq y \leq 90^\circ \)
  • arccosine: \( y = \cos^{-1} x \), Definition: \( x = \cos y \), Domain: \( -1 \leq x \leq 1 \), Range (radians): \( 0 \leq y \leq \pi \), Range (degrees): \( 0^\circ \leq y \leq 180^\circ \)
  • arctangent: \( y = \tan^{-1} x \), Definition: \( x = \tan y \), Domain: All real numbers, Range (radians): \( -\pi/2 < y < \pi/2 \), Range (degrees): \( -90^\circ < y < 90^\circ \)
  • arccotangent: \( y = \cot^{-1} x \), Definition: \( x = \cot y \), Domain: All real numbers, Range (radians): \( 0 < y < \pi \), Range (degrees): \( 0^\circ < y < 180^\circ \)
  • arcsecant: \( y = \sec^{-1} x \), Definition: \( x = \sec y \), Domain: \( x \leq -1 \) or \( 1 \leq x \), Range (radians): \( 0 \leq y < \pi/2 \) or \( \pi/2 < y \leq \pi \), Range (degrees): \( 0^\circ \leq y < 90^\circ \) or \( 90^\circ < y \leq 180^\circ \)
  • arccosecant: \( y = \csc^{-1} x \), Definition: \( x = \csc y \), Domain: \( x \leq -1 \) or \( 1 \leq x \), Range (radians): \( -\pi/2 \leq y < 0 \) or \( 0 < y \leq \pi/2 \), Range (degrees): \( -90^\circ \leq y < 0^\circ \) or \( 0^\circ < y \leq 90^\circ \)

Properties of the inverse trigonometric functions

I. COMPLEMENTARY ANGLES:

  • \( \sin^{-1} x + \cos^{-1} x = \frac{\pi}{2} \)
  • \( \sec^{-1} x + \csc^{-1} x = \frac{\pi}{2} \)
  • \( \tan^{-1} x + \cot^{-1} x = \frac{\pi}{2} \)
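A numeric spot-check of these identities. Python's `math` module has only `asin`, `acos` and `atan`, so \( \sec^{-1} x \) and \( \csc^{-1} x \) are written via \( \cos^{-1}(1/x) \) and \( \sin^{-1}(1/x) \) for \( |x| \geq 1 \):

```python
import math

for x in [-0.9, -0.3, 0.5, 1.0]:
    assert math.isclose(math.asin(x) + math.acos(x), math.pi / 2)

for x in [-5.0, -1.5, 2.0, 10.0]:   # |x| >= 1 for sec⁻¹ and csc⁻¹
    assert math.isclose(math.acos(1 / x) + math.asin(1 / x), math.pi / 2)
```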

II. NEGATIVE ARGUMENTS:

  • \( \sin^{-1}(-x) = -\sin^{-1} x \)
  • \( \cos^{-1}(-x) = \pi - \cos^{-1} x \)
  • \( \tan^{-1}(-x) = -\tan^{-1} x \)
  • \( \cot^{-1}(-x) = \pi - \cot^{-1} x \)
  • \( \sec^{-1}(-x) = \pi - \sec^{-1} x \)
  • \( \csc^{-1}(-x) = -\csc^{-1} x \)

III. RECIPROCAL ARGUMENTS:

  • \( \sin^{-1} \left( \frac{1}{x} \right) = \csc^{-1} x \)
  • \( \csc^{-1} \left( \frac{1}{x} \right) = \sin^{-1} x \)
  • \( \cos^{-1} \left( \frac{1}{x} \right) = \sec^{-1} x \)
  • \( \sec^{-1} \left( \frac{1}{x} \right) = \cos^{-1} x \)
  • \( \tan^{-1} \left( \frac{1}{x} \right) = \cot^{-1} x \) if \( x > 0 \)
  • \( \tan^{-1} \left( \frac{1}{x} \right) = -\pi + \cot^{-1} x \) if \( x < 0 \)

IV. CONVERSION FORMULA

Use the Pythagoras theorem in a right triangle to get the third side: \( \sin^{-1} \left( \frac{p}{h} \right) = \cos^{-1} \left( \frac{b}{h} \right) = \tan^{-1} \left( \frac{p}{b} \right) \), where \( p \) is the perpendicular, \( b \) the base and \( h \) the hypotenuse.

V. SUM FORMULA

  • \( \sin^{-1} x \pm \sin^{-1} y = \sin^{-1} [x \sqrt{1-y^2} \pm y \sqrt{1-x^2}] \)
  • \( \cos^{-1} x \pm \cos^{-1} y = \cos^{-1} [xy \mp \sqrt{1-x^2}\sqrt{1-y^2}] \)
  • \( \tan^{-1} x \pm \tan^{-1} y = \tan^{-1} \left( \frac{x \pm y}{1 \mp xy} \right) \)

VI. MULTIPLE FORMULA

  • \( 2 \sin^{-1} x = \sin^{-1} [2x \sqrt{1-x^2}] \)
  • \( 2 \cos^{-1} x = \cos^{-1} [2x^2 - 1] \)
  • \( 2 \tan^{-1} x = \tan^{-1} \left( \frac{2x}{1-x^2} \right) = \sin^{-1} \left( \frac{2x}{1+x^2} \right) = \cos^{-1} \left( \frac{1-x^2}{1+x^2} \right) \)

CALCULUS

I. ALGEBRAIC AND TRIGONOMETRIC IDENTITIES

  • \( a^3 + b^3 = (a+b)(a^2 - ab + b^2) \)
  • \( a^3 - b^3 = (a-b)(a^2 + ab + b^2) \)
  • \( \sin^2 x + \cos^2 x = 1 \)
  • \( 1 + \tan^2 x = \sec^2 x \)
  • \( 1 + \cot^2 x = \csc^2 x \)
  • \( \sin(u \pm v) = \sin u \cos v \pm \cos u \sin v \)
  • \( \cos(u \pm v) = \cos u \cos v \mp \sin u \sin v \)
  • \( \sin 2u = 2 \sin u \cos u = \frac{2 \tan u}{1 + \tan^2 u} \)
  • \( \cos 2u = \cos^2 u - \sin^2 u = 2 \cos^2 u - 1 = 1 - 2 \sin^2 u = \frac{1 - \tan^2 u}{1 + \tan^2 u} \)
  • \( \sin^2 u = \frac{1 - \cos 2u}{2} \)
  • \( \cos^2 u = \frac{1 + \cos 2u}{2} \)

IV. CONTINUITY

DEFINITION - Continuity of a function at a point: a function \( f(x) \) is said to be continuous at the point \( x = a \) if \( \lim_{x \to a} f(x) = f(a) \).

  • Continuity of a function \( f(x) \) at \( x = a \) means
    • \( f(x) \) is defined at \( a \) i.e. the point \( a \) lies in the domain of \( f \)
    • \( \lim_{x \to a} f(x) \) exists i.e. \( \lim_{x \to a^-} f(x) = \lim_{x \to a^+} f(x) \)
    • \( \lim_{x \to a} f(x) = f(a) \)

V. DIFFERENTIATION

I. Definition of derivative:

If \( y = f(x) \) then \( y' = \frac{df(x)}{dx} = f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \)
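The limit definition suggests a numerical sanity check: for small \( h \), the difference quotient should approach the known derivative. A sketch:

```python
import math

def diff_quotient(f, x, h=1e-6):
    # forward-difference approximation of f'(x)
    return (f(x + h) - f(x)) / h

# d(sin x)/dx = cos x, and d(x³)/dx = 3x² (so 12 at x = 2)
assert math.isclose(diff_quotient(math.sin, 1.0), math.cos(1.0), rel_tol=1e-4)
assert math.isclose(diff_quotient(lambda x: x**3, 2.0), 12.0, rel_tol=1e-4)
```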

II. FORMULAS OF DERIVATIVES

  • \( \frac{d(C)}{dx} = 0 \)
  • \( \frac{d(x)}{dx} = 1 \)
  • \( \frac{d(x^n)}{dx} = nx^{n-1} \)
  • \( \frac{d(e^x)}{dx} = e^x \)
  • \( \frac{d(a^x)}{dx} = a^x \log_e a \)
  • \( \frac{d(\log x)}{dx} = \frac{1}{x} \)
  • \( \frac{d(\sin x)}{dx} = \cos x \)
  • \( \frac{d(\cos x)}{dx} = -\sin x \)
  • \( \frac{d(\tan x)}{dx} = \sec^2 x \)
  • \( \frac{d(\cot x)}{dx} = -\csc^2 x \)
  • \( \frac{d(\sec x)}{dx} = \sec x \tan x \)
  • \( \frac{d(\csc x)}{dx} = -\csc x \cot x \)
  • \( \frac{d(\sin^{-1} x)}{dx} = \frac{1}{\sqrt{1-x^2}} \)
  • \( \frac{d(\cos^{-1} x)}{dx} = -\frac{1}{\sqrt{1-x^2}} \)
  • \( \frac{d(\tan^{-1} x)}{dx} = \frac{1}{1+x^2} \)

VII. INDEFINITE INTEGRALS

Definition - If the derivative of \( F(x) \) is \( f(x) \), then the ANTIDERIVATIVE or INTEGRAL of \( f(x) \) is \( F(x) \); it is denoted by \( \int f(x) dx = F(x) + C \), where \( C \) is an arbitrary constant of integration.

I. FORMULA OF INTEGRATION

  • \( \int [f(x) \pm g(x)] dx = \int f(x) dx \pm \int g(x) dx \)
  • \( \int k f(x) dx = k \int f(x) dx \)
  • \( \int x^n dx = \frac{x^{n+1}}{n+1} + C, n \neq -1 \)
  • \( \int \frac{1}{x} dx = \log |x| + C \)
  • \( \int e^x dx = e^x + C \)
  • \( \int a^x dx = \frac{a^x}{\log_e a} + C \)
  • \( \int \sin x dx = -\cos x + C \)
  • \( \int \cos x dx = \sin x + C \)
  • \( \int \sec^2 x dx = \tan x + C \)
  • \( \int \csc^2 x dx = -\cot x + C \)
  • \( \int \sec x \tan x dx = \sec x + C \)
  • \( \int \csc x \cot x dx = -\csc x + C \)

VIII. DEFINITE INTEGRAL:

The Fundamental Theorem of Calculus: Let \( f(x) \) be continuous on \( [a, b] \). If \( F(x) \) is any antiderivative of \( f(x) \), then \( \int_a^b f(x) dx = F(b) - F(a) \).
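A quick sketch of the theorem in action: a midpoint Riemann sum for \( \int_0^\pi \sin x \, dx \) should match \( F(\pi) - F(0) = (-\cos \pi) - (-\cos 0) = 2 \).

```python
import math

def riemann(f, a, b, n=100_000):
    # midpoint-rule approximation of the definite integral
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

assert math.isclose(riemann(math.sin, 0.0, math.pi), 2.0, rel_tol=1e-6)
```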

MATRICES AND DETERMINANTS

  • DEFINITION: A matrix \( A = [a_{ij}]_{m \times n} \) is defined as an ordered rectangular array of numbers in \( m \) rows and \( n \) columns.
  • SQUARE MATRIX: A matrix for which horizontal and vertical dimensions are the same (i.e., an \( n \times n \) matrix).
  • IDENTITY MATRIX: A diagonal matrix \( A = [a_{ij}]_{n \times n} \) is called the identity matrix if \( a_{ij} = 1 \) for \( i = j \) and \( a_{ij} = 0 \) for \( i \neq j \).

II. The Determinant of a Matrix

  • DEFINITION: The determinant of a square matrix \( A \) is a number associated with the matrix, denoted by \( \text{det}(A) \) or \( |A| \). Determinants play an important role in finding the inverse of a matrix and in solving systems of linear equations.
  • Singular matrix – A square matrix is said to be singular if \( |A| = 0 \).
  • Non-Singular matrix – A square matrix is said to be non-singular if \( |A| \neq 0 \).
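A sketch for the \( 2 \times 2 \) case: \( |A| = ad - bc \), and if \( |A| \neq 0 \) the inverse is \( \text{adj}(A)/|A| \). The helper names are hypothetical; `Fraction` keeps the arithmetic exact.

```python
from fractions import Fraction

def det2(A):
    (a, b), (c, d) = A
    return a * d - b * c

def inv2(A):
    (a, b), (c, d) = A
    D = det2(A)
    if D == 0:
        raise ValueError("singular matrix: |A| = 0 has no inverse")
    # adjoint of [[a, b], [c, d]] is [[d, -b], [-c, a]]
    return [[Fraction(d, D), Fraction(-b, D)],
            [Fraction(-c, D), Fraction(a, D)]]

A = [[4, 7], [2, 6]]
assert det2(A) == 10                 # non-singular
Ainv = inv2(A)
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]      # A · A⁻¹ = I
```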

DIFFERENTIAL EQUATIONS

  • ORDER: The ORDER of a differential equation is the highest derivative that appears in the equation.
  • DEGREE: The DEGREE of a differential equation is the power (exponent) of the highest-order derivative that appears in the equation, once the equation has been expressed as a polynomial in the derivatives.

VECTORS

  • A quantity that has magnitude as well as direction is called a vector.
  • UNIT VECTOR along \( \vec{a} \) is given by \( \hat{a} = \frac{\vec{a}}{|\vec{a}|} \).
  • SCALAR (DOT) PRODUCT: \( \vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}| \cos \theta \).
  • VECTOR (CROSS) PRODUCT: \( \vec{a} \times \vec{b} = |\vec{a}||\vec{b}| \sin \theta \, \hat{n} \), where \( \hat{n} \) is a unit vector perpendicular to both \( \vec{a} \) and \( \vec{b} \).
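A small sketch of both products for 3-D vectors given as tuples, checked against \( \hat{i} \times \hat{j} = \hat{k} \):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def magnitude(u):
    return math.sqrt(dot(u, u))

i_hat, j_hat = (1, 0, 0), (0, 1, 0)
assert dot(i_hat, j_hat) == 0             # cos θ = 0 for perpendicular vectors
assert cross(i_hat, j_hat) == (0, 0, 1)   # î × ĵ = k̂
theta = math.acos(dot(i_hat, j_hat) / (magnitude(i_hat) * magnitude(j_hat)))
assert math.isclose(theta, math.pi / 2)
```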

PROBABILITY THEORY

  • Probability (for equally likely outcomes): \( P(A) = \frac{\text{Number of ways event A can occur}}{\text{Total number of possible outcomes}} \).
  • If \( P(A) = 0 \), event \( A \) is impossible.
  • If \( P(A) = 1 \), event \( A \) is certain.
  • Independent Events: Two events \( A \) and \( B \) are independent if \( P(A \cap B) = P(A) \cdot P(B) \).
  • Conditional Probability: \( P(B|A) = \frac{P(A \cap B)}{P(A)} \), provided \( P(A) \neq 0 \).
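Total probability and Bayes' theorem work together in the classic screening-test calculation. A sketch with hypothetical numbers: a condition with 1% prevalence, \( P(+|D) = 0.99 \) and a false-positive rate \( P(+|D') = 0.05 \).

```python
p_d = 0.01                  # P(D): prevalence
p_pos_given_d = 0.99        # P(+|D)
p_pos_given_not_d = 0.05    # P(+|D')

# total probability: P(+) = P(+|D)·P(D) + P(+|D')·P(D')
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' theorem: P(D|+) = P(+|D)·P(D) / P(+)
p_d_given_pos = p_pos_given_d * p_d / p_pos

assert round(p_d_given_pos, 3) == 0.167   # only ~1 in 6 positives has the condition
```

Despite the accurate-sounding test, the low prevalence means most positives are false, which is exactly what Bayes' theorem quantifies.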

VI. APPLICATION OF DERIVATIVE

I. APPROXIMATIONS, DIFFERENTIALS AND ERRORS

  • Absolute error - The increment \( \Delta x \) in \( x \) is called the absolute error in \( x \).
  • Relative error - If \( \Delta x \) is an error in \( x \), then \( \frac{\Delta x}{x} \) is called the relative error in \( x \).
  • Percentage error - If \( \Delta x \) is an error in \( x \), then \( \frac{\Delta x}{x} \times 100 \) is called the percentage error in \( x \).
  • Approximation -
    • Take the quantity given in the question as \( y + \Delta y = f(x + \Delta x) \)
    • Take a suitable value of \( x \) nearest to the given value. Calculate \( \Delta x \)
    • Calculate \( y = f(x) \) at the assumed value of \( x \).
    • Calculate \( \frac{dy}{dx} \) at the assumed value of \( x \)
    • Using differential calculate \( \Delta y = \frac{dy}{dx} \times \Delta x \)
    • find the approximate value of the quantity asked in the question as \( y + \Delta y \), from the values of \( y \) and \( \Delta y \) evaluated in step 3 and 5.
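The steps above applied to a worked example: approximate \( \sqrt{25.3} \) with \( y = f(x) = \sqrt{x} \), taking \( x = 25 \) (the nearest perfect square) and \( \Delta x = 0.3 \).

```python
x, dx = 25.0, 0.3
y = x ** 0.5                  # step 3: y = f(25) = 5
dy_dx = 1 / (2 * x ** 0.5)    # step 4: dy/dx = 1/(2√x) = 0.1
dy = dy_dx * dx               # step 5: Δy ≈ 0.03
approx = y + dy               # step 6: √25.3 ≈ 5.03

assert abs(approx - 25.3 ** 0.5) < 1e-3   # true value 5.02991...
```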

 

II. Tangents and normals –

  • Slope of the tangent to the curve \( y = f(x) \) at the point \( (x_0, y_0) \) is given by \( \left. \frac{dy}{dx} \right|_{(x_0, y_0)} \)
  • Equation of the tangent to the curve \( y = f(x) \) at the point \( (x_0, y_0) \) is \( (y - y_0) = \left. \frac{dy}{dx} \right|_{(x_0, y_0)} (x - x_0) \).
  • Slope of the normal to the curve \( y = f(x) \) at the point \( (x_0, y_0) \) is given by \( -\frac{1}{\left. \frac{dy}{dx} \right|_{(x_0, y_0)}} \)
  • Equation of the normal to the curve \( y = f(x) \) at the point \( (x_0, y_0) \) is \( (y - y_0) = -\frac{1}{\left. \frac{dy}{dx} \right|_{(x_0, y_0)}} (x - x_0) \)
  • Two curves \( y = f(x) \) and \( y = g(x) \) are orthogonal if their tangents are perpendicular to each other at the point of intersection.
  • The condition of orthogonality of two curves \( c_1 \) and \( c_2 \) is \( \left. \frac{dy}{dx} \right|_{c_1} \times \left. \frac{dy}{dx} \right|_{c_2} = -1 \)

III. Increasing/Decreasing Functions

  • Definition of an increasing function: A function \( f(x) \) is "increasing" at a point \( x_0 \) if and only if there exists some interval \( I \) containing \( x_0 \) such that \( f(x_0) > f(x) \) for all \( x \) in \( I \) to the left of \( x_0 \) and \( f(x_0) < f(x) \) for all \( x \) in \( I \) to the right of \( x_0 \).
  • Definition of a decreasing function: A function \( f(x) \) is "decreasing" at a point \( x_0 \) if and only if there exists some interval \( I \) containing \( x_0 \) such that \( f(x_0) < f(x) \) for all \( x \) in \( I \) to the left of \( x_0 \) and \( f(x_0) > f(x) \) for all \( x \) in \( I \) to the right of \( x_0 \).
  • To find the intervals in which a given function is increasing or decreasing
    • Differentiate the given function \( y = f(x) \), to get \( f'(x) \)
    • Solve \( f'(x) = 0 \) to find the critical points.
    • Consider all the subintervals of \( R \) formed by the critical points. (no. of subintervals will be one more than the no. of critical points.)
    • Find the value of \( f'(x) \) in each subinterval.
    • \( f'(x) > 0 \) implies \( f(x) \) is increasing and \( f'(x) < 0 \) implies \( f(x) \) is decreasing.
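The procedure above, applied to \( f(x) = x^3 - 3x \): here \( f'(x) = 3x^2 - 3 \) vanishes at \( x = \pm 1 \), giving the subintervals \( (-\infty, -1) \), \( (-1, 1) \), \( (1, \infty) \).

```python
def f_prime(x):
    return 3 * x ** 2 - 3

# one test point inside each subinterval
signs = [f_prime(x) > 0 for x in (-2.0, 0.0, 2.0)]
assert signs == [True, False, True]   # increasing, then decreasing, then increasing
```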

 

VII. CONCAVITY

  • Definition of a concave up curve: \( f(x) \) is "concave up" at \( x_0 \) if and only if \( f'(x) \) is increasing at \( x_0 \), which means \( f''(x_0) > 0 \); at a critical point this indicates a local minimum.
  • Definition of a concave down curve: \( f(x) \) is "concave down" at \( x_0 \) if and only if \( f'(x) \) is decreasing at \( x_0 \), which means \( f''(x_0) < 0 \); at a critical point this indicates a local maximum.
  • The first derivative test: If \( f'(x_0) \) exists and is positive, then \( f(x) \) is increasing at \( x_0 \). If \( f'(x_0) \) exists and is negative, then \( f(x) \) is decreasing at \( x_0 \). If \( f'(x_0) \) does not exist or is zero, then the test fails.
  • The second derivative test: If \( f''(x_0) \) exists and is positive, then \( f(x) \) is concave up at \( x_0 \) (a local minimum if \( x_0 \) is a critical point). If \( f''(x_0) \) exists and is negative, then \( f(x) \) is concave down at \( x_0 \) (a local maximum if \( x_0 \) is a critical point). If \( f''(x_0) \) does not exist or is zero, then the test fails.

 

VIII. Critical Points

  • Definition of a critical point: a critical point on \( f(x) \) occurs at \( x_0 \) if and only if either \( f'(x_0) \) is zero or the derivative doesn't exist.
  • Definition of an inflection point: An inflection point occurs on \( f(x) \) at \( x_0 \) if and only if \( f(x) \) has a tangent line at \( x_0 \) and there exists an interval \( I \) containing \( x_0 \) such that \( f(x) \) is concave up on one side of \( x_0 \) and concave down on the other side.

 

IX. Extrema (Maxima and Minima)

  • Definition of a local maxima: A function \( f(x) \) has a local maximum at \( x_0 \) if and only if there exists some interval \( I \) containing \( x_0 \) such that \( f(x_0) \geq f(x) \) for all \( x \) in \( I \).
  • Definition of a local minima: A function \( f(x) \) has a local minimum at \( x_0 \) if and only if there exists some interval \( I \) containing \( x_0 \) such that \( f(x_0) \leq f(x) \) for all \( x \) in \( I \).
  • Occurrence of local extrema: All local extrema occur at critical points, but not all critical points occur at local extrema.
  • The first derivative test for local extrema: If \( f(x) \) is increasing (\( f'(x) > 0 \)) for all \( x \) in some interval \( (a, x_0] \) and \( f(x) \) is decreasing (\( f'(x) < 0 \)) for all \( x \) in some interval \( [x_0, b) \), then \( f(x) \) has a local maximum at \( x_0 \). If \( f(x) \) is decreasing (\( f'(x) < 0 \)) for all \( x \) in some interval \( (a, x_0] \) and \( f(x) \) is increasing (\( f'(x) > 0 \)) for all \( x \) in some interval \( [x_0, b) \), then \( f(x) \) has a local minimum at \( x_0 \).
  • The second derivative test for local extrema: If \( f'(x_0) = 0 \) and \( f''(x_0) > 0 \), then \( f(x) \) has a local minimum at \( x_0 \). If \( f'(x_0) = 0 \) and \( f''(x_0) < 0 \), then \( f(x) \) has a local maximum at \( x_0 \).
  • To solve word problems of maxima and minima:
    • Draw the figure and list down the facts given in the question.
    • From the given function convert one variable in term of the other.
    • Write down the function to be optimized and convert it into a function of one variable by using the result of step 2.
    • Then proceed to find maxima or minima by applying second derivative test.
    • Evaluate all components of the question.

 

X. Absolute Extrema

  • Definition of absolute maxima: \( y_0 \) is the "absolute maximum" of \( f(x) \) on \( I \) if and only if \( y_0 \geq f(x) \) for all \( x \) on \( I \).
  • Definition of absolute minima: \( y_0 \) is the "absolute minimum" of \( f(x) \) on \( I \) if and only if \( y_0 \leq f(x) \) for all \( x \) on \( I \).
  • The extreme value theorem: If \( f(x) \) is continuous in a closed interval \( I \), then \( f(x) \) has at least one absolute maximum and one absolute minimum in \( I \).
  • Occurrence of absolute maxima: If \( f(x) \) is continuous in a closed interval \( I \), then the absolute maximum of \( f(x) \) in \( I \) is the maximum value of \( f(x) \) on all local maxima and endpoints on \( I \).
  • Occurrence of absolute minima: If \( f(x) \) is continuous in a closed interval \( I \), then the absolute minimum of \( f(x) \) in \( I \) is the minimum value of \( f(x) \) on all local minima and endpoints on \( I \).
  • Alternate method of finding extrema: If \( f(x) \) is continuous in a closed interval \( I \), then the absolute extrema of \( f(x) \) in \( I \) occur at the critical points and/or at the endpoints of \( I \).

 

INDEFINITE INTEGRALS

INTEGRAL OF TRIGONOMETRIC FUNCTIONS:

  • \( \int \tan x \, dx = \log |\sec x| + C = -\log |\cos x| + C \)
  • \( \int \cot x \, dx = \log |\sin x| + C \)
  • \( \int \sec x \, dx = \log |\sec x + \tan x| + C \)
  • \( \int \csc x \, dx = \log |\csc x - \cot x| + C \)
  • \( \int \frac{dx}{\sqrt{a^2 - x^2}} = \sin^{-1} \frac{x}{a} + C \)
  • \( \int \frac{dx}{x \sqrt{x^2 - a^2}} = \frac{1}{a} \sec^{-1} \frac{x}{a} + C \)
  • \( \int \frac{dx}{a^2 + x^2} = \frac{1}{a} \tan^{-1} \frac{x}{a} + C \)

 

Standard formulae

  • \( \int \frac{1}{a^2 - x^2} \, dx = \frac{1}{2a} \log \left| \frac{a+x}{a-x} \right| + C \)
  • \( \int \frac{1}{x^2 - a^2} \, dx = \frac{1}{2a} \log \left| \frac{x-a}{x+a} \right| + C \)
  • \( \int \frac{1}{\sqrt{x^2 + a^2}} \, dx = \log |x + \sqrt{x^2 + a^2}| + C \)
  • \( \int \frac{1}{\sqrt{x^2 - a^2}} \, dx = \log |x + \sqrt{x^2 - a^2}| + C \)
  • \( \int \sqrt{a^2 - x^2} \, dx = \frac{x}{2} \sqrt{a^2 - x^2} + \frac{a^2}{2} \sin^{-1} \frac{x}{a} + C \)
  • \( \int \sqrt{x^2 + a^2} \, dx = \frac{x}{2} \sqrt{x^2 + a^2} + \frac{a^2}{2} \log |x + \sqrt{x^2 + a^2}| + C \)
  • \( \int \sqrt{x^2 - a^2} \, dx = \frac{x}{2} \sqrt{x^2 - a^2} - \frac{a^2}{2} \log |x + \sqrt{x^2 - a^2}| + C \)
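Any of the standard results above can be sanity-checked numerically. A minimal sketch (values chosen here for illustration) for \( \int \frac{dx}{a^2 + x^2} = \frac{1}{a} \tan^{-1} \frac{x}{a} + C \):

```python
import math

# Numerical check of the standard formula on [0, 1] with a = 2,
# using a midpoint Riemann sum to approximate the definite integral.
a = 2.0
n = 100_000
h = 1.0 / n
riemann = sum(h / (a**2 + ((k + 0.5) * h)**2) for k in range(n))
exact = (1 / a) * math.atan(1 / a)   # antiderivative evaluated from 0 to 1
print(riemann, exact)                # the two agree to many decimal places
```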

 

Integration by Parts

If \( u \) and \( v \) are two functions of \( x \), then the integral of their product is: (first function) \( \times \) (integral of the second function) minus the integral of [(derivative of the first function) \( \times \) (integral of the second function)].

Formula: \( \int u(x) \cdot v(x) \, dx = u(x) \int v(x) \, dx - \int \left[ \frac{d}{dx} u(x) \cdot \left( \int v(x) \, dx \right) \right] dx \)

Order of functions (ILATE):

  • I – inverse trigonometric function
  • L – Logarithmic function
  • A – Algebraic function
  • T – Trigonometric function
  • E – Exponential function

 

Integrals of the form \( \int e^x [f(x) + f'(x)] \, dx \)

  • Express the integral as sum of two integrals, one containing \( f(x) \) and other containing \( f'(x) \).
  • Evaluate the first integral by integration by parts by taking \( e^x \) as \( 2^{nd} \) function.
  • The \( 2^{nd} \) integral on the R.H.S. cancels with the \( 2^{nd} \) term obtained on evaluating the \( 1^{st} \) integral.
  • Result: \( \int e^x [f(x) + f'(x)] \, dx = e^x f(x) + C \)
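A quick numerical check of this result, taking \( f(x) = \sin x \) for illustration (so the integrand is \( e^x(\sin x + \cos x) \)):

```python
import math

# Verify  ∫ e^x [f(x) + f'(x)] dx = e^x f(x) + C  with f(x) = sin x,
# by comparing a midpoint Riemann sum on [0, 1] with e^x sin x at the limits.
n = 100_000
h = 1.0 / n

def integrand(x):
    return math.exp(x) * (math.sin(x) + math.cos(x))

riemann = sum(h * integrand((k + 0.5) * h) for k in range(n))  # midpoint rule
exact = math.e * math.sin(1) - math.exp(0) * math.sin(0)       # e^x sin x from 0 to 1
print(riemann, exact)
```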

 

XX. Integrals of the type \( \int e^{ax} \sin bx \, dx \) or \( \int e^{ax} \cos bx \, dx \)

  • Apply integration by parts twice, taking \( e^{ax} \) as the first function each time, and then solve the resulting equation for the required integral.

XXI. INTEGRATION OF SOME SPECIAL IRRATIONAL ALGEBRAIC FUNCTIONS

Integrals of the form \( \int \frac{\phi(x)}{P \sqrt{Q}} dx \)

  • \( \int \frac{1}{(ax+b)\sqrt{cx+d}} \, dx \): \( P \) and \( Q \) are both linear functions of \( x \), put \( Q = t^2 \) i.e., \( cx + d = t^2 \).
  • \( \int \frac{1}{(ax^2+bx+c)\sqrt{px+q}} \, dx \): \( P \) is a quadratic expression and \( Q \) is linear expression of \( x \), put \( Q = t^2 \) i.e., put \( px + q = t^2 \).
  • \( \int \frac{1}{(ax+b)\sqrt{px^2+qx+r}} \, dx \): \( P \) is a linear expression and \( Q \) is quadratic expression of \( x \), put \( P = \frac{1}{t} \), i.e., \( ax + b = \frac{1}{t} \).
  • \( \int \frac{1}{(ax^2+b)\sqrt{cx^2+d}} \, dx \): \( P \) and \( Q \) are pure quadratic expressions, put \( x = \frac{1}{t} \), to obtain \( \int \frac{-t \, dt}{(a+bt^2)\sqrt{c+dt^2}} \), then put \( c + dt^2 = u^2 \).
  • \( \int \frac{px+q}{(ax^2+b)\sqrt{cx^2+d}} \, dx \): \( P \) and \( Q \) are pure quadratic expressions and \( \phi(x) \) is linear; split the numerator and integrate \( \frac{px}{(ax^2+b)\sqrt{cx^2+d}} \) by putting \( cx^2 + d = t^2 \), and \( \frac{q}{(ax^2+b)\sqrt{cx^2+d}} \) as in the previous case by putting \( x = \frac{1}{t} \).

 

VIII. DEFINITE INTEGRAL:

  • The Fundamental Theorem of Calculus: Let \( f(x) \) be continuous on \( [a, b] \). If \( F(x) \) is any antiderivative of \( f(x) \), then \( \int_a^b f(x) \, dx = F(b) - F(a) \) where \( b \) is the upper limit and \( a \) is the lower limit.
  • Areas above and below a curve: If the graph of \( y = f(x) \), between \( x = a \) and \( x = b \), has portions above and portions below the X axis, then \( \int_a^b f(x) \, dx \) is the algebraic (signed) sum: portions above the X axis contribute positive area and portions below contribute negative area. The total geometric area is obtained by adding the absolute values of these pieces, i.e., by integrating \( |f(x)| \).
  • Mean Value Theorem (for definite integrals): If \( f \) is continuous on \( [a, b] \), then at some point \( c \) in \( [a, b] \), \( f(c) = \frac{1}{b-a} \int_a^b f(x) \, dx \)
  • Definite integral as the limit of a sum: sum the areas \( f(a + (k-1)h) \cdot h \) of the \( n \) strips between \( a \) and \( b \), where \( nh = b - a \); that is, \( \int_a^b f(x) \, dx = \lim_{h \to 0} h \sum_{k=1}^{n} f(a + (k-1)h) = \lim_{h \to 0} h[f(a) + f(a+h) + f(a+2h) + \dots + f(a + (n-1)h)] \)
  • Steps:
    1. Find \( nh = b - a \)
    2. Evaluate \( f(a), f(a+h), f(a+2h), \dots, f(a+(n-1)h) \) and set pattern in terms of \( h, h^2, h^3 \) etc.
    3. Use the limit formula.
    4. After combining terms, apply summation formulas:
      • \( 1 + 2 + 3 + \dots + (n-1) = \frac{(n-1)n}{2} \)
      • \( 1^2 + 2^2 + 3^2 + \dots + (n-1)^2 = \frac{(n-1)n(2n-1)}{6} \)
      • \( 1^3 + 2^3 + 3^3 + \dots + (n-1)^3 = \frac{(n-1)^2 n^2}{4} \)
      • \( a + ar + ar^2 + \dots + ar^{n-1} = a \frac{r^n-1}{r-1}, \ r \neq 1 \)
      • \( \sin a + \sin(a+h) + \dots + \sin\{a+(n-1)h\} = \frac{\sin \{a + (\frac{n-1}{2})h\} \sin(\frac{nh}{2})}{\sin(\frac{h}{2})} \)
      • \( \cos a + \cos(a+h) + \dots + \cos\{a+(n-1)h\} = \frac{\cos \{a + (\frac{n-1}{2})h\} \sin(\frac{nh}{2})}{\sin(\frac{h}{2})} \)
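The steps above can be mirrored numerically; a minimal sketch (the integral \( \int_1^3 x^2 \, dx = \frac{26}{3} \) is chosen here for illustration):

```python
# Limit-of-a-sum evaluation of ∫ from 1 to 3 of x^2 dx,
# with nh = b - a and the sum h[f(a) + f(a+h) + ... + f(a+(n-1)h)].
a, b = 1.0, 3.0
n = 1_000_000
h = (b - a) / n                                   # step 1: nh = b - a
s = h * sum((a + k * h)**2 for k in range(n))     # steps 2-3: form and sum the strips
exact = (b**3 - a**3) / 3                         # = 26/3 by the antiderivative x^3/3
print(s, exact)
```

As \( n \) grows (so \( h \to 0 \)), the sum approaches the exact value.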

 

Properties of the Definite Integral

If \( f(x) \) and \( g(x) \) are defined and continuous on \( [a, b] \):

  • (i) \( \int_a^b [f(x) \pm g(x)] \, dx = \int_a^b f(x) \, dx \pm \int_a^b g(x) \, dx \)
  • (ii) \( \int_a^b \alpha f(x) \, dx = \alpha \int_a^b f(x) \, dx \)
  • (iii) \( \int_a^a f(x) \, dx = 0 \)
  • \( P_0 \): \( \int_a^b f(x) \, dx = \int_a^b f(t) \, dt \) (Value remains unchanged if variable is changed)
  • \( P_1 \): \( \int_a^b f(x) \, dx = -\int_b^a f(x) \, dx \) (Sign changes if limits are interchanged)
  • \( P_2 \): \( \int_a^b f(x) \, dx = \int_a^c f(x) \, dx + \int_c^b f(x) \, dx \), where \( a < c < b \)
  • \( P_3 \): \( \int_a^b f(x) \, dx = \int_a^b f(a + b - x) \, dx \)
  • \( P_4 \): \( \int_0^a f(x) \, dx = \int_0^a f(a - x) \, dx \)
  • \( P_5 \): \( \int_0^{2a} f(x) \, dx = \int_0^a f(x) \, dx + \int_0^a f(2a - x) \, dx \)
  • \( P_6 \): \( \int_0^{2a} f(x) \, dx = \begin{cases} 2 \int_0^a f(x) \, dx, & \text{if } f(2a - x) = f(x) \\ 0, & \text{if } f(2a - x) = -f(x) \end{cases} \)
  • \( P_7 \): \( \int_{-a}^a f(x) \, dx = \begin{cases} 2 \int_0^a f(x) \, dx, & \text{if } f(-x) = f(x) \\ 0, & \text{if } f(-x) = -f(x) \end{cases} \)
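Property \( P_4 \) can be seen at work on the classic integral \( I = \int_0^{\pi/2} \frac{\sin x}{\sin x + \cos x} \, dx \): replacing \( x \) by \( \frac{\pi}{2} - x \) turns \( \sin \) into \( \cos \) in the numerator while leaving the denominator unchanged, so adding the two forms gives \( 2I = \int_0^{\pi/2} dx \), hence \( I = \frac{\pi}{4} \). A numerical sketch of this (example chosen for illustration):

```python
import math

# Midpoint Riemann sum for I = integral from 0 to pi/2 of sin x / (sin x + cos x);
# by property P4 the exact value is pi/4.
n = 200_000
a = math.pi / 2
h = a / n

def f(x):
    return math.sin(x) / (math.sin(x) + math.cos(x))

I = sum(h * f((k + 0.5) * h) for k in range(n))   # midpoint Riemann sum
print(I, math.pi / 4)
```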

 

IX. AREA UNDER THE BOUNDED REGION

  • Area of the region bounded by the curve \( y = f(x) \), the x-axis and ordinates \( x = a \) and \( x = b \) is \( \int_a^b y \, dx = \int_a^b f(x) \, dx \)
  • Area of the region bounded by the curve \( x = f(y) \), the y-axis and the lines \( y = a \) and \( y = b \) is \( \int_a^b x \, dy = \int_a^b f(y) \, dy \)
  • If \( y = f_1(x) \) and \( y = f_2(x) \) are two curves intersecting at points \( (a, b) \) and \( (c, d) \) then the area enclosed between the curves is given by \( \int_a^c (y_{\text{upper curve}} - y_{\text{lower curve}}) \, dx \).
  • If \( x = f_1(y) \) and \( x = f_2(y) \) are two curves intersecting at points \( (a, b) \) and \( (c, d) \) then the area enclosed between the curves is given by \( \int_b^d (x_{\text{right curve}} - x_{\text{left curve}}) \, dy \).
  • WORKING RULE-
    1. Trace the graph of the curves.
    2. Find the points of intersection.
    3. Express \( y \) in terms of \( x \) (or \( x \) in terms of \( y \)).
    4. Form the definite integral.
    5. Evaluate.
    6. Write answer in sq. units.
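The working rule on a standard example (chosen here for illustration, not from the notes): the region enclosed by \( y = x^2 \) and \( y = x \). The curves intersect at \( (0, 0) \) and \( (1, 1) \); on \( [0, 1] \) the line is the upper curve, so the area is \( \int_0^1 (x - x^2) \, dx = \frac{1}{2} - \frac{1}{3} = \frac{1}{6} \) sq. units.

```python
# Steps 4-5 of the working rule done numerically for y = x and y = x^2:
# area = integral from 0 to 1 of (upper - lower) dx, via a midpoint sum.
def upper_minus_lower(x):
    return x - x**2        # (line) - (parabola) on [0, 1]

n = 100_000
h = 1.0 / n
area = sum(h * upper_minus_lower((k + 0.5) * h) for k in range(n))
print(area)   # approx 0.16666..., i.e. 1/6 sq. units
```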

 

MATRICES AND DETERMINANTS

  • DEFINITION: A matrix \( A = [a_{ij}]_{m \times n} \) is defined as an ordered rectangular array of numbers in \( m \) rows and \( n \) columns.
  • 1. ROW MATRIX: A matrix having a single row \( A = [a_{11} \ a_{12} \ \dots \ a_{1n}] \)
  • 2. COLUMN MATRIX: A matrix having a single column.
  • 3. ZERO or NULL MATRIX: A matrix is called the zero or null matrix if all the entries are 0.
  • 4. SQUARE MATRIX: A matrix for which horizontal and vertical dimensions are the same.
  • 5. DIAGONAL MATRIX: A square matrix where \( a_{ij} = 0 \) for \( i \neq j \).
  • 6. SCALAR MATRIX: A diagonal matrix where all diagonal elements are equal.
  • 7. IDENTITY MATRIX: A diagonal matrix where \( a_{ij} = 1 \) for \( i = j \).
  • 8. UPPER TRIANGULAR MATRIX: A square matrix where \( a_{ij} = 0 \) for \( i > j \).
  • 9. LOWER TRIANGULAR MATRIX: A square matrix where \( a_{ij} = 0 \) for \( i < j \).

 

MATRIX OPERATIONS

  • Addition/Subtraction: Two matrices \( A \) and \( B \) can be added or subtracted if and only if their dimensions are the same. Add/subtract corresponding elements.
  • Matrix addition properties:
    • \( A + B = B + A \) (Commutative)
    • \( A + (B + C) = (A + B) + C \) (Associative)
    • \( A(B + C) = AB + AC \) (Multiplication is distributive over addition)
  • Equal matrices: Same order and all corresponding elements are equal.
  • Scalar multiplication: \( kA = [k \cdot a_{ij}] \).
    • \( k(A + B) = kA + kB \)
    • \( (m + n)A = mA + nA \)
    • \( (mk)A = m(kA) = k(mA) \)
  • Matrix Multiplication: Defined if the number of columns of the first equals the number of rows of the second.
    • Properties: \( AB \neq BA \) (Non-commutative), \( A(BC) = (AB)C \) (Associative), \( AI_n = A = I_n A \) (Identity).
    • If \( AB = 0 \), it does not necessarily mean \( A=0 \) or \( B=0 \).
    • If \( AB = AC \), it does not necessarily mean \( B=C \).
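A minimal pure-Python illustration (matrices chosen here as an example) of non-commutativity and of \( AB = 0 \) with non-zero factors:

```python
# 2x2 example: AB is the zero matrix even though A != 0 and B != 0,
# and BA != AB, so matrix multiplication is not commutative.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1],
     [0, 2]]
B = [[3, 5],
     [0, 0]]
print(matmul(A, B))   # [[0, 0], [0, 0]], yet neither A nor B is zero
print(matmul(B, A))   # [[0, 13], [0, 0]], so AB != BA
```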

 

Transpose of Matrices

  • Found by exchanging rows for columns. Transpose of \( A \) is \( A^T \).
  • Properties: \( (A^T)^T = A \), \( (A + B)^T = A^T + B^T \), \( (kA)^T = k A^T \), \( (AB)^T = B^T A^T \).
  • Symmetric matrix: \( A = A^T \) i.e., \( a_{ij} = a_{ji} \).
  • Skew-symmetric matrix: \( A^T = -A \) i.e., \( a_{ji} = -a_{ij} \). Diagonal elements are 0.
  • Every square matrix \( A \) can be expressed as a sum of symmetric \( P = \frac{A + A^T}{2} \) and skew-symmetric \( Q = \frac{A - A^T}{2} \).
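The decomposition in the last point can be checked directly (sample matrix chosen for illustration):

```python
# Split a 3x3 matrix A into its symmetric part P = (A + A^T)/2 and
# skew-symmetric part Q = (A - A^T)/2, then verify A = P + Q.
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
n = len(A)
T = [[A[j][i] for j in range(n)] for i in range(n)]                  # transpose
P = [[(A[i][j] + T[i][j]) / 2 for j in range(n)] for i in range(n)]
Q = [[(A[i][j] - T[i][j]) / 2 for j in range(n)] for i in range(n)]

assert all(P[i][j] == P[j][i] for i in range(n) for j in range(n))   # P symmetric
assert all(Q[i][j] == -Q[j][i] for i in range(n) for j in range(n))  # Q skew-symmetric
assert all(P[i][j] + Q[i][j] == A[i][j] for i in range(n) for j in range(n))
print("A = P + Q verified")
```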

 

11. The Determinant of a Matrix

  • MINOR: \( M_{ij} \) is the determinant of the \( (n-1) \times (n-1) \) matrix obtained by deleting row \( i \) and column \( j \).
  • COFACTOR: \( A_{ij} = (-1)^{i+j} M_{ij} \).
  • ADJOINT (OR ADJUGATE): Transpose of the matrix of cofactors. \( \text{adj}(A) = [A_{ij}]^T \).
  • Properties: \( A \cdot \text{adj}(A) = \text{adj}(A) \cdot A = |A|I \). \( |\text{adj}(A)| = |A|^{n-1} \).
  • Singular matrix: \( |A| = 0 \). Non-Singular matrix: \( |A| \neq 0 \).
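For a \( 2 \times 2 \) matrix \( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \), the adjoint is \( \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} \); the property \( A \cdot \text{adj}(A) = |A| I \) can be verified directly (entries chosen for illustration):

```python
# Verify A . adj(A) = |A| . I for a sample 2x2 matrix.
a, b, c, d = 2, 3, 1, 4
A   = [[a, b], [c, d]]
adj = [[d, -b], [-c, a]]            # adjoint of a 2x2 matrix
det = a * d - b * c                 # |A| = 5, so A is non-singular
prod = [[sum(A[i][k] * adj[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod, det)   # [[5, 0], [0, 5]] 5, i.e. A.adj(A) = |A|.I
```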

 

12. THE PROPERTIES OF DETERMINANTS

  • \( P_1 \): Value remains unchanged if rows and columns are interchanged. \( |A| = |A^T| \).
  • \( P_2 \): Interchanging two rows/columns changes the sign.
  • \( P_3 \): Two identical rows/columns \( \implies \) value is 0.
  • \( P_4 \): Multiplying a row/column by a scalar \( k \) multiplies the value by \( k \).
  • \( P_5 \): Proportional rows/columns \( \implies \) value is 0.
  • \( P_6 \): Adding equimultiples of one row/column to another does not change the value.
  • \( P_7 \): \( |AB| = |A| |B| \).
  • \( P_8 \): \( |kA| = k^n |A| \) where \( n \) is the order.

 

13. APPLICATION OF DETERMINANT

  • Area of triangle with vertices \( (x_1, y_1), (x_2, y_2), (x_3, y_3) \) is \( \Delta = \frac{1}{2} \left| \begin{matrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{matrix} \right| \). Since area is always positive, take the absolute value of the determinant.
  • Condition of collinearity: Area \( \Delta = 0 \).
  • Equation of line passing through \( (x_1, y_1) \) and \( (x_2, y_2) \) is \( \left| \begin{matrix} x & y & 1 \\ x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \end{matrix} \right| = 0 \).
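The determinant formula in a short sketch (vertices chosen for illustration; the first triangle has base 5 and height 3, so its area should be 7.5):

```python
# Area of a triangle via the 3x3 determinant, expanded along the first row;
# a zero determinant detects collinear points.
def tri_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    return abs(det) / 2

print(tri_area((1, 0), (6, 0), (4, 3)))   # 7.5
print(tri_area((0, 0), (1, 1), (2, 2)))   # 0.0, so the points are collinear
```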

 

14. The Inverse of a Matrix

  • If \( |A| \neq 0 \), then \( A^{-1} = \frac{\text{adj } A}{|A|} \).
  • \( (AB)^{-1} = B^{-1} A^{-1} \).
  • \( (ABC)^{-1} = C^{-1} B^{-1} A^{-1} \).
  • \( (A^T)^{-1} = (A^{-1})^T \).

 

15. SOLVING SYSTEMS OF EQUATIONS USING INVERSE MATRIX METHOD

System \( AX = B \implies X = A^{-1} B \).

  • Cases:
    1. If \( |A| \neq 0 \), consistent with unique solution.
    2. If \( |A| = 0 \) and \( (\text{adj } A) \cdot B \neq 0 \), inconsistent (no solution).
    3. If \( |A| = 0 \) and \( (\text{adj } A) \cdot B = 0 \), consistent with infinitely many solutions (to describe them, put \( z = k \) and solve for the remaining variables in terms of \( k \)).
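A two-variable sketch of the inverse matrix method (system chosen here for illustration): solve \( 2x + 3y = 8 \), \( x + 2y = 5 \).

```python
# Solve AX = B via X = A^{-1} B, using A^{-1} = adj(A)/|A| for a 2x2 matrix.
a, b, c, d = 2, 3, 1, 2          # A = [[2, 3], [1, 2]]
B = [8, 5]
det = a * d - b * c              # |A| = 1 != 0, so a unique solution exists
inv = [[ d / det, -b / det],
       [-c / det,  a / det]]     # adj(A) / |A|
x = inv[0][0] * B[0] + inv[0][1] * B[1]
y = inv[1][0] * B[0] + inv[1][1] * B[1]
print(x, y)   # 1.0 2.0, and indeed 2(1) + 3(2) = 8 and 1 + 2(2) = 5
```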

 

DIFFERENTIAL EQUATIONS

  • A first order linear differential equation: \( \frac{dy}{dx} + P(x)y = Q(x) \).
  • Integrating Factor (I.F.): \( e^{\int P(x) dx} \).
  • General Solution: \( y \cdot (I.F.) = \int Q(x) \cdot (I.F.) \, dx + C \).
  • Variable Separable Form: \( \frac{dy}{dx} = h(x)g(y) \implies \int \frac{dy}{g(y)} = \int h(x) \, dx \).
  • Homogeneous Differential Equation: \( \frac{dy}{dx} = f(\frac{y}{x}) \). Solve by putting \( y = vx \).
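A worked sketch of the integrating factor method (equation chosen for illustration): for \( \frac{dy}{dx} + y = x \) we have \( P(x) = 1 \), so I.F. \( = e^x \) and \( y e^x = \int x e^x \, dx + C = (x-1)e^x + C \), i.e. \( y = (x - 1) + C e^{-x} \). The code checks the ODE numerically:

```python
import math

# Verify that y = (x - 1) + C e^{-x} satisfies dy/dx + y = x,
# using a central-difference approximation for dy/dx.
C = 2.0

def y(x):
    return (x - 1) + C * math.exp(-x)

h = 1e-6
for x0 in (0.0, 0.5, 1.7):
    dydx = (y(x0 + h) - y(x0 - h)) / (2 * h)   # approx y'(x0)
    residual = dydx + y(x0) - x0               # should be approx 0
    print(x0, residual)
```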

 

VECTORS

  • UNIT VECTOR: \( \hat{a} = \frac{\vec{a}}{|\vec{a}|} \).
  • Direction Cosines: \( l = \frac{a_1}{\sqrt{a_1^2+a_2^2+a_3^2}}, m = \frac{a_2}{\sqrt{a_1^2+a_2^2+a_3^2}}, n = \frac{a_3}{\sqrt{a_1^2+a_2^2+a_3^2}} \).
  • Scalar Product (Dot Product): \( \vec{a} \cdot \vec{b} = |\vec{a}| |\vec{b}| \cos \theta \).
    • \( \vec{a} \cdot \vec{b} = a_1b_1 + a_2b_2 + a_3b_3 \).
    • Projection of \( \vec{a} \) on \( \vec{b} = \frac{\vec{a} \cdot \vec{b}}{|\vec{b}|} \).
  • Vector Product (Cross Product): \( \vec{a} \times \vec{b} = |\vec{a}| |\vec{b}| \sin \theta \hat{n} = \left| \begin{matrix} \hat{i} & \hat{j} & \hat{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{matrix} \right| \).
    • Area of parallelogram \( = |\vec{a} \times \vec{b}| \).
    • Area of triangle \( = \frac{1}{2} |\vec{a} \times \vec{b}| \).
  • Scalar Triple Product: \( [\vec{a} \ \vec{b} \ \vec{c}] = \vec{a} \cdot (\vec{b} \times \vec{c}) = \left| \begin{matrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{matrix} \right| \).
    • Volume of parallelepiped \( = |[\vec{a} \ \vec{b} \ \vec{c}]| \).
    • Coplanarity condition: \( [\vec{a} \ \vec{b} \ \vec{c}] = 0 \).
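The product formulas above in a short pure-Python sketch (vectors \( \vec{a} = \hat{i}, \vec{b} = 2\hat{j}, \vec{c} = 3\hat{k} \) chosen for illustration):

```python
# Cross product, parallelogram area and scalar triple product by components.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a, b, c = (1, 0, 0), (0, 2, 0), (0, 0, 3)
axb = cross(a, b)                          # (0, 0, 2)
area_parallelogram = dot(axb, axb) ** 0.5  # |a x b| = 2
box = dot(a, cross(b, c))                  # scalar triple product [a b c] = 6
print(axb, area_parallelogram, box)        # volume of the box = |6| = 6
```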

 

THREE DIMENSIONAL GEOMETRY

  • Distance between \( P_1(x_1, y_1, z_1) \) and \( P_2(x_2, y_2, z_2) \) is \( d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2} \).
  • Section formula (Internal): \( x = \frac{mx_2+nx_1}{m+n}, y = \frac{my_2+ny_1}{m+n}, z = \frac{mz_2+nz_1}{m+n} \).
  • Angle between lines: \( \cos \theta = |l_1l_2 + m_1m_2 + n_1n_2| \).
    • Perpendicular lines: \( a_1a_2 + b_1b_2 + c_1c_2 = 0 \).
    • Parallel lines: \( \frac{a_1}{a_2} = \frac{b_1}{b_2} = \frac{c_1}{c_2} \).
  • Equation of line:
    • Vector: \( \vec{r} = \vec{a} + \lambda \vec{b} \). Cartesian: \( \frac{x-x_1}{a} = \frac{y-y_1}{b} = \frac{z-z_1}{c} \).
  • Shortest distance between skew lines: \( d = \frac{|(\vec{b}_1 \times \vec{b}_2) \cdot (\vec{a}_2 - \vec{a}_1)|}{|\vec{b}_1 \times \vec{b}_2|} \).
  • Equation of plane:
    • Normal form: \( \vec{r} \cdot \hat{n} = d \) or \( lx + my + nz = d \).
    • Intercept form: \( \frac{x}{a} + \frac{y}{b} + \frac{z}{c} = 1 \).
  • Distance of point \( (x_1, y_1, z_1) \) from plane \( ax + by + cz + d = 0 \) is \( \frac{|ax_1 + by_1 + cz_1 + d|}{\sqrt{a^2+b^2+c^2}} \).
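The skew-lines distance formula on a transparent example (lines chosen for illustration): the x-axis, and a line parallel to the y-axis passing through \( (0, 0, 5) \), should be at distance 5.

```python
# Shortest distance between r = a1 + s b1 and r = a2 + t b2
# via d = |(b1 x b2) . (a2 - a1)| / |b1 x b2|.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a1, b1 = (0, 0, 0), (1, 0, 0)    # the x-axis
a2, b2 = (0, 0, 5), (0, 1, 0)    # parallel to the y-axis, at height 5
n = cross(b1, b2)                # normal to both direction vectors
diff = tuple(p - q for p, q in zip(a2, a1))
d = abs(dot(n, diff)) / dot(n, n) ** 0.5
print(d)   # 5.0
```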

 

PROBABILITY THEORY

  • Conditional Probability: \( P(B|A) = \frac{P(A \cap B)}{P(A)} \).
  • Total Probability Theorem: \( P(B) = \sum_{i=1}^{n} P(A_i)P(B|A_i) \).
  • Bayes' Theorem: \( P(A_i|B) = \frac{P(A_i)P(B|A_i)}{\sum_{j=1}^{n} P(A_j)P(B|A_j)} \).
  • Binomial Distribution: \( P(X = r) = {}^n C_r p^r q^{n-r} \).
    • Expectation (Mean) \( E(X) = np \).
    • Variance \( \text{Var}(X) = npq \).
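Total probability and Bayes' theorem in a short sketch, using hypothetical numbers chosen for illustration: machine \( A_1 \) produces 60% of the items with 2% defective, machine \( A_2 \) produces 40% with 5% defective, and \( B \) is the event "a randomly chosen item is defective".

```python
# Total probability: P(B) = sum of P(A_i) P(B|A_i);
# Bayes: P(A_1|B) = P(A_1) P(B|A_1) / P(B).
priors = {"A1": 0.60, "A2": 0.40}   # P(A_i)
defect = {"A1": 0.02, "A2": 0.05}   # P(B | A_i)

p_B = sum(priors[m] * defect[m] for m in priors)     # total probability theorem
posterior_A1 = priors["A1"] * defect["A1"] / p_B     # Bayes' theorem
print(p_B, posterior_A1)   # approx 0.032 and 0.375
```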

 

LINEAR PROGRAMMING

A mathematical model that asks us to optimise (minimise or maximise) an objective function \( Z \) subject to certain conditions on the variables is called a linear programming problem (LPP).

The standard form of the linear programming problem is used to develop the procedure for solving a general programming problem.

 

1. A general LPP is of the form

Max (or min) \( Z = c_1x_1 + c_2x_2 + \dots + c_nx_n \)

subject to the constraints

\( a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n \leq \) or \( = \) or \( \geq b_1 \)

\( a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n \leq \) or \( = \) or \( \geq b_2 \)

\( \vdots \)

\( a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \dots + a_{mn}x_n \leq \) or \( = \) or \( \geq b_m \)

\( x_1 \geq 0, x_2 \geq 0, \dots, x_n \geq 0 \) are called the non-negativity conditions.

\( x_1, x_2, \dots, x_n \) are called decision variables.

\( c_1, c_2, \dots, c_n, a_{11}, a_{12}, \dots, a_{mn} \) are all known constants.

\( Z \) is called the "objective function" of the LPP of \( n \) variables which is to be maximized or minimized.

  • OBJECTIVE FUNCTION: The objective function is a linear function of the variables which is to be optimised, i.e., maximised or minimised, e.g., a profit function or a cost function.
  • CONSTRAINTS: Limited time, labour, resources etc. may be expressed as linear inequations or equations and are called constraints.
  • OPTIMISATION: A decision which is considered the best one, taking into consideration all the circumstances is called an optimal decision. The process of getting the best possible outcome is called optimisation.
  • SOLUTION OF A LPP: A set of values of the variables \( x_1, x_2, \dots, x_n \) which satisfy all the constraints is called the solution of the LPP.
  • FEASIBLE SOLUTION: A set of values of the variables \( x_1, x_2, x_3, \dots, x_n \) which satisfy all the constraints and also the non-negativity conditions is called the feasible solution of the LPP.
  • OPTIMAL SOLUTION: The feasible solution, which optimises (i.e., maximizes or minimizes as the case may be) the objective function is called the optimal solution.

 

2. Mathematical Formulation of Linear Programming Problems

There are mainly four steps in the mathematical formulation of a linear programming problem as a mathematical model. We will discuss the formulation of problems which involve only two variables.

  1. Identify the decision variables and assign symbols \( x \) and \( y \) to them. These decision variables are those quantities whose values we wish to determine.
  2. Identify the set of constraints and express them as linear equations/inequations in terms of the decision variables. These constraints are the given conditions.
  3. Identify the objective function and express it as a linear function of decision variables. It might take the form of maximizing profit or production or minimizing cost.
  4. Add the non-negativity restrictions on the decision variables, as in the physical problems, negative values of decision variables have no valid interpretation.

 

3. Graphical Method of Solution of a Linear Programming Problem

The graphical method is applicable when the LPP involves two decision variables \( x \) and \( y \). Suppose the LPP is to optimise \( Z = ax + by \) subject to the given constraints.

Solving an LPP by the graphical method involves two major steps.

  • The determination of the solution space that defines the feasible region. To determine the feasible region of an LPP, we have the following steps.

Step 1: Since the two decision variables \( x \) and \( y \) are non-negative, consider only the first quadrant of the \( xy \)-plane.

Step 2: Each constraint is of the form \( ax + by \leq c \) or \( ax + by \geq c \). Draw the line \( ax + by = c \). For each constraint, this line divides the first quadrant into two regions, say \( R_1 \) and \( R_2 \). Take a test point \( (x_1, y_1) \) in \( R_1 \): if it satisfies the inequation, shade the region \( R_1 \); otherwise shade the region \( R_2 \). Usually we take \( (x_1, y_1) = (0, 0) \) when the line does not pass through the origin.

Step 3: Corresponding to each constraint, we obtain a shaded region. The intersection of all these shaded regions is the feasible region.

  • The determination of the optimal solution from the feasible region.

There are two techniques to find the optimal solution of an LPP: the Corner Point Method and the ISO-PROFIT (or ISO-COST) method.

 

Corner Point Method

The optimal solution to an LPP, if it exists, occurs at a corner (vertex) of the feasible region.

The method includes the following steps:

Step 1: Find the feasible region of the LPP.

Step 2: Find the co-ordinates of each vertex of the feasible region. These co-ordinates can be obtained from the graph or by solving the equation of the lines.

Step 3: At each vertex (corner point) compute the value of the objective function.

Step 4: Identify the corner point at which the value of the objective function is maximum (or minimum, depending on the LPP). The co-ordinates of this vertex give the optimal solution, and the corresponding value of \( Z \) is the optimal value.
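The four steps on a small LPP (example chosen here for illustration, not from the notes): maximise \( Z = 3x + 4y \) subject to \( x + y \leq 4 \), \( x \geq 0 \), \( y \geq 0 \). The feasible region is the triangle with vertices \( (0,0), (4,0), (0,4) \).

```python
# Corner point method: evaluate Z at each vertex of the feasible region
# and pick the vertex giving the maximum.
Z = lambda x, y: 3 * x + 4 * y
corners = [(0, 0), (4, 0), (0, 4)]            # step 2: vertices of the region
values = {pt: Z(*pt) for pt in corners}       # step 3: Z at each corner
best = max(values, key=values.get)            # step 4: corner giving max Z
print(best, values[best])   # (0, 4) 16
```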

 

ISO-PROFIT (OR ISO-COST) Method

This method of optimisation involves the following steps:

Step 1: Draw the half planes of all the constraints.

Step 2: Shade the intersection of all the half planes which is the feasible region.

Step 3: Since the objective function is \( Z = ax + by \), draw a dotted line for the equation \( ax + by = k \), where \( k \) is any constant. Sometimes it is convenient to take \( k \) as the LCM of \( a \) and \( b \).

Step 4: To maximize \( Z \) draw a line parallel to \( ax + by = k \) and farthest from the origin. This line should contain at least one point of the feasible region. Find the coordinates of this point by solving the equations of the lines on which it lies.

To minimize \( Z \) draw a line parallel to \( ax + by = k \) and nearest to the origin. This line should contain at least one point of the feasible region. Find the co-ordinates of this point by solving the equation of the line on which it lies.

Step 5: If \( (x_1, y_1) \) is the point found in step 4, then \( x = x_1, y = y_1 \), is the optimal solution of the LPP and \( Z = ax_1 + by_1 \) is the optimal value.

  • We may come across an LPP which has no feasible solution (an infeasible LPP): if the intersection of the constraint regions is empty, the feasible region does not exist and the given LPP has no solution.
  • We may come across an LPP which has an unbounded solution: if the feasible region is an unbounded convex region, then \( M \) is the maximum value of \( Z \) provided the open half plane determined by \( ax + by > M \) has no point in common with the feasible region; otherwise \( Z \) has no maximum value. Similarly, \( m \) is the minimum value of \( Z \) provided the open half plane determined by \( ax + by < m \) has no point in common with the feasible region; otherwise \( Z \) has no minimum value.

 

THREE DIMENSIONAL GEOMETRY

  • Points are defined as ordered triples of real numbers and the distance between points \( P_1 = (x_1, y_1, z_1) \) and \( P_2 = (x_2, y_2, z_2) \) is defined by the formula: \( P_1P_2 = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2} \).
  • Distance of the point \( P(x, y, z) \) from the origin is \( \sqrt{x^2 + y^2 + z^2} \).
  • Section formula - the coordinate of a point \( R \) dividing the line segment \( PQ \) joining \( P(\vec{a})(x_1, y_1, z_1) \) and \( Q(\vec{b})(x_2, y_2, z_2) \) in the ratio \( m : n \) is given by:
    • Internal division: Vector form \( \vec{r} = \frac{m\vec{b} + n\vec{a}}{m + n} \), Cartesian form \( x = \frac{mx_2 + nx_1}{m+n}, y = \frac{my_2 + ny_1}{m+n}, z = \frac{mz_2 + nz_1}{m+n} \)
    • External division: Vector form \( \vec{r} = \frac{m\vec{b} - n\vec{a}}{m - n} \), Cartesian form \( x = \frac{mx_2 - nx_1}{m-n}, y = \frac{my_2 - ny_1}{m-n}, z = \frac{mz_2 - nz_1}{m-n} \)
  • Midpoint formula: Vector form \( \vec{r} = \frac{\vec{a} + \vec{b}}{2} \), Cartesian form \( x = \frac{x_1 + x_2}{2}, y = \frac{y_1 + y_2}{2}, z = \frac{z_1 + z_2}{2} \)
  • Position vector of centroid of a triangle with vertices \( A(\vec{a}), B(\vec{b}) \) and \( C(\vec{c}) \) is given by \( \frac{\vec{a} + \vec{b} + \vec{c}}{3} \).
  • DIRECTION COSINES OF A LINE: If \( \alpha, \beta, \gamma \) are the angles which a given directed line makes with the positive directions of the co-ordinate axes, then \( \cos \alpha, \cos \beta, \cos \gamma \) are called the direction cosines of the line and are generally denoted by \( l, m, n \) respectively. Thus \( l = \cos \alpha, m = \cos \beta, n = \cos \gamma \), and \( l^2 + m^2 + n^2 = 1 \).
  • Direction Ratios: If \( a, b, c \) are three numbers proportional to the direction cosine \( l, m, n \) of a straight line, then \( a, b, c \) are called its direction ratios. They are also called direction numbers or direction components. \( l = \pm \frac{a}{\sqrt{\sum a^2}} \), Similarly \( m = \pm \frac{b}{\sqrt{\sum a^2}} \) and \( n = \pm \frac{c}{\sqrt{\sum a^2}} \).
  • Direction cosines of the line joining two given points \( P(x_1, y_1, z_1) \) and \( Q(x_2, y_2, z_2) \):
    • \( l = \frac{x_2 - x_1}{PQ}, \ m = \frac{y_2 - y_1}{PQ}, \ n = \frac{z_2 - z_1}{PQ} \), where \( PQ = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2} \)
  • Angle between two Lines: Let \( \theta \) be the angle between two straight lines \( AB \) and \( AC \) whose direction cosines are \( l_1, m_1, n_1 \) and \( l_2, m_2, n_2 \), then \( \cos \theta = |l_1l_2 + m_1m_2 + n_1n_2| \). If direction ratios of two lines are \( a_1, b_1, c_1 \) and \( a_2, b_2, c_2 \), then angle between two lines is given by \( \cos \theta = \frac{a_1a_2 + b_1b_2 + c_1c_2}{\sqrt{a_1^2 + b_1^2 + c_1^2} \cdot \sqrt{a_2^2 + b_2^2 + c_2^2}} \).
    • Condition of perpendicularity: If the given lines are perpendicular, then \( \theta = 90^\circ \) i.e. \( \cos \theta = 0 \implies l_1l_2 + m_1m_2 + n_1n_2 = 0 \) or \( a_1a_2 + b_1b_2 + c_1c_2 = 0 \).
    • Condition of parallelism: If the given lines are parallel, then \( \theta = 0^\circ \), \( \frac{l_1}{l_2} = \frac{m_1}{m_2} = \frac{n_1}{n_2} \) or \( \frac{a_1}{a_2} = \frac{b_1}{b_2} = \frac{c_1}{c_2} \).
  • Projection of the line joining two points \( P(x_1, y_1, z_1) \) and \( Q(x_2, y_2, z_2) \) on another line whose direction cosines are \( l, m, n \) is \( l(x_2 - x_1) + m(y_2 - y_1) + n(z_2 - z_1) \).

 

PROBABILITY THEORY

  • An experiment is a situation involving chance or probability that leads to results called outcomes.
  • An outcome is the result of a single trial of an experiment.
  • An event is one or more outcomes of an experiment.
  • The sample space of an experiment is the set of all possible outcomes of that experiment.
  • Probability is the measure of how likely an event is.
  • The probability of event \( A \) is the number of ways event \( A \) can occur divided by the total number of possible outcomes: \( P(A) = \frac{\text{number of ways event } A \text{ can occur}}{\text{total number of possible outcomes}} \).
  • If \( P(A) > P(B) \) then event \( A \) is more likely to occur than event \( B \).
  • If \( P(A) = P(B) \) then events \( A \) and \( B \) are equally likely to occur.
  • If event \( A \) is impossible, then \( P(A) = 0 \).
  • If event \( A \) is certain, then \( P(A) = 1 \).
  • The complement of event \( A \) is \( \bar{A} \). \( P(\bar{A}) = 1 - P(A) \).
  • The sum of the probabilities of the distinct outcomes within a sample space is 1.
  • Two events are mutually exclusive if they cannot occur at the same time (i.e., they have no outcomes in common).
  • Two events, \( A \) and \( B \), are independent if the fact that \( A \) occurs does not affect the probability of \( B \) occurring.
  • Two events are dependent if the outcome or occurrence of the first affects the outcome or occurrence of the second so that the probability is changed.

 

The conditional probability

\( P(B|A) \) of an event \( B \) in relationship to an event \( A \) is the probability that event \( B \) occurs given that event \( A \) has already occurred. The formula for conditional probability is \( P(B|A) = \frac{P(A \text{ and } B)}{P(A)} \), provided that \( P(A) \neq 0 \).

  • Addition Rule 1: When two events \( A \) and \( B \) are mutually exclusive, the probability that \( A \) or \( B \) will occur is the sum of the probabilities of each event: \( P(A \text{ or } B) = P(A) + P(B) \).
  • Addition Rule 2: When two events \( A \) and \( B \) are not mutually exclusive, \( P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B) \).
  • Addition Rule 3: When two events \( A \) and \( B \) are independent, the probability that \( A \) or \( B \) will occur is \( P(A \text{ or } B) = P(A) + P(B) - P(A) \cdot P(B) \).
  • Multiplication Rule 1: When two events, \( A \) and \( B \), are independent, the probability of both occurring is: \( P(A \text{ and } B) = P(A) \cdot P(B) \).
  • Multiplication Rule 2: When two events, \( A \) and \( B \), are dependent, the probability of both occurring is: \( P(A \text{ and } B) = P(A) \cdot P(B|A) \).
  • Total Probability Theorem: Let \( A_1, A_2, \dots, A_n \) be a set of mutually exclusive events that together form the sample space \( S \). Let \( B \) be any event from the same sample space, such that \( P(B) > 0 \). Then, \( P(B) = P(A_1) \cdot P(B|A_1) + P(A_2) \cdot P(B|A_2) + \dots + P(A_n) \cdot P(B|A_n) \).
  • Bayes' theorem: Let \( A_1, A_2, \dots, A_n \) be a set of mutually exclusive events that together form the sample space \( S \). Let \( B \) be any event from the same sample space, such that \( P(B) > 0 \). Then, \( P(A_i|B) = \frac{P(B|A_i)P(A_i)}{P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + \dots + P(B|A_n)P(A_n)} \).
  • Bernoulli Trials: An experiment in which a single action is repeated identically over and over. The possible results of the action are classified as "success" or "failure". The trials must all be independent. The binomial probability formula is used to find probabilities for Bernoulli trials.
  • The Binomial Distribution: The probability of achieving exactly \( r \) successes in \( n \) trials is \( P(X= r) = {}^n C_r p^r q^{n - r} \)
    • \( n = \) number of trials
    • \( r = \) number of successes
    • \( n - r = \) number of failures
    • \( p = \) probability of success in one trial
    • \( q = 1 - p = \) probability of failure in one trial
  • Expectation and Variance: If \( X \sim B(n, p) \), then the expectation and variance are given by:
    1. \( E(X) = np \)
    2. \( \text{Var}(X) = npq \)
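These results can be checked on a concrete distribution (parameters chosen for illustration): \( n = 4 \), \( p = \frac{1}{2} \), e.g. four fair coin tosses with "success" meaning heads.

```python
from math import comb

# Build the full binomial pmf and verify sum = 1, E(X) = np, Var(X) = npq.
n, p = 4, 0.5
q = 1 - p
pmf = [comb(n, r) * p**r * q**(n - r) for r in range(n + 1)]   # P(X = r)
mean = sum(r * pmf[r] for r in range(n + 1))                   # E(X)
var = sum(r * r * pmf[r] for r in range(n + 1)) - mean**2      # E(X^2) - E(X)^2
print(sum(pmf), mean, var)   # 1.0 2.0 1.0  (np = 2, npq = 1)
```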