Algebra
1. Defining Sets
One of the most fundamental concepts in Algebra is the concept of a set. This video introduces the concept of a set and various methods for defining sets.

2. Set Equality and Subsets
Sets can be related to each other in different ways. This chapter describes the set relations of equality, subset, superset, proper subset, and proper superset.

3. Venn Diagrams, Unions, and Intersections
Venn diagrams are an important tool allowing relations between sets to be visualized graphically. This chapter introduces the use of Venn diagrams to visualize intersections and unions of sets, as well as subsets and supersets.

4. Complement and Relative Complement
The complement of a set is the collection of all elements which are not members of that set. Although this operation appears to be straightforward, the way we define 'all elements' can significantly change the results.

5. Symmetric Difference
The symmetric difference of two sets is the collection of elements which are members of either set but not both - in other words, the union of the sets excluding their intersection.
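As a quick aside (not part of the lecture itself), Python's built-in set type can illustrate both descriptions; the sets below are arbitrary examples:

```python
# Two arbitrary example sets.
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}

# Elements belonging to either set but not both.
print(a ^ b)              # {1, 2, 5, 6}

# Equivalent construction: the union of the sets excluding their intersection.
print((a | b) - (a & b))  # {1, 2, 5, 6}
```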

6. Interval Notation and the Number Line
Although Venn diagrams are a useful way to visualize sets whose elements can be any type of object, interval notation and the number line are best suited for describing sets of real numbers used in Algebra.

7. Bounded versus Unbounded Intervals
Bounded intervals may be either open or closed. Closed intervals contain a maximum and minimum number, but why is it impossible to find the maximum or minimum number in an open interval?

8. Unions of Intervals
Interval notation is often the simplest way to describe sets of real numbers as regions on the number line. Some sets which cannot be represented by a single interval can be written in interval notation as the union of two or more intervals.
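As a brief illustration (the particular set is an arbitrary example), the set of all nonzero real numbers cannot be written as a single interval, but it can be written as a union of two intervals:

```latex
\{ x \in \mathbb{R} : x \neq 0 \} = (-\infty, 0) \cup (0, \infty)
```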

9. Cartesian Products, Ordered Pairs and Triples
Cartesian products can create sets of ordered pairs which correspond to points in 2-dimensional space, or ordered triples which correspond to points in 3-dimensional space. These sets form the logical foundation of the Cartesian coordinate system.
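As a small aside (not part of the lecture itself), the idea can be sketched with Python's itertools.product, using arbitrary example sets:

```python
from itertools import product

# Two arbitrary example sets of numbers.
xs = [1, 2]
ys = [3, 4]

# Cartesian product: all ordered pairs (x, y).
pairs = list(product(xs, ys))
print(pairs)            # [(1, 3), (1, 4), (2, 3), (2, 4)]

# Ordered triples arise the same way from three sets.
triples = list(product(xs, ys, [5, 6]))
print(len(triples))     # 8 triples, each a point in 3-dimensional space
```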

10. The Cartesian Coordinate System
The Cartesian coordinate system, formed from the Cartesian product of the real number line with itself, allows algebraic equations to be visualized as geometric shapes in two or three dimensions.

11. Cartesian Coordinates in Three Dimensions
Just as the Cartesian plane allows sets of ordered pairs to be graphically displayed as 2-dimensional objects, Cartesian space allows us to visualize sets of ordered triples in three dimensions.

12. Binary Relations
Fundamental to Algebra is the concept of a binary relation. This concept is closely related to the concept of a function.

13. Domain and Range of Binary Relations
Two sets which are of primary interest when studying binary relations are the domain and range of the relation.

14. Scatter Plots
Scatter plots are a powerful method of visualizing relations between sets of numeric values. As an example, trends in a binary relation between the height and weight of a group of people could be graphed and analyzed by using a scatter plot.

15. Functions
Functions can be thought of as mathematical machines, which when given an element from a set of permissible inputs, always produce the same element from a set of possible outputs.

16. Real-Valued Functions of a Real Variable
Although the domain and codomain of functions can consist of any type of objects, the most common functions encountered in Algebra are real-valued functions of a real variable, whose domain and codomain are the set of real numbers, R.

17. Vertical Line Test
A graph in Cartesian coordinates may represent a function or may only represent a binary relation. The 'vertical line test' is a simple way to determine whether or not a graph represents a function.

18. Multivariable Functions
Although many familiar functions process a single input variable to produce a single output value, multivariable functions can be created whose output depends upon multiple input variables.

19. Linear Equations: y = mx
Equations of the form y = mx describe lines in the Cartesian plane which pass through the origin. The fact that many functions are approximately linear when viewed on a small scale is important in branches of mathematics such as calculus.

20. Slope-Intercept Form
Linear equations of the form y = mx+b can describe any non-vertical line in the Cartesian plane. The constant m determines the line's slope, and the constant b determines the y-intercept and thus the line's vertical position.

21. Slope
Slope is a fundamental concept in mathematics. Slope is often defined as 'the rise over the run' ... but why?

22. Point-Slope Form
The point-slope form of the equation for a line can describe any non-vertical line in the Cartesian plane, given the slope and the coordinates of a single point which lies on the line.
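For reference, the point-slope formula, where m is the slope and (x1, y1) is the known point:

```latex
y - y_1 = m(x - x_1)
```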

23. Two-Point Form
The two-point form of the equation for a line can describe any non-vertical line in the Cartesian plane, given the coordinates of two points which lie on the line.
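For reference, the two-point formula, where (x1, y1) and (x2, y2) are the known points (with x1 ≠ x2):

```latex
y - y_1 = \frac{y_2 - y_1}{x_2 - x_1}\,(x - x_1)
```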

24. Standard Form
The standard form of the equation for a line, Ax + By = C, can describe any line in the Cartesian plane, including vertical lines. The constants A, B and C have no special meaning individually, but in combination they tell us many things about the line's graph.

25. Linear Equations in the Real World
Linear equations can be used to solve many types of real-world problems. In this episode, the water depth of a pool is shown to be a linear function of time and an equation is developed to model its behavior.

26. Solving Literal Equations
Literal equations are formulas for calculating the value of one unknown quantity from one or more known quantities.

27. Solving Problems with Linear Equations
How do we create linear equations to solve real-world problems? This video explains the process.

28. Solving Motion Problems with Linear Equations
Based upon the definition of speed, linear equations can be created which allow us to solve problems involving time, distance, and constant speeds.
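A minimal illustration of the kind of equation involved (the numbers are arbitrary): a car traveling at a constant 60 miles per hour covers a distance d after t hours given by

```latex
d = rt \quad\Rightarrow\quad d = 60t, \qquad \text{so after } 2.5 \text{ hours, } d = 60 \times 2.5 = 150 \text{ miles.}
```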

29. Understanding Percentages
Percentages are one method of describing a fraction of a quantity. The percent is the numerator of a fraction whose denominator is understood to be one hundred.
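A small worked example (the numbers are arbitrary):

```latex
35\% = \frac{35}{100} = 0.35, \qquad 35\% \text{ of } 80 = 0.35 \times 80 = 28
```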

30. Solving Percentage Problems with Linear Equations
Many real-world problems involve percentages. This lecture shows how Algebra is used to solve problems involving percent change and profit-and-loss.

31. Calculating Mixtures of Solutions
This lecture shows how Algebra is used to solve problems involving mixtures of solutions of different concentrations.

32. Solving Mixture Problems with Linear Equations
Mixture problems can involve mixtures of things other than liquids. This video shows how Algebra is used to solve problems involving various types of mixtures.

33. Parallel Lines
Parallel lines have the same slope and no points in common. However, it is not always obvious whether two equations describe parallel lines or the same line.

34. Perpendicular Lines
Perpendicular lines have slopes which are negative reciprocals of each other, but why?
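Stated symbolically, if two non-vertical, non-horizontal lines with slopes m1 and m2 are perpendicular, then (the sample slope is arbitrary):

```latex
m_1 m_2 = -1 \quad\Longleftrightarrow\quad m_2 = -\frac{1}{m_1},
\qquad \text{e.g. } m_1 = \tfrac{2}{3} \;\Rightarrow\; m_2 = -\tfrac{3}{2}
```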

35. Systems of Linear Equations in Two Variables
The points of intersection of two graphs represent common solutions to both equations. Finding these intersection points is an important tool in analyzing physical and mathematical systems.

36. Solving Systems of Equations by Substitution
A system of two equations in x and y can be solved by rearranging one equation to express x in terms of y, and then substituting this expression for x into the other equation to create an equation in the single variable y. That equation can then be solved for y, and the resulting value substituted back into either original equation to find the value of x.
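A short worked example of the procedure, using an arbitrary system:

```latex
\begin{aligned}
x + y &= 5 \\
2x - y &= 1
\end{aligned}
\qquad\Rightarrow\qquad
x = 5 - y
\;\Rightarrow\; 2(5 - y) - y = 1
\;\Rightarrow\; y = 3,\; x = 2
```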

37. Solving Systems of Equations by Elimination
Systems of two equations in x and y can be solved by adding the equations to create a new equation with one variable eliminated. That equation can then be solved to find the value of that variable.
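A short worked example of the procedure, using an arbitrary system in which adding the two equations eliminates y:

```latex
\begin{aligned}
x + y &= 5 \\
2x - y &= 1
\end{aligned}
\qquad\Rightarrow\qquad
3x = 6 \;\Rightarrow\; x = 2,\; y = 3
```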

38. Why the Elimination Method Works
This chapter takes a geometric look at the logic behind adding equations, the essential technique used when solving systems of equations by elimination.

39. Inconsistent, Dependent, & Independent Systems
Systems of two linear equations in two variables can have a single solution, no solutions, or an infinite number of solutions. This chapter explains why.

40. Solving Inconsistent or Dependent Systems
When solving a system of linear equations in x and y with a single solution, we get a unique pair of values for x and y. But what happens when we try to solve a system with no solutions or an infinite number of solutions?

41. Using Systems of Equations Versus One Equation
When should a system of equations with multiple variables be used to solve an Algebra problem, instead of using a single equation with a single variable?

42. Visualizing Linear Equations in Three Variables
Just as the graph of a linear equation in two variables is a line in the Cartesian plane, the graph of a linear equation in three variables is a plane in Cartesian space.

43. Types of Linear Systems in Three Variables
This video illustrates eight ways in which planes in the graph of a system of three linear equations in three variables can be oriented, thus creating different types of solution sets.

44. Solving Systems of Equations in Three Variables
Systems of three linear equations in three variables can be solved using the same techniques of substitution and elimination used to solve systems of two linear equations in two variables.
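As a side note (the hand techniques above are the subject of the video), a system with a single unique solution can also be checked numerically; the sketch below uses NumPy on an arbitrary example system:

```python
import numpy as np

# Arbitrary example system:
#    x +  y +  z = 6
#   2x -  y +  z = 3
#    x + 2y -  z = 2
A = np.array([[1.0,  1.0,  1.0],
              [2.0, -1.0,  1.0],
              [1.0,  2.0, -1.0]])
b = np.array([6.0, 3.0, 2.0])

solution = np.linalg.solve(A, b)  # requires a unique solution to exist
print(solution)                   # approximately [1. 2. 3.]
```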

45. Three Variable Systems with Infinite or Null Solution Sets
When solving a system of linear equations with a single solution, we get a unique value for each variable. But how do we recognize when a system of equations has infinitely many solutions or no solutions?

46. Parametric Equations
In order to mathematically describe a line in 3-dimensional space, we need a way to define the values of the three coordinates at every point along the line. This can be done by creating a group of 'parametric equations'.
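As an illustration (the particular line is an arbitrary example), the line through the point (1, 3, 0) in the direction (2, -1, 1) can be described by the parametric equations

```latex
x = 1 + 2t, \qquad y = 3 - t, \qquad z = t, \qquad t \in \mathbb{R}
```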

47. Describing Infinite Solution Sets Parametrically
When the graph of the solutions of a system of linear equations in three variables is a line in Cartesian space, the solutions can be described either by a group of three parametric equations, or a parametric ordered-triple. This lecture describes the process of calculating that parametric representation.

48. A Geometrical View of the Elimination Method
When the elimination method is used to solve a system of linear equations, viewing the geometrical changes that happen to the system during this process can give insight into the mechanisms which underlie the logic behind the algebraic method of elimination of variables.

49. Three Variable Systems in the Real World - Problem 1
Algebra 49, 50 and 51 present three real-world problems which can be solved using systems of three linear equations in three variables. This chapter shows how prices of three individual items can be determined, given three combinations of quantities of each item and each combination's total cost.

50. Three Variable Systems in the Real World - Problem 2
Algebra 49, 50 and 51 present three real-world problems which can be solved using systems of three linear equations in three variables. This chapter shows how the parameters of an equation for a parabola can be determined, given three points which satisfy the equation.

51. Three Variable Systems in the Real World - Problem 3
Algebra 49, 50 and 51 present three real-world problems which can be solved using systems of three linear equations in three variables. This chapter shows how the parameters of an equation for a circle can be determined, given three points which satisfy the equation.

52. An Introduction to Matrices
Matrices are an important class of mathematical object used in many branches of mathematics, science and engineering. This lecture also introduces augmented matrices, a compact, easy-to-manipulate representation of systems of linear equations, and a valuable tool for solving these systems.

53. Elementary Row Operations
Once a system of linear equations has been converted to augmented matrix form, that matrix can then be transformed using elementary row operations into a matrix which represents a simpler system of equations with the same solutions as the original system. This lecture introduces the three elementary row operations used to achieve this transformation.
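For reference, the three elementary row operations are commonly written as follows, where R_i and R_j denote rows of the augmented matrix and c is a constant:

```latex
R_i \leftrightarrow R_j, \qquad
R_i \to c\,R_i \;\;(c \neq 0), \qquad
R_i \to R_i + c\,R_j
```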

54. Gaussian Elimination
A system of linear equations represented as an augmented matrix can be simplified through the process of Gaussian elimination to row echelon form. At that point, the matrix can be converted back into simpler equations which are easy to solve through back substitution.
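A minimal sketch of the idea in code (not the lecture's own presentation), assuming the system has a unique solution and that no zero pivot is encountered; a full implementation would also swap rows when necessary:

```python
def gaussian_elimination(aug):
    """Solve a linear system given as an augmented matrix [A | b].

    Minimal sketch: assumes a unique solution and nonzero pivots,
    so no row swapping (pivoting) is performed.
    """
    n = len(aug)
    m = [row[:] for row in aug]              # work on a copy

    # Forward elimination: zero out the entries below each pivot.
    for col in range(n):
        for row in range(col + 1, n):
            factor = m[row][col] / m[col][col]
            for k in range(col, n + 1):
                m[row][k] -= factor * m[col][k]

    # Back substitution: solve for the variables from the bottom up.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        total = sum(m[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (m[row][n] - total) / m[row][row]
    return x

# Arbitrary example:  x + y + z = 6,  2x - y + z = 3,  x + 2y - z = 2
print(gaussian_elimination([[1, 1, 1, 6],
                            [2, -1, 1, 3],
                            [1, 2, -1, 2]]))  # approximately [1.0, 2.0, 3.0]
```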

55. Gauss-Jordan Elimination
A system of linear equations in matrix form can be simplified through the process of Gauss-Jordan elimination to reduced row echelon form. At that point, the solutions can be determined directly from the matrix, without having to convert it back into equations.
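As one way to experiment with this outside the lecture, SymPy's Matrix.rref() computes the reduced row echelon form of a matrix along with the indices of its pivot columns; the system below is an arbitrary example:

```python
from sympy import Matrix

# Augmented matrix for the arbitrary example system:
#    x +  y +  z = 6
#   2x -  y +  z = 3
#    x + 2y -  z = 2
aug = Matrix([[1,  1,  1, 6],
              [2, -1,  1, 3],
              [1,  2, -1, 2]])

rref_matrix, pivot_columns = aug.rref()
print(rref_matrix)    # the solution can be read directly from the last column
print(pivot_columns)  # (0, 1, 2)
```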

56. A Geometrical View of Gauss-Jordan Elimination
Although Gauss-Jordan Elimination is typically thought of as a purely algebraic process, when viewed geometrically, this process is beautiful and amazing, providing insights into the underlying mechanisms of the matrix transformations which lead to the solutions of a system of linear equations. Since a system of linear equations in three variables is graphically represented by a collection of planes, following how these planes change their orientation with each row operation can give us an intuitive understanding of how the transformation to reduced row echelon form works.

57. Dependent Equations and Systems
Some systems of linear equations contain one or more equations which don't add any new information to the system and are therefore redundant. These equations are said to be 'dependent'. In a system of two equations, it is easy to spot when the equations are dependent since the equations will be either identical or multiples of each other. In this case, the system will always have infinitely many solutions. However, in systems of more than two equations, dependent equations are not necessarily multiples of each other and the system may or may not have infinitely many solutions.

58. Gauss-Jordan Elimination with Dependent Systems
This chapter builds on Algebra chapter 57 which explained the concept of dependency. In this chapter, we see that although it can sometimes be difficult to spot when a system of linear equations is dependent, when a dependent system is represented in matrix form and simplified through Gauss-Jordan elimination, an equivalent independent system is automatically produced. This equivalent system typically contains fewer equations, with fewer variables in each equation. From this simpler system, a parametric representation of the solution set can then be easily written.

59. A Geometric View of Gauss-Jordan with Dependent Systems
This lecture examines an example of Gauss-Jordan elimination on a dependent system from Algebra chapter 58, and follows how the planes are geometrically transformed step by step, from a system of three planes, representing three equations, each containing three variables, to a system of two planes representing two equations, each containing only two variables. The result is a simpler system from which a parametric representation of the infinite solution set can then be easily written.

60. Parametric Equations with Gauss-Jordan Elimination
This chapter introduces the concept of 'pivot columns' and shows how they can be used to determine whether a system of linear equations has a single unique solution, no solutions, or infinitely many solutions, simply by looking at the positions of the pivot columns within the reduced row echelon form matrix. If the system has infinitely many solutions, we then see how a set of parametric equations can be easily produced from that matrix. This chapter also examines how the solution set of a system of linear equations forms a subspace of lower dimensionality than the space in which the system is graphed.

61. Gauss-Jordan Elimination with Inconsistent Systems
When Gauss-Jordan elimination transforms a matrix representing an inconsistent system of linear equations to reduced row echelon form, a matrix row containing all zero coefficient entries and a non-zero constant entry is produced, indicating that the system has no solutions. This lecture shows how inconsistent systems can sometimes be spotted by simply looking at the equations. Examples of three-variable systems represented by groups of planes are then used to show how certain configurations of planes can cause inconsistency, and why this leads to the indication of inconsistency produced during Gauss-Jordan elimination.
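A small example of the tell-tale row (the matrix shown is arbitrary):

```latex
\left[\begin{array}{ccc|c}
1 & 0 & 2  & 0 \\
0 & 1 & -1 & 0 \\
0 & 0 & 0  & 1
\end{array}\right]
\qquad \text{the last row asserts } 0x + 0y + 0z = 1, \text{ which has no solution.}
```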

62. Gauss-Jordan Elimination with Traffic Flow
In this lecture we examine one application that can be solved by a system of linear equations with four or more variables: modeling and predicting the flow of traffic through a network of streets. Examples are given showing how the model can have a single unique solution, infinitely many solutions, or no solution.

63. Gauss-Jordan Elimination with Curve Fitting
This lecture examines a useful mathematical application that can be solved by using a system of linear equations with four or more variables: finding a polynomial function whose graph passes through a given set of data points. We see how the system can have a single unique solution corresponding to one function which includes the points, infinitely many solutions corresponding to infinitely many functions which include the points, or no solution at all.

64. Quadratic Functions and Polynomials
In this lecture, quadratic functions are introduced. We show that a quadratic may be a monomial, binomial, or trinomial, and that the graph of a quadratic function in a single variable is always a parabola. Quadratic functions are one form of a more general class of functions called polynomials.

65. Creating Quadratic Expressions Using the FOIL Method
Quadratic expressions may be created by multiplying two linear binomial expressions together. A common procedure for multiplying two binomial expressions is referred to as the "FOIL" method. FOIL is an acronym whose letters stand for the four terms produced by the products of the First, Outer, Inner, and Last terms of the two binomials.
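A short worked example (the binomials are arbitrary):

```latex
(x + 2)(x + 3)
= \underbrace{x \cdot x}_{\text{First}}
+ \underbrace{x \cdot 3}_{\text{Outer}}
+ \underbrace{2 \cdot x}_{\text{Inner}}
+ \underbrace{2 \cdot 3}_{\text{Last}}
= x^2 + 5x + 6
```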

66. General and Vertex Forms of Quadratic Functions
In this lecture, we examine two common ways to write a quadratic function, the general form and the vertex form, and see how each of these forms is related to the function's graph.
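For reference, the two forms, where (h, k) is the vertex of the parabola:

```latex
\text{General form: } f(x) = ax^2 + bx + c
\qquad
\text{Vertex form: } f(x) = a(x - h)^2 + k
```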