creation of obsidian vault and first notes

This commit is contained in:
Tom Zuidberg
2026-02-10 00:08:46 +01:00
parent c06aa590df
commit c394ee2cce
7 changed files with 333 additions and 0 deletions

ObsidianNotes/.obsidian/app.json vendored Normal file

@@ -0,0 +1 @@
{}

@@ -0,0 +1 @@
{}

@@ -0,0 +1,33 @@
{
"file-explorer": true,
"global-search": true,
"switcher": true,
"graph": true,
"backlink": true,
"canvas": true,
"outgoing-link": true,
"tag-pane": true,
"footnotes": false,
"properties": true,
"page-preview": true,
"daily-notes": true,
"templates": true,
"note-composer": true,
"command-palette": true,
"slash-command": false,
"editor-status": true,
"bookmarks": true,
"markdown-importer": false,
"zk-prefixer": false,
"random-note": false,
"outline": true,
"word-count": true,
"slides": false,
"audio-recorder": false,
"workspaces": false,
"file-recovery": true,
"publish": false,
"sync": true,
"bases": true,
"webviewer": false
}

ObsidianNotes/.obsidian/graph.json vendored Normal file

@@ -0,0 +1,22 @@
{
"collapse-filter": true,
"search": "",
"showTags": false,
"showAttachments": false,
"hideUnresolved": false,
"showOrphans": true,
"collapse-color-groups": true,
"colorGroups": [],
"collapse-display": true,
"showArrow": false,
"textFadeMultiplier": 0,
"nodeSizeMultiplier": 1,
"lineSizeMultiplier": 1,
"collapse-forces": true,
"centerStrength": 0.518713248970312,
"repelStrength": 10,
"linkStrength": 1,
"linkDistance": 250,
"scale": 1,
"close": true
}

ObsidianNotes/.obsidian/workspace.json vendored Normal file

@@ -0,0 +1,209 @@
{
"main": {
"id": "c3a4d69512e9fc7d",
"type": "split",
"children": [
{
"id": "cc5c72d968983eef",
"type": "tabs",
"children": [
{
"id": "6a0dffa8b674a58d",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "elemlds lecture 8.md",
"mode": "source",
"source": false
},
"icon": "lucide-file",
"title": "elemlds lecture 8"
}
},
{
"id": "d74c0c464422592b",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "elemlds lecture 9.md",
"mode": "source",
"source": false
},
"icon": "lucide-file",
"title": "elemlds lecture 9"
}
}
],
"currentTab": 1
}
],
"direction": "vertical"
},
"left": {
"id": "5d10435d79168cd3",
"type": "split",
"children": [
{
"id": "9dd8de210b3a219a",
"type": "tabs",
"children": [
{
"id": "73df1fa10360176b",
"type": "leaf",
"state": {
"type": "file-explorer",
"state": {
"sortOrder": "alphabetical",
"autoReveal": false
},
"icon": "lucide-folder-closed",
"title": "Files"
}
},
{
"id": "77451c9a0f0d90b8",
"type": "leaf",
"state": {
"type": "search",
"state": {
"query": "",
"matchingCase": false,
"explainSearch": false,
"collapseAll": false,
"extraContext": false,
"sortOrder": "alphabetical"
},
"icon": "lucide-search",
"title": "Search"
}
},
{
"id": "59e85a91706a7229",
"type": "leaf",
"state": {
"type": "bookmarks",
"state": {},
"icon": "lucide-bookmark",
"title": "Bookmarks"
}
}
]
}
],
"direction": "horizontal",
"width": 300,
"collapsed": true
},
"right": {
"id": "7e5fb0cf1f329d1d",
"type": "split",
"children": [
{
"id": "bb7207b9cf3e7ca8",
"type": "tabs",
"children": [
{
"id": "9bc8d6bb05d6407c",
"type": "leaf",
"state": {
"type": "backlink",
"state": {
"file": "elemlds lecture 9.md",
"collapseAll": false,
"extraContext": false,
"sortOrder": "alphabetical",
"showSearch": false,
"searchQuery": "",
"backlinkCollapsed": false,
"unlinkedCollapsed": true
},
"icon": "links-coming-in",
"title": "Backlinks for elemlds lecture 9"
}
},
{
"id": "2035bfda7a40d552",
"type": "leaf",
"state": {
"type": "outgoing-link",
"state": {
"file": "elemlds lecture 9.md",
"linksCollapsed": false,
"unlinkedCollapsed": true
},
"icon": "links-going-out",
"title": "Outgoing links from elemlds lecture 9"
}
},
{
"id": "b59c87a10b89601c",
"type": "leaf",
"state": {
"type": "tag",
"state": {
"sortOrder": "frequency",
"useHierarchy": true,
"showSearch": false,
"searchQuery": ""
},
"icon": "lucide-tags",
"title": "Tags"
}
},
{
"id": "6eb1031e9bd41e20",
"type": "leaf",
"state": {
"type": "all-properties",
"state": {
"sortOrder": "frequency",
"showSearch": false,
"searchQuery": ""
},
"icon": "lucide-archive",
"title": "All properties"
}
},
{
"id": "30fccf5d4853eead",
"type": "leaf",
"state": {
"type": "outline",
"state": {
"file": "elemlds lecture 9.md",
"followCursor": false,
"showSearch": false,
"searchQuery": ""
},
"icon": "lucide-list",
"title": "Outline of elemlds lecture 9"
}
}
]
}
],
"direction": "horizontal",
"width": 300,
"collapsed": true
},
"left-ribbon": {
"hiddenItems": {
"switcher:Open quick switcher": false,
"graph:Open graph view": false,
"canvas:Create new canvas": false,
"daily-notes:Open today's daily note": false,
"templates:Insert template": false,
"command-palette:Open command palette": false,
"bases:Create new base": false
}
},
"active": "d74c0c464422592b",
"lastOpenFiles": [
"elemlds lecture 8.md",
"elemlds lecture 9.md",
"Elements of Machine Learning and Data Science.md",
"Welcome.md"
]
}

@@ -0,0 +1,60 @@
# Machine Learning Intro
= Machines that *learn* to perform a task from *experience*
Three forms of learning, depending on label availability:
- Yes -> Supervised learning
- Some -> Semi-supervised learning
- No -> Unsupervised learning
# Supervised Learning
Training data has labels $\mathcal{D} = \{(x_1, t_1), \dots, (x_N, t_N)\}$
Goal: learn a *predictive* function that yields good performance on *unseen* data
Data may need to be preprocessed to handle
- Missing/wrong values
- Outliers
- Inconsistencies
# Features
Feature extraction = process that creates descriptive vectors from samples
- Features should be invariant to irrelevant input variations
- Selecting the *right* features!
- Usually encode some domain knowledge
- Higher-dimensional features are more discriminative
Curse of dimensionality: complexity increases *exponentially* with number of dimensions
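A quick way to see the exponential blow-up (an illustrative sketch, not from the lecture): covering the unit hypercube $[0,1]^d$ with histogram cells of a fixed side length needs exponentially many cells as $d$ grows.

```python
# Sketch of the curse of dimensionality (illustrative numbers, not from the lecture):
# covering [0, 1]^d at resolution 1/bins_per_axis needs bins_per_axis ** d cells,
# so the data needed to populate them grows exponentially with d.
def cells_needed(dim: int, bins_per_axis: int = 10) -> int:
    """Number of histogram cells to cover [0, 1]^dim at 1/bins_per_axis resolution."""
    return bins_per_axis ** dim

for d in (1, 2, 3, 10):
    print(f"d={d}: {cells_needed(d)} cells")
# d=10 already needs 10 billion cells at a modest resolution of 0.1 per axis.
```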
# Terms, Concepts, Notation
Mostly based on statistics and probability theory
Notation:
- Scalars $x \in \mathbb{R}$
- Vectors $\mathbf{x} \in \mathbb{R}^D$
- Datasets $\mathcal{X} = \{\mathbf{x}_1, \dots, \mathbf{x}_N\}$
- Labelled datasets $\mathcal{D} = \{(\mathbf{x}_1, t_1), \dots, (\mathbf{x}_N, t_N)\}$
- Matrices $\mathbf{M} \in \mathbb{R}^{m \times n}$
- Dot product $\mathbf{w}^{\mathsf{T}}\mathbf{x} = \sum_{j=1}^D w_j x_j$
# Probability Basics
Over random variables:
- Discrete case: $p(X = x_j) = \frac{n_j}{N}$
- Continuous case: $p(X \in (x_1, x_2)) = \int_{x_1}^{x_2}p(x)\, dx$ where $p(x)$ is the probability density function (pdf) of $x$
Some formulas:
Let $A \in \{a_i\}, B \in \{b_j\}$
Consider $N$ trials:
- $n_{ij} = \# \{A = a_i \land B =b_j\}$
- $c_i = \#\{A=a_i\}$
- $r_j = \#\{B=b_j\}$
Then we get:
- Joint probability $p(A=a_i, B=b_j) = \frac{n_{ij}}{N}$
- Marginal probability $p(A=a_i) = \frac{c_i}{N}$
- Conditional probability $p(B=b_j | A=a_i)=\frac{n_{ij}}{c_i}$
- Sum rule $p(A=a_i) = \frac{1}{N}\sum_j n_{ij} = \sum_{b_j}p(A=a_i,B=b_j)$
- Product rule $p(A=a_i, B=b_j) = \frac{n_{ij}}{c_i}\cdot \frac{c_i}{N} = p(B=b_j |A=a_i)\cdot p(A=a_i)$
In short:
- Sum rule: $p(A) = \sum_B p(A,B)$
- Product rule: $p(A,B) = p(B|A)p(A)$
- Bayes' Theorem: $p(A|B)= \frac{p(B|A)p(A)}{\sum_Ap(B|A)p(A)}$
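The counting definitions above can be checked numerically. A minimal sketch with made-up trial data (the pairs below are assumptions for illustration, not from the lecture):

```python
from collections import Counter

# Hypothetical trials: N observed pairs (a_i, b_j). Made-up data for illustration.
trials = [("a1", "b1"), ("a1", "b2"), ("a2", "b1"),
          ("a1", "b1"), ("a2", "b2"), ("a2", "b1")]
N = len(trials)

n = Counter(trials)                      # n_ij = #{A = a_i and B = b_j}
c = Counter(a for a, _ in trials)        # c_i  = #{A = a_i}

def p_joint(a, b): return n[(a, b)] / N       # p(A=a, B=b) = n_ij / N
def p_marginal(a): return c[a] / N            # p(A=a)      = c_i / N
def p_cond(b, a):  return n[(a, b)] / c[a]    # p(B=b|A=a)  = n_ij / c_i

# Product rule: p(A, B) = p(B|A) p(A)
assert abs(p_joint("a1", "b1") - p_cond("b1", "a1") * p_marginal("a1")) < 1e-12
# Sum rule: p(A=a) = sum over b of p(A=a, B=b)
assert abs(p_marginal("a1") - sum(p_joint("a1", b) for b in ("b1", "b2"))) < 1e-12
```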

@@ -0,0 +1,7 @@
# Bayes Decision Theory
Goal: predict the output class $\mathcal{C}$ from measurements $\mathbf{x}$ by minimizing the probability of misclassification
>[!tip] Main Equation:
>$$p(Y|X)=\frac{p(X|Y)p(Y)}{p(X)}$$
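A minimal sketch of how the equation drives a decision (the priors and likelihoods below are made-up numbers, not from the note): compute the posterior over classes and pick the most probable one, which minimizes the probability of misclassification.

```python
# Hypothetical two-class example; all numbers are assumptions for illustration.
prior = {"C1": 0.3, "C2": 0.7}          # p(Y)
likelihood = {"C1": 0.8, "C2": 0.1}     # p(x | Y) for one observed measurement x

evidence = sum(likelihood[c] * prior[c] for c in prior)              # p(x)
posterior = {c: likelihood[c] * prior[c] / evidence for c in prior}  # p(Y | x)

# Bayes decision rule: choose the class with the highest posterior.
decision = max(posterior, key=posterior.get)
print(posterior, decision)
```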