====== Generative AI — An in-depth guide for hackers ======
| - | |||
| - | ~~TOC~~ | ||
===== 1. TL;DR / Executive summary =====
===== 3. The main model families (what they are, how they sample) =====
| - | |||
| - | --- | ||
**3.1 Transformers (autoregressive & encoder-decoder variants)**
  * Core idea: self-attention lets the model compute context-dependent representations across all positions. Attention operation:
| - | | + | < |
| - | + | ||
| - | Attention(Q, | + | |
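To make the formula concrete, here is a minimal NumPy sketch of scaled dot-product attention. The function names (''softmax'', ''attention'') and the toy shapes are illustrative, not from any particular library; the causal mask shows how masked self-attention restricts each token to earlier positions.

<code python>
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, mask=None):
    # Q: (n, d_k), K: (m, d_k), V: (m, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (n, m) similarity matrix
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # masked positions get ~zero weight
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ V                         # (n, d_v) context vectors

# Causal (masked) self-attention over a 4-token toy sequence:
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
causal = np.tril(np.ones((4, 4), dtype=bool))  # token i attends to positions <= i
out = attention(X, X, X, mask=causal)
print(out.shape)  # (4, 8)
</code>

Real transformer layers run many such attention heads in parallel and learn the Q/K/V projection matrices; the sketch covers only the attention math itself.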
  * GPT-style models: stack masked self-attention layers for autoregressive generation (predict the next token). Pre-trained on massive corpora, often fine-tuned.
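Autoregressive generation is that attention stack applied in a loop: score the next token, sample one, append it, repeat. Below is a hedged sketch of the decoding loop; ''next_token_logits'' is a hypothetical stand-in for a real model's forward pass, and the tiny ''VOCAB'' exists only so the example runs.

<code python>
import numpy as np

VOCAB = ["<eos>", "gen", "ai", "hack", "safe"]  # toy vocabulary (illustrative)
rng = np.random.default_rng(0)

def next_token_logits(token_ids):
    # Hypothetical stand-in for a real model: returns one logit per vocab
    # entry given the context so far (deterministic per context, for demo).
    local = np.random.default_rng(hash(tuple(token_ids)) % (2**32))
    return local.normal(size=len(VOCAB))

def generate(prompt_ids, max_new_tokens=8, temperature=1.0):
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = next_token_logits(ids) / temperature  # lower temp = peakier
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tok = int(rng.choice(len(VOCAB), p=probs))     # sample the next token
        ids.append(tok)                                # feed it back as context
        if VOCAB[tok] == "<eos>":
            break
    return ids

print(" ".join(VOCAB[i] for i in generate([1, 2])))
</code>

Production decoders add refinements (top-k / nucleus truncation, repetition penalties, KV caching), but the feed-the-output-back-in loop is the same.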
| - | --- | ||
**3.2 Diffusion / score-based models (images, audio, sometimes text)**
  * Define a forward process that gradually adds noise to data; train a neural network to reverse that noising. Sampling reverses the process via iterative denoising. Connects to denoising score matching and stochastic differential equations.
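A minimal sketch of the DDPM-style forward process and training objective, under the standard assumption of a linear beta schedule: noise a clean sample to a random timestep in closed form, then train a network to predict the injected noise. The ''denoiser'' function here is a hypothetical placeholder for that network.

<code python>
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)    # cumulative signal-retention factor

def q_sample(x0, t, eps):
    # Forward process in closed form:
    #   x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def denoiser(x_t, t):
    # Hypothetical placeholder for the trained network eps_theta(x_t, t).
    return np.zeros_like(x_t)

def training_loss(x0, rng):
    t = int(rng.integers(T))                     # random timestep
    eps = rng.normal(size=x0.shape)              # the noise we inject
    x_t = q_sample(x0, t, eps)                   # noised sample
    eps_hat = denoiser(x_t, t)                   # network's noise estimate
    return float(np.mean((eps - eps_hat) ** 2))  # simple MSE objective

rng = np.random.default_rng(0)
print(training_loss(rng.normal(size=(8, 8)), rng))
</code>

Sampling then runs the learned denoiser in reverse, stepping from pure noise at t = T - 1 back to a clean sample at t = 0.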
| - | --- | ||
**3.3 GANs, flows, VAEs (historical / specialized use)**
| ===== 15. Responsible disclosure & whitehat norms for hackers ===== | ===== 15. Responsible disclosure & whitehat norms for hackers ===== | ||
  * Don’t publish exploit code; follow coordinated disclosure.
  * Contact vendors via official channels.
  * Provide minimal test cases, operational impact, and mitigation suggestions.
| ===== 16. Further reading & canonical sources ===== | ===== 16. Further reading & canonical sources ===== | ||
  * OpenAI API docs & overviews.
  * Diffusion model surveys.
  * OWASP GenAI.
  * Academic studies on prompt injection and red-teaming.
  * Industry reports on failures & jailbreaks.
| ===== 17. Appendix — Glossary ===== | ===== 17. Appendix — Glossary ===== | ||
  * LLM — Large Language Model.
  * RLHF — Reinforcement Learning from Human Feedback.
  * Diffusion — iterative denoising generative family.
  * Hallucination — fluent but false output.
  * Prompt injection — input that subverts model intentions.
| ===== 18. Closing / ethical call to arms ===== | ===== 18. Closing / ethical call to arms ===== | ||
Generative AI amplifies productivity and abuse potential. Hackers, researchers, and defenders should probe these systems responsibly, disclose the failures they find through proper channels, and push the field toward safer defaults.