{ "repo_name": "tuchk4/awesome-css-in-js", "stars": "609", "repo_language": "None", "file_name": "README.md", "mime_type": "text/plain" }
# Awesome CSS in JS [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome) [![Build Status](https://travis-ci.org/tuchk4/awesome-css-in-js.svg?branch=master)](https://travis-ci.org/tuchk4/awesome-css-in-js)

A collection of awesome things regarding the CSS-in-JS approach

[中文 README](README-ZH_CN.md)

## Table of Contents

- [Libraries](#libraries)
- [Articles](#articles)
- [Videos](#videos)
- [Benchmarks](#benchmarks)

## Libraries

- [linaria](https://github.com/callstack/linaria) - Zero-runtime CSS in JS library
- [freestyler](https://github.com/streamich/freestyler) - 5<sup>th</sup> generation React styling library
- [emotion](https://emotion.sh/) - 👩‍🎤 The Next Generation of CSS-in-JS
- [fela](https://github.com/rofrischmann/fela/) - Universal, Dynamic & High-Performance Styling in JavaScript
- [styled-jss](https://github.com/cssinjs/styled-jss) - Styled Components on top of JSS
- [react-jss](https://github.com/cssinjs/react-jss) - JSS integration for React
- [jss](https://github.com/cssinjs/jss) - JSS is a CSS authoring tool which uses JavaScript as a host language
- [rockey](https://github.com/tuchk4/rockey) - Stressless CSS for components using JS. Write component-based CSS with functional mixins.
- [styled-components](https://github.com/styled-components/styled-components) - Universal, Dynamic & High-Performance Styling in JavaScript
- [aphrodite](https://github.com/Khan/aphrodite) - It's inline styles, but they work! Also supports styling via CSS
- [cxs](https://github.com/jxnblk/cxs) - ϟ A CSS-in-JS solution for functional CSS in functional UI components
- [styled-jsx](https://github.com/zeit/styled-jsx) - Full CSS support for JSX without compromises
- [glam](https://github.com/threepointone/glam) - crazy good css in your js
- [glamor](https://github.com/threepointone/glamor) - css in your javascript
- [glamorous](https://github.com/paypal/glamorous) - React component styling solved with an elegant API, small footprint, and great performance (via glamor)
- [styletron](https://github.com/rtsao/styletron) - ⚡️ Universal, high-performance JavaScript styles
- [radium](https://github.com/FormidableLabs/radium) - Set of tools to manage inline styles on React elements
- [aesthetic](https://github.com/milesj/aesthetic) - Aesthetic is a powerful React library for styling components, whether it be CSS-in-JS using objects, importing stylesheets, or simply referencing external class names
- [j2c](https://github.com/j2css/j2c) - CSS in JS library, tiny yet featureful

> NOTE: the table is not yet complete. If you find a bug or want another library added, please open a PR.

How to read the table:

**As Object** - CSS is declared using plain objects.

```js
{
  color: 'red',
}
```

**As TL** - CSS is declared using template literals.

```js
`
  color: red;
`
```

**SSR** - Server-side rendering.

**RN Support** - React Native support.

**Agnostic** - Framework agnostic: the library can be used with any framework.

**Dynamic** - CSS can depend on runtime values, such as component props.

```js
{
  color: props => props.color
}
```

```js
props => ({
  color: props.color
})
```

```js
`
  color: ${props => props.color}
`
```

**Babel plugins** - Whether there are Babel plugins for performance optimization.

**Bindings** - Whether there are packages that provide bindings for another framework or library.
| Package | As Object | As TL | SSR | RN Support | Agnostic | Dynamic | Babel plugins | Bindings |
|:-----------------:|:-------------:|:------------------------:|:--------------------:|----------------------|--------------------|-------------|---------------|----------|
| [emotion](https://github.com/emotion-js/emotion) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | react-emotion, preact-emotion |
| [fela](https://github.com/rofrischmann/fela/) | ✅ | | ✅ | ✅ | ✅ | ✅ | | [react-fela](http://fela.js.org/docs/guides/UsageWithReact.html) [native-fela](http://fela.js.org/docs/guides/UsageWithReactNative.html) [preact-fela](http://fela.js.org/docs/guides/UsageWithPreact.html) [inferno-fela](http://fela.js.org/docs/guides/UsageWithInferno.html) |
| [jss](https://github.com/cssinjs/jss) | ✅ | ✅ | ✅ | | ✅ | ✅ | ✅ | [react-jss](https://github.com/cssinjs/react-jss) [styled-jss](https://github.com/cssinjs/styled-jss) |
| [rockey](https://github.com/tuchk4/rockey) | | ✅ | | | ✅ | ✅ | | [rockey-react](https://github.com/tuchk4/rockey/tree/master/packages/rockey-react) |
| [styled-components](https://github.com/styled-components/styled-components) | | ✅ | ✅ | ✅ | | ✅ | ✅ | |
| [aphrodite](https://github.com/Khan/aphrodite) | ✅ | | ✅ | | ✅ | | | |
| [cxs](https://github.com/jxnblk/cxs) | ✅ | | ✅ | | ✅ | | | |
| [glam](https://github.com/threepointone/glam) | ✅ | | ✅ | | ✅ | | ✅ | |
| [glamor](https://github.com/threepointone/glamor) | ✅ | | ✅ | | ✅ | | ✅ | |
| [glamorous](https://github.com/paypal/glamorous) | ✅ | | ✅ | ✅ | | ✅ | | |
| [styletron](https://github.com/rtsao/styletron) | ✅ | | ✅ | | ✅ | ✅ | | [styletron-react](https://github.com/rtsao/styletron#using-styletron-with-react) |
| [aesthetic](https://github.com/milesj/aesthetic) | ✅ | | | | ✅ | | | |
| [j2c](https://github.com/j2css/j2c) | ✅ | | ✅ | | ✅ | | | |

## Articles

- [A Unified Styling Language](https://medium.com/seek-blog/a-unified-styling-language-d0c208de2660) - Why writing your styles in JavaScript isn't such a terrible idea after all, and why you should be keeping an eye on this rapidly evolving space.
- [Is CSS-in-JS really bad for UX?](https://medium.com/@okonetchnikov/is-css-in-js-really-bad-for-ux-e9cce7b2da83) - The performance implications of CSS-in-JS: JS developers are too focused on DX and keep forgetting how important performance is for UX.
- [I swore never to use CSS in JS, here are 6 reasons why I was wrong](https://hackernoon.com/i-swore-never-to-use-css-in-js-here-are-6-reasons-why-i-was-wrong-541fe3dfdeb7) - *"When I first heard of this idea, I was shocked..."* But here are 6 reasons why it is useful.
- [Journey to Enjoyable, Maintainable Styling with React, ITCSS, and CSS-in-JS](https://medium.com/maintainable-react-apps/journey-to-enjoyable-maintainable-styling-with-react-itcss-and-css-in-js-632cfa9c70d6) - Making styling better with better CSS, with components, with JavaScript, and a final approach using ITCSS and Aphrodite.
- [Rockey. Motivation and Requirements](https://medium.com/@tuchk4/rockey-motivation-and-requirements-f787d1ed61e0) - Requirements for the CSS-in-JS approach and the motivation to develop yet another CSS-in-JS library, rockey.
- [CSS in JS: The Argument Refined](https://medium.com/@steida/css-in-js-the-argument-refined-471c7eb83955)
- [Inline Styles are so 2016](https://medium.com/yplan-eng/inline-styles-are-so-2016-f100b79dafe1)
- [“Scale” FUD and Style Components](https://medium.learnreact.com/scale-fud-and-style-components-c0ce87ec9772)
- [JSS is a better abstraction over CSS](https://top.fse.guru/jss-is-css-d7d41400b635)
- [A 5-minute Intro to Styled Components](https://medium.freecodecamp.com/a-5-minute-intro-to-styled-components-41f40eb7cd55)
- [Styled Components: Enforcing Best Practices In Component-Based Systems](https://www.smashingmagazine.com/2017/01/styled-components-enforcing-best-practices-component-based-systems/)
- [💅 styled components 💅 — Production Patterns](https://medium.com/@jamiedixon/styled-components-production-patterns-c22e24b1d896)
- [Introducing glamorous 💄](https://hackernoon.com/introducing-glamorous-fb3c9f4ed20e)

## Videos

- [Styling React/ReactNative Applications - Max Stoiber at React Amsterdam](https://www.youtube.com/watch?v=bIK2NwoK9xk)
- [CSS in JS tech chat with Kent C. Dodds and Sarah Drasner](https://www.youtube.com/watch?v=BXOF_8jDdf8)
- [CSS in JS without Compromise by François de Campredon at react-europe 2016](https://www.youtube.com/watch?v=DGEFNBYJRps)
- [Glamorous Walkthrough by Kent C. Dodds](https://www.youtube.com/watch?v=lmrQTpJ_3PM)
- [ColdFront16 • Glenn Maddern: The Future of Reusable CSS](https://www.youtube.com/watch?v=XR6eM_5pAb0)
- [Ryan's Random Thoughts on Inline Styles by Ryan Florence](https://www.youtube.com/watch?v=EkPcGS4TzdQ)
- [CSS in JavaScript](https://www.manning.com/livevideo/css-in-javascript-with-styled-components-and-react)

## Benchmarks

- [tuchk4/css-in-js-app](https://github.com/tuchk4/css-in-js-app) - React application with different css-in-js approaches and libraries.
- [A-gambit/CSS-IN-JS-Benchmarks](https://github.com/A-gambit/CSS-IN-JS-Benchmarks) - Results: [RESULTS.md](https://github.com/A-gambit/CSS-IN-JS-Benchmarks/blob/master/RESULT.md)
- [hellofresh/css-in-js-perf-tests](https://github.com/hellofresh/css-in-js-perf-tests) - CSS-in-JS performance tests
- [jsperf: jss-vs-css](https://jsperf.com/jss-vs-css/3)
- [jsperf: classes vs inline styles](https://jsperf.com/classes-vs-inline-styles/4)
- [MicheleBertoli/css-in-js](https://github.com/MicheleBertoli/css-in-js) - React: CSS in JS techniques comparison.
{ "repo_name": "tuchk4/awesome-css-in-js", "stars": "609", "repo_language": "None", "file_name": "README.md", "mime_type": "text/plain" }
### Awesome Podcasts [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)

> A curated list of awesome podcasts related to programming, technology and
> business.

### Table of Contents

* [Apple](#apple)
* [Business](#business)
* [Comedy](#comedy)
* [Cryptocurrencies & Blockchain](#cryptocurrencies--blockchain)
* [Databases](#databases)
* [Data Science](#data-science)
* [Design](#design)
* [DevOps](#devops)
* [Developer Interviews](#developer-interviews)
* [Finance](#finance)
* [Gaming](#gaming)
* [History](#history)
* [Media](#media)
* [Microsoft](#microsoft)
* [Mobile](#mobile)
* [Product](#product)
* [Programming](#programming)
* [Programming Languages](#programming-languages)
* [Radio](#radio)
* [Space](#space)
* [Stationary](#stationary)
* [Technology](#technology)
* [Testing](#testing)
* [Web](#web)

---

#### Apple

* [AppStories](https://appstories.net/) - Each week, Federico and John discuss their favorite new apps and noteworthy updates, dive into the stories and people behind the apps they love, and explore the social and cultural impact of the App Store.
* [Welcome to Macintosh](https://www.macintosh.fm/) - A tiny show about a big fruit company.

#### Business

* [Bootstrapped](http://bootstrapped.fm/) - 25+ years of software bootstrapping experience.
* [CodePen Radio](https://blog.codepen.io/radio/) - A podcast all about what it's like running a small web software business.
* [StartUp](https://gimletmedia.com/show/startup/) - A podcast series about what it's really like to get a business off the ground.
* [The Distance](https://thedistance.com/) - A podcast by Basecamp about longevity in business, featuring the stories of businesses that have endured for at least 25 years and the people who got them there.
* [The Rocketship Podcast](http://rocketship.fm/) - The podcast that inspires tens of thousands of entrepreneurs each week.
* [The Tim Ferriss Show](http://fourhourworkweek.com/podcast/) - Tim Ferriss is a self-experimenter and bestselling author, best known for The 4-Hour Workweek, which has been translated into 40+ languages.

#### Comedy

* [Do By Friday](http://dobyfriday.com/) - A weekly challenge show hosted by Merlin Mann, Alex Cox, and Max Temkin.

#### Cryptocurrencies & Blockchain

* [Blockchain Curated](https://www.blockchaincurated.com/) - A project by Zach Segal that makes the best blockchain-related content more accessible to the world, curating the top 1% of cryptocurrency articles and converting them into free weekly podcasts emailed to listeners.
* [BlockChannel](http://blockchannel.com/) - At BlockChannel we believe that education about digital currencies and blockchains should be relevant, fun, and easy to find. We hope to help create a new generation of financial-tech developers/enthusiasts!
* [Flippening](https://p.nomics.com/podcast/) - A podcast for full-time & institutional crypto investors.
* [Ledger Cast](https://ledgerstatus.com/) - Blockchain & cryptocurrency information, analysis, and news.
* [Unchained](https://www.forbes.com/podcasts/unchained/#18aa921d5b4f) - Host Laura Shin talks with industry pioneers across tech, financial services, health care, government and other sectors about how the blockchain and fintech will open up new opportunities for incumbents, startups and everyday people to interact more efficiently, directly and globally.

#### Databases

* [Scaling Postgres](https://player.fm/series/scaling-postgres) - Learn how to get the best performance and scale your PostgreSQL database via curated content from this weekly show.
* [The NoSQL Database Podcast](https://nosql.libsyn.com/) - NoSQL tips and tricks as well as success stories from around the world.

#### Data Science

* [Data Skeptic](https://dataskeptic.com/) - Your source for a perspective of scientific skepticism on topics in statistics, machine learning, big data, artificial intelligence, and data science.
* [Learning Machines 101](http://www.learningmachines101.com/) - A gentle introduction to artificial intelligence and machine learning.
* [Talking Machines](http://www.thetalkingmachines.com/) - Human conversation about machine learning.
* [The AI Podcast](https://blogs.nvidia.com/ai-podcast/) - Artificial intelligence has been described as "Thor's Hammer" and "the new electricity." But it's also a bit of a mystery, even to those who know it best.
* [The O'Reilly Bots Podcast](https://www.oreilly.com/topics/oreilly-bots-podcast) - Covers advances in conversational user interfaces, artificial intelligence, and messaging that are revolutionizing the way we interact with software.
* [The O'Reilly Data Show Podcast](https://www.oreilly.com/topics/oreilly-data-show-podcast) - Explores the opportunities and techniques driving big data and data science.
* [This Week in Machine Learning & AI](https://twimlai.com/) - Your guide to all that's interesting and important in the world of machine learning and AI.

#### Design

* [Layout](http://layout.fm/) - A weekly podcast about design, technology, programming and everything else.
* [Responsive Web Design Podcast](http://responsivewebdesign.com/podcast/) - A podcast from Karen McGrane and Ethan Marcotte, who interview the people who make responsive designs happen.
* [Tentative](http://tentative.fm/) - A podcast about digital product design, hosted by thoughtbot designers Reda Lemeden & Kyle Fiedler.
* [Through Process](http://www.throughprocess.com/) - A podcast about how we become designers.
#### DevOps

* [AWS Podcast](https://aws.amazon.com/podcasts/aws-podcast/) - Jeff Barr discusses various aspects of the Amazon Web Services (AWS) offering. Each episode includes AWS news, tech tips, and interviews with startups, AWS partners, and AWS employees.
* [DevOps Cafe](http://devopscafe.org/) - In this interview-driven show, John Willis and Damon Edwards take a pragmatic look at the technology, tools, and business developments behind the emerging DevOps movement.
* [Food Fight Show](http://foodfightshow.org/) - The podcast where DevOps chefs do battle.
* [Google Cloud Platform Podcast](https://www.gcppodcast.com/) - Tune in every week to listen to Francesc and Mark discuss Google Cloud Platform!
* [The Cloudcast](http://www.thecloudcast.net/) - Award-winning podcast on all things cloud computing, the AWS ecosystem, open source, DevOps, AppDev, SaaS, and SDN.
* [The Ship Show](http://theshipshow.com/) - Build engineering, DevOps, release management & everything in between!

#### Developer Interviews

* [Away From The Keyboard](http://awayfromthekeyboard.com/category/podcasts/) - A podcast that talks to technologists and tells their stories.
* [Descriptive](http://descriptive.audio/) - Programmer origin stories.
* [Hello World Podcast](https://wildermuth.com/hwpod) - Interviews with your favorite speakers about how they got started!
* [no dogma](http://nodogmapodcast.bryanhogan.net/) - Discussions on software development.
* [Scale Your Code](https://scaleyourcode.com/) - Learn from today's most successful developers and CTOs with in-depth interviews.
* [Software Engineering Daily](http://softwareengineeringdaily.com/) - Daily interviews about technical software topics.
* [Software Engineering Radio](http://www.se-radio.net/) - The podcast for professional software developers.

#### Finance

* [Freakonomics Radio](http://freakonomics.com/archive/) - An award-winning weekly podcast with 8 million downloads per month.
* [Invest Like the Best](http://investorfieldguide.com/podcast/) - In each episode, the host speaks with the most interesting people he can find, whose stories will help you better invest your time and your money, and teach you how to play with boundaries in your own life.
* [Planet Money](https://www.npr.org/sections/money/127413671/finance/) - The economy explained.

#### Gaming

* [Polygon Longform](http://www.polygon.com/longform) - A podcast containing spoken word versions of Polygon's in-depth features and monthly covers.

#### History

* [Dan Carlin's Hardcore History](http://www.dancarlin.com/hardcore-history-series/) - In "Hardcore History" journalist and broadcaster Dan Carlin takes his "Martian", unorthodox way of thinking and applies it to the past.

#### Media

* [The Incomparable](https://www.theincomparable.com/theincomparable/) - A weekly dive into geeky media we love, including movies, books, TV, comics, and more, featuring a rotating panel of guests and hosted by Jason Snell.

#### Microsoft

* [Coding Blocks](http://www.codingblocks.net/) - The podcast and website for learning how to become a better software developer.
* [Ctrl+Click](http://ctrlclickcast.com/) - CTRL+CLICK CAST inspects the web for you!
* [Herding Code](http://herdingcode.com/)
* [MS Dev Show](http://msdevshow.com/) - THE podcast for Microsoft developers.
* [RunAs Radio](http://www.runasradio.com/) - A weekly podcast for IT professionals working with Microsoft products.
* [Yet Another Podcast](http://jesseliberty.com/podcast/)

#### Mobile

* [Android Developers Backstage](http://androidbackstage.blogspot.com/)
* [Fragmented](http://fragmentedpodcast.com/) - An Android developer podcast.
* [iOSBytes](https://iosbytes.codeschool.com/) - The latest news in the iOS community.
* [iPhreaks](https://devchat.tv/iphreaks) - A weekly group discussion about iOS development and related technology by development veterans.

#### Product

* [Giant Robots Smashing into other Giant Robots Podcast](http://giantrobots.fm/) - A weekly show from thoughtbot discussing the business of great products.
* [Product People](http://productpeople.tv/) - A podcast focused on great products and the people who make them.
* [This is Product Management](http://www.thisisproductmanagement.com/) - Interviews brilliant minds across the numerous disciplines that fuel modern product teams.

#### Programming

* [/dev/hell](http://devhell.info/) - Chris Hartjes and Ed Finkler are trapped in Development Hell.
* [Code Podcast](http://codepodcast.com/) - A show about modern software engineering.
* [Coder Radio](http://www.jupiterbroadcasting.com/show/coderradio/)
* [Dev Discussions](http://www.devdiscussions.com/) - Recorded conversations between web-application developers discussing relevant topics.
* [Developer On Fire](http://developeronfire.com/)
* [Developer Tea](https://developertea.com/) - A podcast for developers designed to fit inside your tea break.
* [DevelopersHangout](http://www.developershangout.io/)
* [Full Stack Radio](http://fullstackradio.com/) - A podcast for developers interested in building great software products.
* [Functional Geekery](http://www.functionalgeekery.com/)
* [Hanselminutes](http://hanselminutes.com/) - Fresh air for developers.
* [My Code Is Broken](https://mycodeisbroken.com/)
* [Programming Throwdown](http://www.programmingthrowdown.com/)
* [Reactive](http://reactive.audio/) - A podcast in which we merge, filter, scan and map streams of thoughts and talk about software engineering, culture, and technology.
* [That Podcast](https://thatpodcast.io/) - Beau and Dave talking about life as dads, programmers, and entrepreneurs.
* [The Bike Shed](http://bikeshed.fm/) - Hosts Derek Prior, Sean Griffin, Laila Winner, and guests discuss their development experience and challenges with Ruby, Rails, JavaScript, and whatever else is drawing their attention, admiration, or ire this week.
* [The Changelog](https://changelog.com/) - Open source moves fast. Keep up.
* [The Creative Coding Podcast](http://creativecodingpodcast.com/) - Iain and Seb discuss the ins and outs of programming for creative applications.
* [The Manifest](https://manifest.fm/) - A podcast all about package management.
* [ThoughtWorks](https://www.thoughtworks.com/search?q=podcast&c=sitewide) - A community of passionate individuals whose purpose is to revolutionize software design, creation and delivery, while advocating for positive social change.
* [Turing Incomplete](http://turing.cool/) - A podcast about programming.

#### Programming Languages

##### C#

* [.NET Rocks!](https://www.dotnetrocks.com/) - A weekly talk show for anyone interested in programming on the Microsoft .NET platform.
* [CodeChat](https://channel9.msdn.com/Shows/codechat) - An endless collection of casual conversations with software developers, technologists, gadgeteers, innovators, makers, and more.

##### C++

* [CppCast](http://cppcast.com/) - The only podcast for C++ developers by C++ developers.

##### Clojure

* [The Cognicast](http://blog.cognitect.com/cognicast/) - A podcast by Cognitect, Inc. about software and the people that create it.

##### Elixir

* [The Elixir Fountain](https://soundcloud.com/elixirfountain) - Your weekly podcast for news & interviews from around the Elixir community.

##### Go

* [Go Gab](https://www.briefs.fm/go-gab) - A podcast about everything Go.
* [Go Time](https://changelog.com/gotime/) - A weekly podcast featuring special guests, discussing interesting topics around the Go language, the community, and everything in between.

##### JavaScript

* [FiveJS](https://fivejs.codeschool.com/) - The latest news in the JavaScript community.
* [JS Party](https://changelog.com/jsparty) - A community celebration of JavaScript and the web.
* [JavaScript Jabber](https://devchat.tv/js-jabber) - A weekly discussion about JavaScript, front-end development, community, careers, and frameworks.
* [NodeUp](http://nodeup.com/) - A podcast about Node.js.

##### Kotlin

* [Talking Kotlin](http://talkingkotlin.com/) - A bimonthly podcast on Kotlin and more, hosted by Hadi Hariri.

##### PHP

* [PHP Roundtable](https://www.phproundtable.com/) - The PHP podcast where everyone chimes in.
* [PHP Town Hall](https://phptownhall.com/) - A podcast for developers who want to keep up to date with the latest happenings in the PHP community, with occasional updates about Phil's turtle.
* [The Laravel Podcast](http://www.laravelpodcast.com/) - Brings you Laravel and PHP development news and discussion.
* [Three Devs and a Maybe](http://threedevsandamaybe.com/) - Join us each week as we discuss all things web development.
* [Voices of the elePHPant](https://voicesoftheelephpant.com/) - Meet the people that make the PHP community special.

##### Python

* [Podcast.\_\_init\_\_](http://podcastinit.com/) - A podcast about Python and the people who make it great.
* [Talk Python To Me](https://talkpython.fm/) - A podcast on Python and related technologies.

##### Ruby

* [Ruby5](https://ruby5.codeschool.com/) - The latest news in the Ruby and Rails community.
* [Ruby on Rails Podcast](http://5by5.tv/rubyonrails) - A weekly conversation about Ruby on Rails, Ember.js, open source software, and the programming profession.
* [The Ruby Rogues](https://devchat.tv/ruby-rogues) - A panel discussion about topics relating to programming, careers, community, and Ruby.

##### Rust

* [New Rustacean](http://www.newrustacean.com/) - A podcast about learning Rust.
* [Rusty Radio](https://soundcloud.com/posix4e/sets/rustyradio)

##### Scala

* [The Scalawags](http://scalawags.tv/) - A monthly podcast about the Scala language.

#### Radio

* [99% Invisible](http://99percentinvisible.org/) - Design is everywhere in our lives, perhaps most importantly in the places where we've just stopped noticing.
* [Benjamen Walker's Theory of Everything](https://toe.prx.org/) - Personally connecting the dots. All of them.
* [Radiolab](http://www.radiolab.org/) - Radiolab is a show about curiosity.
* [Slack Variety Pack](https://slack.com/varietypack) - A podcast about work, and the people and teams who do amazing work together.

#### Space

* [Liftoff](https://www.relay.fm/liftoff) - All systems go.
* [NASA in Silicon Valley Podcast](https://www.nasa.gov/ames/NASASiliconValleyPodcast) - An in-depth look at the researchers, scientists, engineers and all-around cool people who work at NASA to push the boundaries of innovation.
* [StarTalk Radio](http://www.startalkradio.net/) - Science meets comedy and pop culture on StarTalk Radio!
#### Stationery

* [The Erasable Podcast](http://www.erasable.us/) - From literature to carpentry, accounting to space travel, the wood-cased pencil and its ancestors have undeniably been at the center of creation and innovation for centuries.
* [The Pen Addict](http://www.relay.fm/penaddict) - The Pen Addict is a weekly fix for all things stationery.

#### Technology

* [a16z](http://a16z.com/podcasts/) - Software Is Eating the World.
* [Accidental Tech Podcast](http://atp.fm/) - A tech podcast we accidentally created while trying to do a car show.
* [Analog(ue)](http://www.relay.fm/analogue) - So many podcasts are about our digital devices. Analog(ue) is a show about how these devices make us feel and how they change our lives for the better, but also for the worse.
* [Ctrl-Walt-Delete](http://www.theverge.com/label/ctrl-walt-delete) - Ctrl-Walt-Delete is a show from The Verge featuring legendary tech reviewer Walt Mossberg and Verge editor-in-chief Nilay Patel.
* [The Vergecast](http://www.theverge.com/label/the-vergecast) - The Vergecast is your source for an irreverent and informative look at what's happening right now (and next) in the world of technology and gadgets.
* [What's Tech](http://www.theverge.com/whatstech) - We live in the future, where drones skim the sky, corporations enter the space race, and smart watches track our every movement. But how? And why?

#### Testing

* [Test Talks](http://joecolantonio.com/testtalks/) - A podcast about software testing and automation.
* [Testing In The Pub](http://testinginthepub.co.uk/testinginthepub/) - Your regular podcast all about software testing and delivery.

#### Web

* [Between Screens](https://soundcloud.com/between-screens)
* [Boagworld](https://boagworld.com/show) - Every Thursday Paul Boag and Marcus Lillington are joined by a variety of guests to discuss a range of web design related topics.
* [Front-end Five](https://frontendfive.codeschool.com/) - All of your front-end news in 5 minutes.
* [Relative Paths](http://relativepaths.uk/) - A UK-based podcast for web industry types.
* [Shop Talk](http://shoptalkshow.com/) - ShopTalk is a podcast about front end web design, development and UX.
* [The Path to Performance](http://pathtoperf.com/) - A podcast for everyone dedicated to making websites faster.
* [The Web Platform Podcast](http://www.thewebplatformpodcast.com/) - A weekly show that dives deep into all things web from the developers building the platform today.
* [TTL Podcast](http://ttlpodcast.com/) - The TTL Podcast, hosted by Rebecca Murphey, features conversations with front-end developers at large organizations about how they do their jobs.
* [Web of Tomorrow](http://www.weboftomorrowpodcast.com/) - A podcast about JavaScript, web development, web design, and technology.

### License

[![CC0](http://i.creativecommons.org/p/zero/1.0/88x31.png)](http://creativecommons.org/publicdomain/zero/1.0/)

To the extent possible under law, [Wayne Ashley Berry](http://wayneashleyberry.com) has waived all copyright and related or neighboring rights to this work.
# CC0 1.0 Universal

## Statement of Purpose

The laws of most jurisdictions throughout the world automatically confer exclusive Copyright and Related Rights (defined below) upon the creator and subsequent owner(s) (each and all, an "owner") of an original work of authorship and/or a database (each, a "Work").

Certain owners wish to permanently relinquish those rights to a Work for the purpose of contributing to a commons of creative, cultural and scientific works ("Commons") that the public can reliably and without fear of later claims of infringement build upon, modify, incorporate in other works, reuse and redistribute as freely as possible in any form whatsoever and for any purposes, including without limitation commercial purposes. These owners may contribute to the Commons to promote the ideal of a free culture and the further production of creative, cultural and scientific works, or to gain reputation or greater distribution for their Work in part through the use and efforts of others.

For these and/or other purposes and motivations, and without any expectation of additional consideration or compensation, the person associating CC0 with a Work (the "Affirmer"), to the extent that he or she is an owner of Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and publicly distribute the Work under its terms, with knowledge of his or her Copyright and Related Rights in the Work and the meaning and intended legal effect of CC0 on those rights.

1. **Copyright and Related Rights**. A Work made available under CC0 may be protected by copyright and related or neighboring rights ("Copyright and Related Rights"). Copyright and Related Rights include, but are not limited to, the following:

   i. the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work;
   ii. moral rights retained by the original author(s) and/or performer(s);
   iii. publicity and privacy rights pertaining to a person's image or likeness depicted in a Work;
   iv. rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below;
   v. rights protecting the extraction, dissemination, use and reuse of data in a Work;
   vi. database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and
   vii. other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof.

2. **Waiver**. To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer's Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each member of the public at large and to the detriment of Affirmer's heirs and successors, fully intending that such Waiver shall not be subject to revocation, rescission, cancellation, termination, or any other legal or equitable action to disrupt the quiet enjoyment of the Work by the public as contemplated by Affirmer's express Statement of Purpose.

3. **Public License Fallback**. Should any part of the Waiver for any reason be judged legally invalid or ineffective under applicable law, then the Waiver shall be preserved to the maximum extent permitted taking into account Affirmer's express Statement of Purpose. In addition, to the extent the Waiver is so judged Affirmer hereby grants to each affected person a royalty-free, non transferable, non sublicensable, non exclusive, irrevocable and unconditional license to exercise Affirmer's Copyright and Related Rights in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "License"). The License shall be deemed effective as of the date CC0 was applied by Affirmer to the Work. Should any part of the License for any reason be judged legally invalid or ineffective under applicable law, such partial invalidity or ineffectiveness shall not invalidate the remainder of the License, and in such case Affirmer hereby affirms that he or she will not (i) exercise any of his or her remaining Copyright and Related Rights in the Work or (ii) assert any associated claims and causes of action with respect to the Work, in either case contrary to Affirmer's express Statement of Purpose.

4. **Limitations and Disclaimers**.

   a. No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.
   b. Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work, express, implied, statutory or otherwise, including without limitation warranties of title, merchantability, fitness for a particular purpose, non infringement, or the absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not discoverable, all to the greatest extent permissible under applicable law.
   c. Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person's Copyright and Related Rights in the Work. Further, Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work.
   d. Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work.

For more information, please see <http://creativecommons.org/publicdomain/zero/1.0/>
# How to develop this package

If you are not @fonsp and you are interested in developing this, get in touch!

### Step 1 (only once)

Clone this repo to say `~/PlutoSliderServer.jl/`. Clone Pluto.jl to say `~/Pluto.jl/`. Create a new VS Code session and add both folders.

You are interested in these files:

- `Pluto.jl/frontend/components/Editor.js` — search for `use_slider_server`
- `Pluto.jl/frontend/common/PlutoHash.js`
- `PlutoSliderServer.jl/src/PlutoSliderServer.jl`
- `PlutoSliderServer.jl/src/MoreAnalysis.jl`
- `PlutoSliderServer.jl/test/runtestserver.jl`

### Step 2 (only once)

```julia
julia> ]
pkg> dev ~/PlutoSliderServer.jl
pkg> dev ~/Pluto.jl
```

### Step 3 (every time)

You can run the bind server like so:

```
bash> cd PlutoSliderServer.jl
bash> julia --project -e "import Pkg; Pkg.instantiate()"
bash> julia --project test/runtestserver.jl
```

Edit the `runtestserver.jl` file to suit your needs. The bind server will start running on port 2341. It can happen that HTTP.jl does a goof and the port becomes unavailable until you reboot. Edit `runtestserver.jl` to change the port.

### Step 4 (every time)

If you run the slider server using `runtestserver.jl`, it will also run a static HTTP server for the exported files on the same port. E.g. the export for `test/dir1/a.jl` will be available at `localhost:2345/test/dir1/a.html`.

Go to `localhost:2345/test/dir1/a.html`. Pluto's assets are also being served over this server, so you can edit them and refresh.
name = "PlutoSliderServer"
uuid = "2fc8631c-6f24-4c5b-bca7-cbb509c42db4"
authors = ["Fons van der Plas <fons@plutojl.org>"]
version = "0.3.26"

[deps]
AbstractPlutoDingetjes = "6e696c72-6542-2067-7265-42206c756150"
Base64 = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
BetterFileWatching = "c9fd44ac-77b5-486c-9482-9798bd063cc6"
Configurations = "5218b696-f38b-4ac9-8b61-a12ec717816d"
Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
FromFile = "ff7dd447-1dcb-4ce3-b8ac-22a812192de7"
Git = "d7ba0133-e1db-5d97-8f8c-041e4b3a1eb2"
GitHubActions = "6b79fd1a-b13a-48ab-b6b0-aaee1fee41df"
Glob = "c27321d9-0574-5035-807b-f59d2c89b15c"
HTTP = "cd3eb016-35fb-5094-929b-558a96fad6f3"
JSON = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
Logging = "56ddb016-857b-54e1-b83d-db4d58db5568"
Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
Pluto = "c3e4b0f8-55cb-11ea-2926-15256bba5781"
SHA = "ea8e919c-243c-51af-8825-aaa63cd721ce"
Sockets = "6462fe0b-24de-5631-8697-dd941f90decc"
TOML = "fa267f1f-6049-4f14-aa54-33bafae1ed76"
TerminalLoggers = "5d786b92-1e48-4d6f-9151-6b4477ca9bed"
UUIDs = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"

[compat]
AbstractPlutoDingetjes = "1.1"
BetterFileWatching = "^0.1.2"
Configurations = "0.16, 0.17"
FromFile = "0.1"
Git = "1"
GitHubActions = "0.1"
Glob = "1"
HTTP = "^1.0.2"
JSON = "0.21"
Pluto = "0.19.18"
TerminalLoggers = "0.1"
julia = "1.6"

[extras]
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["Test", "Random"]
# PlutoSliderServer.jl

> _**not just sliders!**_

Web server to run just the `@bind` parts of a [Pluto.jl](https://github.com/fonsp/Pluto.jl) notebook.

See it in action at [computationalthinking.mit.edu](https://computationalthinking.mit.edu/)! Sliders, buttons and camera inputs work _instantly_, without having to wait for a Julia process.

[![](https://data.jsdelivr.com/v1/package/gh/fonsp/Pluto.jl/badge)](https://www.jsdelivr.com/package/gh/fonsp/Pluto.jl)

# Try it out

```julia
using PlutoSliderServer

path_to_notebook = download("https://raw.githubusercontent.com/fonsp/Pluto.jl/v0.17.2/sample/Interactivity.jl")
# fill in your own notebook path here!

PlutoSliderServer.run_notebook(path_to_notebook)
```

Now open a browser, and go to the address printed in your terminal!

# What can it do?

## 1. HTML export

PlutoSliderServer can **run a notebook** and generate the **export HTML** file. This will give you the same file as the export button inside Pluto (top right), but automatically, without opening a browser.

One use case is to automatically create a **GitHub Pages site from a repository with notebooks**. For this, take a look at [our template repository](https://github.com/JuliaPluto/static-export-template) that uses GitHub Actions and PlutoSliderServer to generate a website on every commit.

### Example

```julia
PlutoSliderServer.export_notebook("path/to/notebook.jl")
# will create a file `path/to/notebook.html`
```

## 2. Run a slider server

The main functionality of PlutoSliderServer is to run a ***slider server***. This is a web server that **runs a notebook using Pluto**, and allows visitors to **change the values of `@bind`-ed variables**.

The important **differences** between running a *slider server* and running Pluto with public access are:

- A *slider server* can only set `@bind` values; it is not possible to change the notebook's code.
- A *slider server* is **stateless**: it does not keep track of user sessions. Every request to a slider server is an isolated HTTP `GET` request, while Pluto maintains a WebSocket connection.
- Pluto synchronizes everything between all connected clients in realtime. The *slider server* does the opposite: all 'clients' are **disconnected**, they don't see the `@bind` values or state of others.

> **To learn more, watch the [PlutoCon 2020 presentation about how PlutoSliderServer works](https://www.youtube.com/watch?v=QZ3xlKm92tk)**.

### Example

```julia
PlutoSliderServer.run_notebook("path/to/notebook.jl")
# will run a slider server for this notebook
```

## 3. _(WIP): Precomputed slider server_

Many input elements only have a finite number of possible values; for example, `PlutoUI.Slider(5:15)` can only have 11 values. For finite inputs like the slider, PlutoSliderServer can run the slider server **in advance**, and precompute the results to all possible inputs (in other words: precompute the response to all possible requests).

This will generate a directory of subdirectories and files, each corresponding to a possible request. You can host this directory along with the generated HTML file (e.g. on GitHub Pages), and Pluto will be able to use these pregenerated files as if they are a slider server! **You can get the interactivity of a slider server, without running a Julia server!**

#### Combinatorial explosion

We use the *bond connections graph* to understand which bound variables are co-dependent, and which are disconnected. For all groups of co-dependent variables, we precompute all possible combinations of their values. This allows us to **tame the 'combinatorial explosion'** that you would get when considering all possible combinations of all bound variables! If two variables are 'disconnected', then we don't need to consider possible *combinations* between them.
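The size of that saving can be sketched with a little arithmetic. The snippet below is illustrative only (it is not the PlutoSliderServer API): it assumes two co-dependent sliders with 10 and 5 possible values, plus one disconnected slider with 100 values, and compares the number of combinations to precompute with and without the bond connections graph.

```julia
# Illustrative sketch (not the PlutoSliderServer API).
# Each inner vector is a group of co-dependent bound variables,
# listed by their number of possible values.
groups = [[10, 5], [100]]

# With the bond connections graph: each group is precomputed separately.
with_graph = sum(prod, groups)                   # 10*5 + 100 = 150

# Without it: every combination of every bound variable.
without_graph = prod(Iterators.flatten(groups))  # 10*5*100 = 5000

@assert with_graph == 150
@assert without_graph == 5000
```

So here the graph reduces the precomputation from 5000 requests to 150.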
> This part is still work-in-progress: https://github.com/JuliaPluto/PlutoSliderServer.jl/pull/29

## Directories

All of the functionality above can also be used on all notebooks in a directory. PlutoSliderServer will scan a directory recursively for notebook files. See `PlutoSliderServer.export_directory` and `PlutoSliderServer.run_directory`.

### Watching a directory

After scanning a directory for notebook files, you can ask Pluto to continue watching the directory for changes. When notebook files are added/removed, they are also added/removed from the server. When a notebook file changes, the notebook session is restarted.

This works especially well when this directory is a git-tracked directory. When running in a git directory, PlutoSliderServer can keep `git pull`ing the directory, updating from the repository automatically. See the `SliderServer_watch_dir` option and `PlutoSliderServer.run_git_directory`.

#### Continuous Deployment

The result is a *Continuous Deployment* setup: you can set up your PlutoSliderServer on a dedicated server running online, synced with your repository on GitHub. You can then update the repository, and the PlutoSliderServer will update automatically.

The alternative is to redeploy the entire server every time a notebook changes. We found that this setup works fairly well, but causes long downtimes whenever a notebook changes, because all notebooks need to re-run. This can be a problem if your project consists of many notebooks, and they change frequently. See `PlutoSliderServer.run_git_directory`.

# How does it work?

> [PlutoCon 2020 presentation about how PlutoSliderServer works](https://www.youtube.com/watch?v=QZ3xlKm92tk)

## Bond connections graph

A crucial idea in the PlutoSliderServer is the *bond connections graph*. This is a bit of a mathematical adventure, and I tried my best to explain it **in the [PlutoCon 2020 presentation about how PlutoSliderServer works](https://www.youtube.com/watch?v=QZ3xlKm92tk)**.
Here is another explanation in text:

### Example notebook

Let's take a look at this simple notebook:

```julia
@bind x Slider(1:10)
@bind y Slider(1:5)

x + y

@bind z Slider(1:100)

"Hello $(z)!"
```

We have three **bound variables**: `x`, `y` and `z`. When analyzed by Pluto, we find the dependencies between cells: `1 -> 3`, `2 -> 3`, `4 -> 5`. This means that, as a graph, the last two cells are completely disconnected from the rest of the graph. Our *bond connections graph* will capture this idea.

### Procedure

For each bound variable, we use Pluto's reactivity graph to know:

1. Which cells depend on the bound variable?
2. Which bound variables are (indirect) dependencies of any cell from (1)? These are called the co-dependencies of the bound variable.

In our example, `x` influences the result of `x + y`, which depends on `y`. So `x` and `y` are the co-dependencies of `x`. Variable `z` influences `"Hello $(z)!"`, which does not have `x` or `y` as dependencies. So `z` is *not* co-dependent with `x` or with `y`.

This forms a dictionary, which looks like:

```julia
Dict(
    :x => [:x, :y],
    :y => [:x, :y],
    :z => [:z],
)
```

For more examples, take a look at [this notebook](https://github.com/JuliaPluto/PlutoSliderServer.jl/blob/v0.2.6/test/parallelpaths4.jl), which has [this bond connection graph](https://github.com/JuliaPluto/PlutoSliderServer.jl/blob/v0.2.6/test/connections.jl#L28-L43).

### Application in the slider server

Now, whenever you send the value of a bound variable `x` to the slider server, you *also have to send the values of the co-dependencies of `x`*, which are `x` and `y` in our example. By sending both, you are sending all the information that is needed to fully determine the dependent cells.

### Application in the precomputed slider server

Like the regular slider server, we use the *bond connections graph*, which tells us which bound variables are co-dependent. This allows us to **tame the 'combinatorial explosion'** that you would get when considering all possible combinations of all bound variables! If two variables are 'disconnected', then we don't need to consider possible *combinations* between them.

In our example notebook, there are `10 (x) * 5 (y) + 100 (z) = 150` combinations to precompute. Without considering the connections graph, there would be `10 (x) * 5 (y) * 100 (z) = 5000` possible combinations.

# How to use this package

As PlutoSliderServer embeds so much functionality, it may be confusing to figure out how to approach your setting. Here is an overview of our most important functions:

- `export_directory` will find all notebooks in a directory, run them, and generate HTML files. *(`export_notebook` for a single file.)* One example use case is https://github.com/JuliaPluto/static-export-template
- `run_directory` does the same as `export_directory`, but it **keeps the notebooks running** and runs the slider server! It will also watch the given directory for changes to notebook files, and automatically update the slider server. *(`run_notebook` for a single file.)*
- `run_git_directory` does the same as `run_directory`, but it will keep running `git pull` in the given directory. Any changes will get picked up by our directory watching!

## Configuration

PlutoSliderServer is very configurable, and we use [Configurations.jl](https://github.com/Roger-luo/Configurations.jl) to configure the server. We try our best to be smart about the default settings, and we hope that most users do not need to configure anything.

There are two ways to change configurations: using keyword arguments, and using a `PlutoDeployment.toml` file.

### 1. Keyword arguments

Our functions can take keyword arguments, for example:

```julia
run_directory("my_notebooks";
    SliderServer_port=8080,
    SliderServer_host="0.0.0.0",
    Export_baked_notebookfile=false,
)
```

> 🌟 For the full list of options, see the documentation for the function you are using. For example, in the Julia REPL, run `?run_directory`.

### 2. `PlutoDeployment.toml`

If you are using a package environment for your slider server (if you are deploying it on a server, you probably should), then you can also use a TOML file to configure PlutoSliderServer. In the same folder where you have your `Project.toml` and `Manifest.toml` files, create a third file, called `PlutoDeployment.toml`. Its contents should look something like:

```toml
[Export]
baked_notebookfile = true

[SliderServer]
port = 8080
host = "0.0.0.0"

# You can also set Pluto's configuration here:
[Pluto]
[Pluto.compiler]
threads = 2

# See the documentation for `Pluto.Configuration` for the full list of options.
# You need to specify the categories within `Pluto.Configuration.Options` (`compiler`, `evaluation`, etc).
```

> 🌟 For the full list of options, run `PlutoSliderServer.show_sample_config_toml_file()`.

Our functions will look for the existence of a file called `PlutoDeployment.toml` in the active package environment, and use it automatically. You can also combine the two configuration methods: keyword options and TOML options will be merged, with the former taking precedence.

# Sample setup: Given a repository, start a PlutoSliderServer to serve static exports with live preview

These instructions set up a slider server on a dedicated server, which automatically synchronises with a git repository containing the notebook files. Make sure to create one before we start.

> _Disclaimer: This is work in progress, there might be holes!_

## Part 1: setup and running locally

### 1. Initialize

Create a folder called `pluto-slider-server-environment` with the `Project.toml` and `Manifest.toml` for the PlutoSliderServer. (Not the notebooks — the notebooks should contain their own package environment.)

```shell
$ cd <your-repository-with-notebooks>
$ mkdir pluto-slider-server-environment
$ julia --project=pluto-slider-server-environment

julia> ]
pkg> add Pluto PlutoSliderServer
```

### 2. Configuration file

Create a configuration file in the same folder as `Project.toml`; see the section about `PlutoDeployment.toml` above.

```shell
TEMPFILE=$(mktemp)
cat > $TEMPFILE << __EOF__
[SliderServer]
port = 8080
host = "0.0.0.0"

# more configuration can go here!
__EOF__

sudo mv $TEMPFILE pluto-slider-server-environment/PlutoDeployment.toml
```

This configuration sets the port to `8080` (not `80`, which requires sudo), and the host to `"0.0.0.0"` (which allows traffic from outside the computer, unlike the default `"127.0.0.1"`).

### 3. Run it

Let's try running it locally before setting up our server:

```shell
julia --project="pluto-slider-server-environment" -e "import PlutoSliderServer; PlutoSliderServer.run_git_directory(\".\")"
```

`run_git_directory` will periodically call `git pull`, which requires the `start_dir` to be a repository in which you can `git pull` without a password (which means it is either public, or you have the required keys in `~/.ssh/` and registered on your git provider's security page!).

> **Note**
> Julia by default uses `libgit2` for git operations, [which can be problematic](https://github.com/JuliaLang/Pkg.jl/issues/2679). It is also known to cause issues in cloud environments like AWS's CodeCommit, where re-authentication is required at regular intervals.
>
> A simple workaround is to set the `JULIA_PKG_USE_CLI_GIT` environment variable to `true`, which falls back to the system git (the one on the shell). Make sure that this is installed! (`sudo apt-get install git` does the trick on Ubuntu.)
Also note that `git pull` may fail on the server if you force push the branch from your laptop, so handle history-rewriting commands like `git push -f`, `git rebase` etc. with care!

## Part 2: setting up the web server

For this step, we'll assume a very specific but also common setup:

- Ubuntu-based server with `apt-get`, `git`, `vim` and internet
- access through SSH
- root access
- port 80 is open to the web

The easiest way to get this is to **rent a server** from digitalocean.com, AWS, Google Cloud, etc. This setup was tested with digitalocean.com, which has the easiest interface for beginners.

> ### Required memory, disk space, CPU power
> When renting a server, you need to decide which "droplet size" you want. The bottleneck is memory – CPU power and disk space will always be sufficient. As a minimum, you need `500MB + 300MB * length(notebooks)`. But if you use large packages, like Plots or DifferentialEquations, a notebook might need 1000MB of memory.
>
> There is no minimum requirement on CPU power, but it does have a big impact on *launch time* and *responsiveness*. We found that DigitalOcean "dedicated CPU" is noticeably faster (more than 2x) in both areas than "shared CPU".
>
> It is really important to make sure that you will be able to **resize your server later**, adding/removing memory as needed, to minimize your costs. For DigitalOcean, we have a specific tip: *always **start** with the smallest possible droplet (512MB or 1000MB), and then resize memory/CPU to fit your needs, without resizing the disk*. When resizing, DigitalOcean does not allow *shrinking* the disk size.

### 0. Update packages

```shell
sudo apt-get update
sudo apt-get upgrade
```

You should run `systemd --version` to verify that we have version 230 or higher.

### 1. Install Julia (run as root)

```shell
# You can edit me: The Julia version (1.8.0) split into three parts:
JULIA_MAJOR_VERSION=1
JULIA_MINOR_VERSION=8
JULIA_PATCH_VERSION=0

JULIA_VERSION="$(echo $JULIA_MAJOR_VERSION).$(echo $JULIA_MINOR_VERSION).$(echo $JULIA_PATCH_VERSION)"

wget https://julialang-s3.julialang.org/bin/linux/x64/$(echo $JULIA_MAJOR_VERSION).$(echo $JULIA_MINOR_VERSION)/julia-$(echo $JULIA_VERSION)-linux-x86_64.tar.gz
tar -xvzf julia-$JULIA_VERSION-linux-x86_64.tar.gz
rm julia-$JULIA_VERSION-linux-x86_64.tar.gz
sudo ln -s `pwd`/julia-$JULIA_VERSION/bin/julia /usr/local/bin/julia
```

### 2. Get your repository

```shell
git clone https://github.com/<user>/<repo-with-notebooks>
cd <repo-with-notebooks>
git pull
```

### 3. Create a service

```shell
TEMPFILE=$(mktemp)
cat > $TEMPFILE << __EOF__
[Unit]
After=network.service
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
ExecStart=/usr/local/bin/pluto-slider-server.sh
Restart=always
RestartSec=5
User=$(whoami)
Group=$(id -gn)

[Install]
WantedBy=default.target
__EOF__

sudo mv $TEMPFILE /etc/systemd/system/pluto-server.service
```

This script uses `whoami` and `id -gn` to automatically insert your username and group name into the configuration file. We want to run the PlutoSliderServer as your user, not as root.

### 4. Create the startup script

```shell
TEMPFILE=$(mktemp)
cat > $TEMPFILE << __EOF__
#!/bin/bash

# this env var allows us to side-step various issues with the Julia-bundled git
export JULIA_PKG_USE_CLI_GIT=true

cd /home/<your-username>/<your-repo>
# Make sure to change to the absolute path to your repository. Don't use ~.

julia --project="pluto-slider-server-environment" -e "import Pkg; Pkg.instantiate(); import PlutoSliderServer; PlutoSliderServer.run_git_directory(\".\")"
__EOF__

sudo mv $TEMPFILE /usr/local/bin/pluto-slider-server.sh
```

### 5. Permissions stuff

```shell
sudo chmod 744 /usr/local/bin/pluto-slider-server.sh
sudo chmod 664 /etc/systemd/system/pluto-server.service
```

### 6. Start & enable

```shell
sudo systemctl daemon-reload
sudo systemctl start pluto-server
sudo systemctl enable pluto-server
```

Tip: If you need to change the service file or the startup script later, re-run this step to update the daemon.

### 7. View logs

```shell
# To see quick status (running/failed and memory):
systemctl -l status pluto-server

# To browse past logs:
sudo journalctl --pager-end -u pluto-server

# To see logs coming in live:
sudo journalctl --follow -u pluto-server
```

### 8. Server available

TODO

### 9. Live updates

When you change the notebooks in the git repository, your server will automatically update (it keeps calling `git pull`)! Awesome!

If the configuration file (`PlutoDeployment.toml`) changes, PlutoSliderServer will detect a change in configuration and shut down. Because we set up our service using `systemctl`, the server will automatically restart (with the new settings)!

## Part 3: port, domain name, https

The default settings will serve Pluto on the IP address of your server, over `http` (not `https`), on port 8080 (not 80 or 443). Normally, websites are available on a domain name, over https, on the default port (80 for http, 443 for https), e.g. `https://plutojl.org/`. Here's how you get there! If you use a server managed by your university/company, ask your system administrator how to achieve these steps.

### 1. Domain name

You need to buy a domain name, and get access to the DNS settings. Set an "A record" that points to your IP address. You can now access your PlutoSliderServer at `http://mydomain.org:8080/`. Nice!

### 2. Port 80

We don't want everyone to add `:8080` to the URL! The default port for http is 80, so we want our website to be available at port 80. The tricky thing is: we don't want to run PlutoSliderServer directly on port 80, because this requires `sudo` privileges for running `julia`. We want to avoid this because we don't want `julia` to read/write files as `root` (this would mess up your git directory).
The solution is to run PlutoSliderServer on port 8080, and use a separate server (running as root) to redirect traffic from port 80 to port 8080. We use `nginx` for that!

```shell
sudo apt install nginx
```

nginx is now installed, and it is configured to run at startup. Let's configure nginx as a redirect from port 80 to port 8080:

```shell
TEMPFILE=$(mktemp)

cat > $TEMPFILE << __EOF__
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        proxy_pass http://localhost:8080;
    }
}
__EOF__

sudo mv $TEMPFILE /etc/nginx/sites-available/default
```

After changing the configuration, restart nginx:

```shell
sudo systemctl restart nginx
```

### 3. HTTPS

The easiest way to get https is to use Cloudflare. Register an account, set up your domain, use their DNS, and enable the "Always HTTPS" service. (Cloudflare is also very useful for caching! This will make your PlutoSliderServer faster.)

Alternatively, you can set up HTTPS yourself with `nginx` and Let's Encrypt, but this is beyond the scope of this tutorial. 💛

Now, your service should be available at `https://yourdomain.org/`. Nice!

# Similar/alternative packages

## Generating HTML exports

There are many packages that *evaluate literate Julia documents to generate HTML or PDF output*!

The most similar project is [PlutoStaticHTML.jl](https://github.com/rikhuijzer/PlutoStaticHTML.jl). This package generates **static** HTML files from Pluto notebooks, meaning that they do not require JavaScript to load: cell inputs and outputs are stored directly as HTML.

(PlutoSliderServer.jl uses the same technique as the "Export to HTML" button inside Pluto: an HTML file is generated with no contents, but with an embedded data stream containing the *editor state*. This HTML file loads Pluto's JS assets and displays this state just like the editor would.) This means that the output of PlutoSliderServer.jl will look exactly the same as what you see while writing the notebook.
Output from PlutoStaticHTML.jl is more minimal, which means that it loads faster, it can be styled with CSS, and it can more easily be embedded within other web pages (like Documenter.jl sections).

Other Julia packages which export to HTML/PDF, but not necessarily with Pluto notebook files as input, include:

- Documenter.jl
- Franklin.jl
- Books.jl
- Weave.jl

## Slider server

PlutoSliderServer is the only package that lets you run a *slider server* for Pluto notebooks (an interactive site to interact with a Pluto notebook through `@bind`).

There are alternatives for running a Julia-backed interactive site if your code is *not* a Pluto notebook, including [JSServe.jl](https://github.com/SimonDanisch/JSServe.jl), [Stipple.jl](https://github.com/GenieFramework/Stipple.jl) and [Dash.jl](https://github.com/plotly/Dash.jl), each with their own philosophy and ideal use case. *(Feel free to suggest others!)*

## Precomputed slider server

[PlutoStaticHTML.jl](https://github.com/rikhuijzer/PlutoStaticHTML.jl) should also have this feature in the future, after it is added to PlutoSliderServer (it is still [being worked on](https://github.com/JuliaPluto/PlutoSliderServer.jl/pull/29)).

If your code is *not* a Pluto notebook, then [JSServe.jl](https://github.com/SimonDanisch/JSServe.jl) also has precomputing abilities, with a different approach and philosophy.

# Authentication and security

Since this server is a new and experimental concept, we highly recommend that you run it inside an isolated environment. While visitors are not able to change the notebook code, it is possible to manipulate the API to set bound values to arbitrary objects. For example, when your notebook uses `@bind x Slider(1:10)`, the API could be used to set `x` to `9000`, `[10,20,30]` or `"👻"`.
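For a concrete sense of how little effort such a request takes: the slider server also accepts `GET` state requests where the bond values are base64url-encoded directly in the URL (see the router source in this repository). The encoding step can be reproduced in plain shell. This is only a sketch: the real server expects a MessagePack-encoded payload (`Pluto.pack`), JSON is used here purely to keep the example readable, and the notebook hash in the commented request is a placeholder.

```shell
#!/bin/sh
# base64url-encode a bond payload: standard base64, then '+/' -> '-_'
# and padding stripped.
# NOTE: JSON for readability only; the real server expects MessagePack.
payload='{"x":9000}'
encoded=$(printf '%s' "$payload" | base64 | tr '+/' '-_' | tr -d '=')
echo "$encoded"
# → eyJ4Ijo5MDAwfQ

# A request would then look like (placeholder notebook hash):
# curl "http://localhost:8080/staterequest/<notebook-hash>/$encoded"
```

Nothing in this encoding restricts the value to what the `Slider` widget offers, which is exactly the problem described above.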
In the future, we are planning to implement a hook that allows widgets (such as `Slider`) to validate a value before it is run: [`AbstractPlutoDingetjes.Bonds.validate_value`](https://docs.juliahub.com/AbstractPlutoDingetjes/UHbnu/1.1.1/#AbstractPlutoDingetjes.Bonds.validate_value-Tuple{Any,%20Any}).

Of course, we are not security experts, and this software does not come with any kind of security guarantee. To be completely safe, assume that someone who can visit the server can execute arbitrary code in the notebook, despite our measures to prevent it. Run PlutoSliderServer in a containerized environment.
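One way to follow that advice is a container image. The following is a minimal sketch, not an official image: it assumes the repository layout from Part 2 (a `pluto-slider-server-environment` subfolder holding the Project.toml/Manifest.toml), the `julia:1.8` base image tag is illustrative, and `run_directory` is used instead of `run_git_directory` because a `COPY`-ed folder has no git remote to pull from.

```dockerfile
# Sketch of a container for PlutoSliderServer.
# Assumptions: repo layout from Part 2; Julia 1.8 base image. Adjust to your setup.
FROM julia:1.8

# Run as an unprivileged user, mirroring the "not as root" advice above.
RUN useradd --create-home pluto
USER pluto
WORKDIR /home/pluto/notebooks

# Copy your notebook repository (including pluto-slider-server-environment/).
COPY --chown=pluto . .

# Pre-instantiate the environment so container startup is fast.
RUN julia --project=pluto-slider-server-environment -e "import Pkg; Pkg.instantiate()"

EXPOSE 8080
CMD ["julia", "--project=pluto-slider-server-environment", "-e", "import PlutoSliderServer; PlutoSliderServer.run_directory(\".\")"]
```

Build and run with `docker build` / `docker run -p 8080:8080`, and put the nginx setup from Part 3 in front of it as before.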
{ "repo_name": "JuliaPluto/PlutoSliderServer.jl", "stars": "114", "repo_language": "Julia", "file_name": "c.plutojl", "mime_type": "text/plain" }
name: CompatHelper
on:
  schedule:
    - cron: 0 0 * * *
  workflow_dispatch:
jobs:
  CompatHelper:
    runs-on: ubuntu-latest
    steps:
      - name: "Add the General registry via Git"
        run: |
          import Pkg
          ENV["JULIA_PKG_SERVER"] = ""
          Pkg.Registry.add("General")
        shell: julia --color=yes {0}
      - name: "Install CompatHelper"
        run: |
          import Pkg
          name = "CompatHelper"
          uuid = "aa819f21-2bde-4658-8897-bab36330d9b7"
          version = "3"
          Pkg.add(; name, uuid, version)
        shell: julia --color=yes {0}
      - name: "Run CompatHelper"
        run: |
          import CompatHelper
          CompatHelper.main()
        shell: julia --color=yes {0}
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          COMPATHELPER_PRIV: ${{ secrets.COMPATHELPER_PRIV }}
# This GitHub action checks whether a new version of this package
# has been registered in the official registry.
# If so, it creates a new _release_:
# https://github.com/JuliaPluto/PlutoSliderServer.jl/releases

name: TagBot
on:
  issue_comment:
    types:
      - created
  workflow_dispatch:
    inputs:
      lookback:
        default: 3
permissions:
  actions: read
  checks: read
  contents: write
  deployments: read
  issues: read
  discussions: read
  packages: read
  pages: read
  pull-requests: read
  repository-projects: read
  security-events: read
  statuses: read
jobs:
  TagBot:
    if: github.event_name == 'workflow_dispatch' || github.actor == 'JuliaTagBot'
    runs-on: ubuntu-latest
    steps:
      - uses: JuliaRegistries/TagBot@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
name: Julia tests

on:
  workflow_dispatch:
  push:
    paths-ignore:
      - '**.md'
    branches:
      - main
  pull_request:
    paths-ignore:
      - '**.md'

jobs:
  test:
    runs-on: ${{ matrix.os }}
    timeout-minutes: 15

    # https://github.com/JuliaRegistries/General/issues/16777
    env:
      JULIA_PKG_SERVER: ''

    # Uncomment if you want to see all results for all OSes. Otherwise, the first failed test cancels all others
    continue-on-error: true

    strategy:
      fail-fast: false
      matrix:
        julia-version: ['1.6', '1']
        os: [ubuntu-latest]

    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3

      - uses: julia-actions/setup-julia@v1
        with:
          version: ${{ matrix.julia-version }}

      - uses: julia-actions/cache@v1
      - uses: julia-actions/julia-runtest@v1
import Pluto import Pluto: Cell, Notebook, NotebookTopology, ExpressionExplorer, ServerSession "Return the given cells, and all cells that depend on them (recursively)." function downstream_recursive( notebook::Notebook, topology::NotebookTopology, from::Union{Vector{Cell},Set{Cell}}, ) found = Set{Cell}(copy(from)) downstream_recursive!(found, notebook, topology, from) found end function downstream_recursive!( found::Set{Cell}, notebook::Notebook, topology::NotebookTopology, from::Vector{Cell}, ) for cell in from one_down = Pluto.where_referenced(notebook, topology, cell) for next in one_down if next ∉ found push!(found, next) downstream_recursive!(found, notebook, topology, Cell[next]) end end end end "Return all cells that are depended upon by any of the given cells." function upstream_recursive( notebook::Notebook, topology::NotebookTopology, from::Union{Vector{Cell},Set{Cell}}, ) found = Set{Cell}(copy(from)) upstream_recursive!(found, notebook, topology, from) found end function upstream_recursive!( found::Set{Cell}, notebook::Notebook, topology::NotebookTopology, from::Vector{Cell}, ) for cell in from references = topology.nodes[cell].references for upstream in Pluto.where_assigned(notebook, topology, references) if upstream ∉ found push!(found, upstream) upstream_recursive!(found, notebook, topology, Cell[upstream]) end end end end "All cells that can affect the outcome of changing the given variable." function codependents(notebook::Notebook, topology::NotebookTopology, var::Symbol) assigned_in = filter(notebook.cells) do cell var ∈ topology.nodes[cell].definitions end downstream = collect(downstream_recursive(notebook, topology, assigned_in)) downupstream = upstream_recursive(notebook, topology, downstream) end "Return a `Dict{Symbol,Vector{Symbol}}` where the _keys_ are the bound variables of the notebook. 
For each key (a bound symbol), the value is the list of (other) bound variables whose values need to be known to compute the result of setting the bond." function bound_variable_connections_graph(session::ServerSession, notebook::Notebook)::Dict{Symbol,Vector{Symbol}} topology = notebook.topology bound_variables = Pluto.get_bond_names(session, notebook) Dict{Symbol,Vector{Symbol}}( var => let cells = codependents(notebook, topology, var) defined_there = union!( Set{Symbol}(), (topology.nodes[c].definitions for c in cells)..., ) # Set([var]) ∪ collect((defined_there ∩ bound_variables)) end for var in bound_variables ) end
begin struct KWDefField name::Symbol typename::Union{Nothing,Some} default::Union{Nothing,Some} docstring::Union{Nothing,Some} end function KWDefField(expr::Expr, docstring) default, expr = if Meta.isexpr(expr, :(=)) Some(expr.args[2]), expr.args[1] else nothing, expr end typename, expr = if Meta.isexpr(expr, :(::)) Some(expr.args[2]), expr.args[1] else nothing, expr end @assert expr isa Symbol KWDefField( expr, typename, default, docstring isa Nothing ? nothing : Some(docstring), ) end KWDefField(expr::Symbol, docstring) = KWDefField( expr, nothing, nothing, docstring isa Nothing ? nothing : Some(docstring), ) end function get_kwdocs end function list_options_md( t::Type; prefix::Union{Nothing,String}=nothing, hide_undocumented_fields::Bool=true, ) fields = get_kwdocs(t) fields = hide_undocumented_fields ? filter(f -> f.docstring !== nothing, fields) : fields lines = map(fields) do field "- `$( prefix === nothing ? "" : "$(prefix)_" )$( field.name )$( field.typename === nothing ? "" : "::$(something(field.typename))" )$( field.default === nothing ? "" : " = $(something(field.default))" )`$( field.docstring === nothing ? "" : ": $(something(field.docstring))" )" end join(lines, "\n") end function list_options_toml(t::Type; hide_undocumented_fields::Bool=true) fields = get_kwdocs(t) fields = hide_undocumented_fields ? filter(f -> f.docstring !== nothing, fields) : fields lines = map(fields) do field "$( # TOML has no `nothing`-like value, so we comment out the line: field.default === nothing || something(field.default) === :nothing ? "# " : "" )$( field.name )$( field.default === nothing ? "" : " = $(show_value(something(field.default)))" ) # $( field.typename === nothing ? "" : "($(something(field.typename))) " )$( field.docstring === nothing ? 
"" : "$(something(field.docstring))" )" end join(lines, "\n\n") end function show_value(e) if Meta.isexpr(e, :ref) "[]" elseif e isa String repr(e) else string(e) end end this_mod = @__MODULE__ macro extract_docs(raw_expr::Expr) struct_def = let local e = raw_expr while e.head != :struct e = e.args[end] end e end struct_name = struct_def.args[2] found = collect_field_info(struct_def) return quote result = $(esc(raw_expr)) m = $(this_mod) m.get_kwdocs(::Type{$(esc(struct_name))}) = $(found) result end end function collect_field_info(struct_def::Expr)::Vector{KWDefField} struct_lines = filter(s -> !isa(s, LineNumberNode), struct_def.args[3].args) last_docstring = nothing found = KWDefField[] for line in struct_lines if line isa String last_docstring = line else push!(found, KWDefField(line, last_docstring)) last_docstring = nothing end end found end
using FromFile import Pluto: Pluto, without_pluto_file_extension @from "./Configuration.jl" import PlutoDeploySettings @from "./Types.jl" import NotebookSession, RunningNotebook @from "./Export.jl" import try_get_exact_pluto_version function generate_basic_index_html(paths) """ <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <style> body { font-family: sans-serif; } </style> <link rel="stylesheet" href="index.css"> <script src="index.js" type="module" defer></script> </head> <body> <h1>Notebooks</h1> <ul> $(join( if link === nothing """<li>$(name) <em style="opacity: .5;">(Loading...)</em></li>""" else """<li><a href="$(link)">$(name)</a></li>""" end for (name,link) in paths )) </ul> </body> </html> """ end function generate_index_html( sessions::Vector{NotebookSession}; settings::PlutoDeploySettings, ) if something(settings.Export.create_pluto_featured_index, false) Pluto.generate_index_html(; pluto_cdn_root=settings.Export.pluto_cdn_root, version=try_get_exact_pluto_version(), featured_direct_html_links=true, featured_sources_js="[{url:`./pluto_export.json`}]", ) else generate_basic_index_html(( without_pluto_file_extension(s.path) => without_pluto_file_extension(s.path) * ".html" for s in sessions )) end end function temp_index(notebook_sessions::Vector{NotebookSession}) generate_basic_index_html(Iterators.map(temp_index_item, notebook_sessions)) end function temp_index_item(s::NotebookSession) without_pluto_file_extension(s.path) => nothing end function temp_index_item(s::NotebookSession{String,String,<:Any}) without_pluto_file_extension(s.path) => without_pluto_file_extension(s.path) * ".html" end
using FromFile import Pluto import Pluto: ServerSession, Firebasey, Token, withtoken, pluto_file_extensions, without_pluto_file_extension using HTTP using Sockets import JSON @from "./IndexJSON.jl" import generate_index_json @from "./IndexHTML.jl" import temp_index, generate_basic_index_html @from "./Types.jl" import NotebookSession, RunningNotebook, FinishedNotebook @from "./Configuration.jl" import PlutoDeploySettings, get_configuration @from "./PlutoHash.jl" import base64urldecode ready_for_bonds(::Any) = false ready_for_bonds(sesh::NotebookSession{String,String,RunningNotebook}) = sesh.current_hash == sesh.desired_hash queued_for_bonds(::Any) = false queued_for_bonds(sesh::NotebookSession{<:Any,String,<:Any}) = sesh.current_hash != sesh.desired_hash function make_router( notebook_sessions::AbstractVector{<:NotebookSession}, server_session::ServerSession; settings::PlutoDeploySettings, start_dir::AbstractString, static_dir::Union{String,Nothing}=nothing, ) router = HTTP.Router() with_cacheable_configured! = with_cacheable!(settings.SliderServer.cache_control) function get_sesh(request::HTTP.Request) uri = HTTP.URI(request.target) parts = HTTP.URIs.splitpath(uri.path) # parts[1] == "staterequest" notebook_hash = parts[2] i = findfirst(notebook_sessions) do sesh sesh.desired_hash == notebook_hash end if i === nothing #= ERROR HINT This means that the notebook file used by the web client does not precisely match any of the notebook files running in this server. If this is an automated setup, then this could happen inbetween deployments. If this is a manual setup, then running the .jl notebook file might have caused a small change (e.g. the version number or a whitespace change). Copy notebooks to a temporary directory before running them using the bind server. =# @info "Request hash not found. See error hint in my source code." 
notebook_hash maxlog = 50 nothing else notebook_sessions[i] end end function get_bonds(request::HTTP.Request) request_body = if request.method == "POST" IOBuffer(HTTP.payload(request)) elseif request.method == "GET" uri = HTTP.URI(request.target) parts = HTTP.URIs.splitpath(uri.path) # parts[1] == "staterequest" # notebook_hash = parts[2] @assert length(parts) == 3 base64urldecode(parts[3]) end bonds_raw = Pluto.unpack(request_body) Dict{Symbol,Any}(Symbol(k) => v for (k, v) in bonds_raw) end "Happens whenever you move a slider" function serve_staterequest(request::HTTP.Request) sesh = get_sesh(request) response = if ready_for_bonds(sesh) notebook = sesh.run.notebook bonds = try get_bonds(request) catch e @error "Failed to deserialize bond values" exception = (e, catch_backtrace()) return HTTP.Response(500, "Failed to deserialize bond values") |> with_cors! |> with_not_cacheable! end @debug "Deserialized bond values" bonds let lag = settings.SliderServer.simulated_lag lag > 0 && sleep(lag) end topological_order, new_state = withtoken(sesh.run.token) do try notebook.bonds = bonds names::Vector{Symbol} = Symbol.(keys(bonds)) topological_order = Pluto.set_bond_values_reactive( session=server_session, notebook=notebook, bound_sym_names=names, is_first_values=[false for _n in names], # because requests should be stateless. We might want to do something special for the (actual) initial request (containing every initial bond value) in the future. run_async=false, )::Pluto.TopologicalOrder new_state = Pluto.notebook_to_js(notebook) topological_order, new_state catch e @error "Failed to set bond values" exception = (e, catch_backtrace()) nothing, nothing end end topological_order === nothing && return ( HTTP.Response(500, "Failed to set bond values") |> with_cors! |> with_not_cacheable! ) ids_of_cells_that_ran = [c.cell_id for c in topological_order.runnable] @debug "Finished running!" length(ids_of_cells_that_ran) # We only want to send state updates about... 
function only_relevant(state) new = copy(state) # ... the cells that just ran and ... new["cell_results"] = filter(state["cell_results"]) do (id, cell_state) id ∈ ids_of_cells_that_ran end # ... nothing about bond values, because we don't want to synchronize among clients. and... delete!(new, "bonds") # ... we ignore changes to the status tree caused by a running bonds. delete!(new, "status_tree") new end patches = Firebasey.diff( only_relevant(sesh.run.original_state), only_relevant(new_state), ) patches_as_dicts::Array{Dict} = Firebasey._convert(Array{Dict}, patches) HTTP.Response( 200, Pluto.pack( Dict{String,Any}( "patches" => patches_as_dicts, "ids_of_cells_that_ran" => ids_of_cells_that_ran, ), ), ) |> with_cacheable_configured! |> with_cors! |> with_msgpack! elseif queued_for_bonds(sesh) HTTP.Response(503, "Still loading the notebooks... check back later!") |> with_cors! |> with_not_cacheable! else HTTP.Response(404, "Not found!") |> with_cors! |> with_not_cacheable! end end function serve_bondconnections(request::HTTP.Request) sesh = get_sesh(request) response = if ready_for_bonds(sesh) HTTP.Response(200, Pluto.pack(sesh.run.bond_connections)) |> with_cors! |> with_cacheable_configured! |> with_msgpack! elseif queued_for_bonds(sesh) HTTP.Response(503, "Still loading the notebooks... check back later!") |> with_cors! |> with_not_cacheable! elseif sesh isa NotebookSession{<:Any,String,FinishedNotebook} HTTP.Response(422, "Notebook is no longer running") |> with_cors! |> with_not_cacheable! else HTTP.Response(404, "Not found!") |> with_cors! |> with_not_cacheable! 
end end HTTP.register!( router, "GET", "/", r -> let done = count(sesh -> sesh.current_hash == sesh.desired_hash, notebook_sessions) if static_dir !== nothing path = joinpath(static_dir, "index.html") if !isfile(path) path = tempname() * ".html" write(path, temp_index(notebook_sessions)) end Pluto.asset_response(path) else if done < length(notebook_sessions) HTTP.Response( 503, "Still loading the notebooks... check back later! [$(done)/$(length(notebook_sessions)) ready]", ) else HTTP.Response(200, "Hi!") end end |> with_cors! |> with_not_cacheable! end, ) HTTP.register!( router, "GET", "/pluto_export.json", r -> let HTTP.Response(200, generate_index_json(notebook_sessions; settings, start_dir)) |> with_json! |> with_cors! |> with_not_cacheable! end, ) # !!!! IDEAAAA also have a get endpoint with the same thing but the bond data is base64 encoded in the URL # only use it when the amount of data is not too much :o HTTP.register!(router, "POST", "/staterequest/*/", serve_staterequest) HTTP.register!(router, "GET", "/staterequest/*/*", serve_staterequest) HTTP.register!(router, "GET", "/bondconnections/*/", serve_bondconnections) if static_dir !== nothing function serve_pluto_asset(request::HTTP.Request) uri = HTTP.URI(request.target) filepath = Pluto.project_relative_path( "frontend", relpath(HTTP.unescapeuri(uri.path), "/pluto_asset/"), ) Pluto.asset_response(filepath) end HTTP.register!(router, "GET", "/pluto_asset/**", serve_pluto_asset) function serve_asset(request::HTTP.Request) uri = HTTP.URI(request.target) filepath = joinpath(static_dir, relpath(HTTP.unescapeuri(uri.path), "/")) Pluto.asset_response(filepath) end HTTP.register!(router, "GET", "/**", serve_asset) end router end ### # HEADERS function with_msgpack!(response::HTTP.Response) HTTP.setheader(response, "Content-Type" => "application/msgpack") response end function with_json!(response::HTTP.Response) HTTP.setheader(response, "Content-Type" => "application/json; charset=utf-8") response end function 
with_cors!(response::HTTP.Response) HTTP.setheader(response, "Access-Control-Allow-Origin" => "*") response end function with_cacheable!(cache_control::String) return (response::HTTP.Response) -> begin HTTP.setheader(response, "Cache-Control" => cache_control) response end end function with_not_cacheable!(response::HTTP.Response) HTTP.setheader(response, "Cache-Control" => "no-store, no-cache") response end function ReferrerMiddleware(handler) return function (req::HTTP.Request) response = handler(req) HTTP.setheader(response, "Referrer-Policy" => "origin-when-cross-origin") return response end end
import Pluto: is_pluto_notebook

flatmap(args...) = vcat(map(args...)...)

list_files_recursive(dir=".") =
    let
        paths = flatmap(walkdir(dir)) do (root, dirs, files)
            joinpath.([root], files)
        end
        relpath.(paths, [dir])
    end

"""
Search recursively for Pluto notebook files.

Return paths relative to the search directory.
"""
function find_notebook_files_recursive(start_dir)
    notebook_files = filter(list_files_recursive(start_dir)) do path
        is_pluto_notebook(joinpath(start_dir, path))
    end

    not_internal_notebook_files = filter(notebook_files) do f
        !occursin(".julia", f) || occursin(".julia", start_dir)
    end

    # reverse alphabetical order so that week5 becomes available before week4 :)
    reverse(not_internal_notebook_files)
end
import Git: git

function current_branch_name(dir=".")
    cd(dir) do
        read(`$(git()) rev-parse --abbrev-ref HEAD`, String) |> strip
    end
end

"Fetch from origin. Return `true` if the local branch was already in sync with its upstream."
function fetch(dir=".")::Bool
    branch_name = current_branch_name(dir)
    cd(dir) do
        run(`$(git()) fetch origin $(branch_name) --quiet`)
        local_hash = read(`$(git()) rev-parse HEAD`, String)
        remote_hash = read(`$(git()) rev-parse \@\{u\}`, String)
        return local_hash == remote_hash
    end
end

"Discard local changes and hard-reset to the upstream branch."
function pullhard(dir=".")
    branch_name = current_branch_name(dir)
    cd(dir) do
        run(`$(git()) reset --hard origin/$(branch_name)`)
    end
end

function fetch_pull(dir="."; pull_sleep=1)
    in_sync = fetch(dir)
    if !in_sync
        pullhard(dir)
        sleep(pull_sleep)
    end
end

function poll_pull_loop(dir="."; interval=5)
    while true
        try
            fetch_pull(dir)
        catch e
            @error "Error in poll_pull_loop" exception = (e, catch_backtrace())
        end
        sleep(interval)
    end
end
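The helper functions above wrap a plain git workflow: read the branch name, fetch, compare the local and upstream hashes, and hard-reset to the upstream branch if they differ. The same sequence can be tried by hand in the shell. This is a sketch using two throwaway local repositories so no network is needed; the paths, file names, and commit identity are all arbitrary.

```shell
#!/bin/sh
set -e
# Two throwaway repos: "upstream" plays the role of origin, "local" is the clone.
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
(cd "$tmp/upstream" && echo one > notebook.jl \
  && git add . && git -c user.name=demo -c user.email=demo@example.com commit -qm "v1")
git clone -q "$tmp/upstream" "$tmp/local"

# Upstream moves ahead:
(cd "$tmp/upstream" && echo two > notebook.jl \
  && git add . && git -c user.name=demo -c user.email=demo@example.com commit -qm "v2")

cd "$tmp/local"
branch=$(git rev-parse --abbrev-ref HEAD)   # same trick as current_branch_name
git fetch -q origin "$branch"               # fetch
if [ "$(git rev-parse HEAD)" != "$(git rev-parse "@{u}")" ]; then
    git reset -q --hard "origin/$branch"    # pullhard: discard local edits, match upstream
fi
cat notebook.jl   # → two
```

The hard reset (rather than a merge) is the design choice here: the deployed checkout is treated as disposable, so upstream always wins.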
import Pluto: Pluto, without_pluto_file_extension, generate_html, @asynclog using Base64 using FromFile @from "./MoreAnalysis.jl" import bound_variable_connections_graph @from "./Export.jl" import try_get_exact_pluto_version, try_fromcache, try_tocache @from "./Types.jl" import NotebookSession, RunningNotebook, FinishedNotebook, RunResult @from "./Configuration.jl" import PlutoDeploySettings, is_glob_match @from "./FileHelpers.jl" import find_notebook_files_recursive @from "./PlutoHash.jl" import plutohash showall(xs) = Text(join(string.(xs), "\n")) ### # Shutdown function process( s::NotebookSession{String,Nothing,<:Any}; server_session::Pluto.ServerSession, settings::PlutoDeploySettings, output_dir::AbstractString, start_dir::AbstractString, progress, )::NotebookSession if s.run isa RunningNotebook Pluto.SessionActions.shutdown(server_session, s.run.notebook) end try remove_static_export(s.path; settings, output_dir) catch e @warn "Failed to remove static export files" s.path exception = (e, catch_backtrace()) end @info "### ✓ $(progress) Shutdown complete" s.path NotebookSession(; path=s.path, current_hash=nothing, desired_hash=nothing, run=nothing) end ### # Launch function process( s::NotebookSession{Nothing,String,<:Any}; server_session::Pluto.ServerSession, settings::PlutoDeploySettings, output_dir::AbstractString, start_dir::AbstractString, progress, )::NotebookSession path = s.path abs_path = joinpath(start_dir, path) @info "###### ◐ $(progress) Launching..." s.path jl_contents = read(abs_path, String) new_hash = plutohash(jl_contents) if new_hash != s.desired_hash @warn "Notebook file does not have desired hash. This probably means that the file changed too quickly. Continuing and hoping for the best!" 
s.path new_hash s.desired_hash end keep_running = settings.SliderServer.enabled && !is_glob_match(path, settings.SliderServer.exclude) && occursin("@bind", jl_contents) skip_cache = keep_running || is_glob_match(path, settings.Export.ignore_cache) cached_state = skip_cache ? nothing : try_fromcache(settings.Export.cache_dir, new_hash) run = if cached_state !== nothing @info "Loaded from cache, skipping notebook run" s.path new_hash original_state = cached_state FinishedNotebook(; path, original_state) else try # open and run the notebook notebook = Pluto.SessionActions.open(server_session, abs_path; run_async=false) # get the state object original_state = Pluto.notebook_to_js(notebook) delete!(original_state, "status_tree") # shut down the notebook if !keep_running @info "Shutting down notebook process" s.path Pluto.SessionActions.shutdown(server_session, notebook) end try_tocache(settings.Export.cache_dir, new_hash, original_state) if keep_running bond_connections = bound_variable_connections_graph(server_session, notebook) @info "Bond connections" s.path showall(collect(bond_connections)) RunningNotebook(; path, notebook, original_state, bond_connections) else FinishedNotebook(; path, original_state) end catch e (e isa InterruptException) || rethrow(e) @error "$(progress) Failed to run notebook!" path exception = (e, catch_backtrace()) # continue nothing end end if run isa RunResult generate_static_export( path, run.original_state, jl_contents; settings, start_dir, output_dir, ) end @info "### ✓ $(progress) Ready" s.path new_hash NotebookSession(; path, current_hash=new_hash, desired_hash=s.desired_hash, run) end ### # Update if needed function process( s::NotebookSession{String,String,<:Any}; server_session::Pluto.ServerSession, settings::PlutoDeploySettings, output_dir::AbstractString, start_dir::AbstractString, progress, )::NotebookSession if s.current_hash != s.desired_hash @info "Updating notebook... 
will shut down and relaunch" s.path # Simple way to update: shut down notebook and start new one if s.run isa RunningNotebook Pluto.SessionActions.shutdown(server_session, s.run.notebook) end @info "Shutdown complete" s.path result = process( NotebookSession(; path=s.path, current_hash=nothing, desired_hash=s.desired_hash, run=nothing, ); server_session, settings, output_dir, start_dir, progress, ) result else s end end ### # Leave it shut down process(s::NotebookSession{Nothing,Nothing,<:Any}; kwargs...)::NotebookSession = s should_shutdown(::NotebookSession{String,Nothing,<:Any}) = true should_shutdown(::NotebookSession) = false should_update(s::NotebookSession{String,String,<:Any}) = s.current_hash != s.desired_hash should_update(::NotebookSession) = false should_launch(::NotebookSession{Nothing,String,<:Any}) = true should_launch(::NotebookSession) = false will_process(s) = should_update(s) || should_launch(s) || should_shutdown(s) function generate_static_export( path, original_state, jl_contents; settings, output_dir, start_dir, ) pluto_version = try_get_exact_pluto_version() export_jl_path = let relative_to_notebooks_dir = path joinpath(output_dir, relative_to_notebooks_dir) end export_html_path = let relative_to_notebooks_dir = without_pluto_file_extension(path) * ".html" joinpath(output_dir, relative_to_notebooks_dir) end export_statefile_path = let relative_to_notebooks_dir = without_pluto_file_extension(path) * ".plutostate" joinpath(output_dir, relative_to_notebooks_dir) end mkpath(dirname(export_jl_path)) mkpath(dirname(export_html_path)) mkpath(dirname(export_statefile_path)) slider_server_running_somewhere = settings.Export.slider_server_url !== nothing || (settings.SliderServer.serve_static_export_folder && settings.SliderServer.enabled) notebookfile_js = if settings.Export.offer_binder || slider_server_running_somewhere if settings.Export.baked_notebookfile "\"data:text/julia;charset=utf-8;base64,$(base64encode(jl_contents))\"" else 
repr(basename(export_jl_path)) end else "undefined" end slider_server_url_js = if slider_server_running_somewhere abs_path = joinpath(start_dir, path) url_of_root = relpath(start_dir, dirname(abs_path)) # e.g. "." or "../../.." repr(something(settings.Export.slider_server_url, url_of_root)) else "undefined" end binder_url_js = if settings.Export.offer_binder repr(something(settings.Export.binder_url, Pluto.default_binder_url)) else "undefined" end statefile_js = if !settings.Export.baked_state write_statefile(export_statefile_path, original_state) repr(basename(export_statefile_path)) else statefile64 = base64encode() do io Pluto.pack(io, original_state) end "\"data:;base64,$(statefile64)\"" end frontmatter = convert( Pluto.FrontMatter, get( () -> Pluto.FrontMatter(), get(() -> Dict{String,Any}(), original_state, "metadata"), "frontmatter", ), ) header_html = Pluto.frontmatter_html(frontmatter) html_contents = Pluto.generate_html(; pluto_cdn_root=settings.Export.pluto_cdn_root, version=pluto_version, notebookfile_js, statefile_js, slider_server_url_js, binder_url_js, disable_ui=settings.Export.disable_ui, header_html, ) write(export_html_path, html_contents) if (settings.Export.offer_binder || settings.Export.slider_server_url !== nothing) && !settings.Export.baked_notebookfile write(export_jl_path, jl_contents) end @debug "Written to $(export_html_path)" end function write_statefile(path, state) data = Pluto.pack(state) write(path, data) @assert read(path) == data end tryrm(x) = isfile(x) && rm(x) function remove_static_export(path; settings, output_dir) export_jl_path = let relative_to_notebooks_dir = path joinpath(output_dir, relative_to_notebooks_dir) end export_html_path = let relative_to_notebooks_dir = without_pluto_file_extension(path) * ".html" joinpath(output_dir, relative_to_notebooks_dir) end export_statefile_path = let relative_to_notebooks_dir = without_pluto_file_extension(path) * ".plutostate" joinpath(output_dir, relative_to_notebooks_dir) end if 
!settings.Export.baked_state tryrm(export_statefile_path) end tryrm(export_html_path) if (settings.Export.offer_binder || settings.Export.slider_server_url !== nothing) && !settings.Export.baked_notebookfile tryrm(export_jl_path) end end
{ "repo_name": "JuliaPluto/PlutoSliderServer.jl", "stars": "114", "repo_language": "Julia", "file_name": "c.plutojl", "mime_type": "text/plain" }
import Pluto: Pluto, ServerSession using HTTP import Pkg ## CACHE export try_fromcache, try_tocache cache_filename(cache_dir::String, current_hash::String) = joinpath( cache_dir, replace( HTTP.URIs.escapeuri(string(try_get_exact_pluto_version(), current_hash)), "." => "_", ) * ".plutostate", ) function try_fromcache(cache_dir::String, current_hash::String) p = cache_filename(cache_dir, current_hash) if isfile(p) try open(Pluto.unpack, p, "r") catch e @warn "Failed to load statefile from cache" current_hash exception = (e, catch_backtrace()) end end end try_fromcache(cache_dir::Nothing, current_hash) = nothing function try_tocache(cache_dir::String, current_hash::String, state) mkpath(cache_dir) try open(cache_filename(cache_dir, current_hash), "w") do io Pluto.pack(io, state) end catch e @warn "Failed to write to cache file" current_hash exception = (e, catch_backtrace()) end end try_tocache(cache_dir::Nothing, current_hash, state) = nothing ## FINDING THE PLUTO VERSION const found_pluto_version = Ref{Any}(nothing) function try_get_exact_pluto_version() if found_pluto_version[] !== nothing return found_pluto_version[] end found_pluto_version[] = try deps = Pkg.API.dependencies() p_index = findfirst(p -> p.name == "Pluto", deps) p = deps[p_index] if p.is_tracking_registry p.version elseif p.is_tracking_path error( "Do not add the Pluto dependency as a local path, but by specifying its VERSION or an exact COMMIT SHA.", ) else # ugh is_probably_a_commit_thing = all(in(('0':'9') ∪ ('a':'f')), p.git_revision) && length(p.git_revision) ∈ (8, 40) if !is_probably_a_commit_thing error( "Do not add the Pluto dependency by specifying its BRANCH, but by specifying its VERSION or an exact COMMIT SHA.", ) end p.git_revision end catch e if get(ENV, "HIDE_PLUTO_EXACT_VERSION_WARNING", "false") == "false" @error "Failed to get exact Pluto version from dependency. Your website is not guaranteed to work forever." 
exception = (e, catch_backtrace()) maxlog = 1 end Pluto.PLUTO_VERSION end end
using FromFile @from "./PlutoHash.jl" import plutohash_contents @from "./Types.jl" import NotebookSession @from "./Configuration.jl" import PlutoDeploySettings, get_configuration @from "./FileHelpers.jl" import find_notebook_files_recursive import Pluto: without_pluto_file_extension function d3join(notebook_sessions, new_paths; start_dir::AbstractString) desired_notebook_sessions = filter(s -> s.desired_hash !== nothing, notebook_sessions) old_paths = map(s -> s.path, desired_notebook_sessions) old_hashes = map(s -> s.current_hash, desired_notebook_sessions) new_hashes = map(old_paths) do path abs_path = joinpath(start_dir, path) isfile(abs_path) ? plutohash_contents(abs_path) : nothing end ( enter=setdiff(new_paths, old_paths), update=String[ path for (i, path) in enumerate(old_paths) if path ∈ new_paths && old_hashes[i] !== new_hashes[i] ], exit=setdiff(old_paths, new_paths), ) end select(f::Function, xs) = for x in xs if f(x) return x end end function update_sessions!(notebook_sessions, new_paths; start_dir::AbstractString)::Bool added, updated, removed = d3join(notebook_sessions, new_paths; start_dir) if any(!isempty, [added, updated, removed]) @info "Notebook list updated" added updated removed for path in added push!( notebook_sessions, NotebookSession(; path=path, current_hash=nothing, desired_hash=plutohash_contents(joinpath(start_dir, path)), run=nothing, ), ) end for path in updated ∪ removed old = select(s -> s.path == path, notebook_sessions) @assert old !== nothing new = NotebookSession(; path=path, current_hash=old.current_hash, desired_hash=( path ∈ removed ? nothing : plutohash_contents(joinpath(start_dir, path)) ), run=old.run, ) replace!(notebook_sessions, old => new) end false else true end end
using Configurations import TOML import Pluto: Pluto, Token, Notebook export NotebookSession, SliderServerSettings, ExportSettings, PlutoDeploySettings, get_configuration using TerminalLoggers: TerminalLogger ### # SESSION DEFINITION abstract type RunResult end Base.@kwdef struct RunningNotebook <: RunResult path::String notebook::Pluto.Notebook original_state::Any token::Token = Token() bond_connections::Dict{Symbol,Vector{Symbol}} end Base.@kwdef struct FinishedNotebook <: RunResult path::String original_state::Any end Base.@kwdef struct NotebookSession{ C<:Union{Nothing,String}, D<:Union{Nothing,String}, R<:Union{Nothing,RunResult}, } path::String current_hash::C desired_hash::D run::R end
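# Illustrative note (an added summary, not part of the original source): the
# type parameters of `NotebookSession` encode the session lifecycle. Reading
# the pair (current_hash, desired_hash), the dispatch rules in Actions.jl
# (`should_launch`, `should_shutdown`, `should_update`) resolve to:
#
#   (nothing, "abc")   -> launch:   the file exists on disk but is not running yet
#   ("abc",   nothing) -> shutdown: the file was removed from disk
#   ("abc",   "def")   -> update:   the file changed on disk (hashes differ)
#   (nothing, nothing) -> leave it shut down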
import Pluto: Token export withlock # LOCK const locked_objects = Dict{UInt,Token}() function withlock(f, x) l = get!(Token, locked_objects, objectid(x)) take!(l) local result try result = f() catch e rethrow(e) finally put!(l) end result end
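# Illustrative sketch (hypothetical helper, not part of the original source):
# `withlock` serializes access to any object by keying a `Token` on
# `objectid(x)`, so concurrent tasks touching the same object run their
# critical sections one at a time.
function _withlock_example()
    counter = Ref(0)
    tasks = map(1:10) do _
        Threads.@spawn withlock(counter) do
            counter[] += 1  # safe: only one task is inside this block at a time
        end
    end
    foreach(wait, tasks)
    counter[]  # all ten increments survive, despite running concurrently
end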
using Configurations
import TOML
import Pluto
export SliderServerSettings, ExportSettings, PlutoDeploySettings, get_configuration
using FromFile
import Glob
@from "./ConfigurationDocs.jl" import @extract_docs, get_kwdocs, list_options_md

@extract_docs @option struct SliderServerSettings
    enabled::Bool = true
    "List of notebook files to skip. Provide paths relative to `start_dir`. *If `Export.enabled` is `true` (default), then only paths in `SliderServer_exclude ∩ Export_exclude` will be skipped, paths in `setdiff(SliderServer_exclude, Export_exclude)` will be shut down after exporting.* You can use the `*` wildcard and other [glob patterns](https://en.wikipedia.org/wiki/Glob_(programming))."
    exclude::Vector = String[]
    "Port to run the HTTP server on."
    port::Integer = 2345
    """Often set to `"0.0.0.0"` on a server."""
    host::Any = "127.0.0.1"
    "Watch the input directory for file changes, and update the slider server sessions automatically. Only takes effect when running the slider server. More info in the README."
    watch_dir::Bool = true
    "Besides handling slider server requests, should we also run a static file server of the export output folder? Set to `false` if you are serving the HTML files in another way, e.g. using GitHub Pages, and, for some reason, you do not want to *also* serve the HTML files using this server."
    serve_static_export_folder::Bool = true
    simulated_lag::Real = 0
    "Cache-Control header sent on requests in which caching is enabled. Set to `no-store, no-cache` to completely disable caching."
    cache_control::String = "public, max-age=315600000, immutable"
end

@extract_docs @option struct ExportSettings
    "Generate static HTML files? This setting can only be `false` if you are also running a slider server."
    enabled::Bool = true
    "Folder to write generated HTML files to (will create directories to preserve the input folder structure). The behaviour of the default value depends on whether you are running the slider server, or just exporting.
If running the slider server, we use a temporary directory; otherwise, we use `start_dir` (i.e. we generate each HTML file in the same folder as the notebook file)." output_dir::Union{Nothing,String} = nothing "List of notebook files to skip. Provide paths relative to `start_dir`. You can use the `*` wildcard and other [glob patterns](https://en.wikipedia.org/wiki/Glob_(programming))." exclude::Vector{String} = String[] "List of notebook files that should always re-run, skipping the `cache_dir` system. Provide paths relative to `start_dir`. You can use the `*` wildcard and other [glob patterns](https://en.wikipedia.org/wiki/Glob_(programming))." ignore_cache::Vector = String[] "If provided, use this directory to read and write cached notebook states. Caches will be indexed by the hash of the notebook file, but you need to take care to invalidate the cache when Pluto or this export script updates. Useful in combination with https://github.com/actions/cache, see https://github.com/JuliaPluto/static-export-template for an example." cache_dir::Union{Nothing,String} = nothing "base64-encode the state object and write it inside the .html file. If `false`, a separate `.plutostate` file is generated. A separate statefile allows us to show a loading bar in pluto while the statefile is loading, but it can complicate setup in some environments." baked_state::Bool = true """base64-encode the .jl notebook source and write it inside the .html file. If `false`, a separate `.jl` file is generated (or the original is used). The main difference is in the "Edit or run this notebook > On your computer" flow on the HTML file: with a separate notebook file (default), it will be a URL that you can copy and open with Pluto. With a baked notebook file, it will be a download button, and visitors need to save the notebook on their local drive, which can be more complicated.""" baked_notebookfile::Bool = true "Hide all buttons and toolbars in Pluto to make it look like an article." 
disable_ui::Bool = true """Show a "Run on Binder" button on the notebooks.""" offer_binder::Bool = true """ADVANCED: URL of the binder repository to load when you click the "Run on binder" button in the top right, this will be set automatically if you leave it at the default value. This setting is quite advanced, and only makes sense if you have a fork of `https://github.com/fonsp/pluto-on-binder/` (because you want to control the binder launch, or because you are using your own fork of Pluto). If so, the setting should be of the form `"https://mybinder.org/v2/gh/fonsp/pluto-on-binder/v0.17.2"`, where `fonsp/pluto-on-binder` is the name of your repository, and `v0.17.2` is a tag or commit hash.""" binder_url::Union{Nothing,String} = nothing """If 1) you are using this setup to export HTML files for notebooks, AND 2) you are running the slider server **on another setup/computer**, then this setting should be the URL pointing to the slider server, e.g. `"https://sliderserver.mycoolproject.org/"`. For example, you need this if you use GitHub Actions and GitHub Pages to generate HTML files, with a slider server on DigitalOcean. === If you only have *one* server for both the static exports and the slider server, and people will read notebooks directly on your server, then the default value `nothing` will work: it will automatically make the HTML files use their current domain for the slider server.""" slider_server_url::Union{Nothing,String} = nothing "Automatically generate an `index.html` file, listing all the exported notebooks (only if no `index.jl` or `index.html` file exists already)." create_index::Bool = true "Use the Pluto Featured GUI to display the notebooks on the auto-generated index page, using frontmatter for title, description, image, and more. The default is currently `false`, but it might change in the future. Set to `true` or `false` explicitly to fix a value." 
create_pluto_featured_index::Union{Nothing,Bool} = nothing
    pluto_cdn_root::Union{Nothing,String} = nothing
end

@option struct PlutoDeploySettings
    SliderServer::SliderServerSettings = SliderServerSettings()
    Export::ExportSettings = ExportSettings()
    Pluto::Pluto.Configuration.Options = Pluto.Configuration.Options()
end

function get_configuration(
    toml_path::Union{Nothing,String}=nothing;
    kwargs...,
)::PlutoDeploySettings
    # We set `Pluto_server_notebook_path_suggestion=joinpath(homedir(), "")` because the default value for this setting changes when the pwd changes, which causes our run_git_directory to exit. Not necessary for Pluto 0.17.3 and up.
    if !isnothing(toml_path) && isfile(toml_path)
        Configurations.from_toml(
            PlutoDeploySettings,
            toml_path;
            Pluto_server_notebook_path_suggestion=joinpath(homedir(), ""),
            kwargs...,
        )
    else
        Configurations.from_kwargs(
            PlutoDeploySettings;
            Pluto_server_notebook_path_suggestion=joinpath(homedir(), ""),
            kwargs...,
        )
    end
end

merge_recursive(a::AbstractDict, b::AbstractDict) = mergewith(merge_recursive, a, b)
merge_recursive(a, b) = b

is_glob_match(path::AbstractString, pattern::AbstractString) =
    occursin(Glob.FilenameMatch(pattern, ""), path)
is_glob_match(path::AbstractString, patterns::AbstractVector{<:AbstractString}) =
    any(p -> is_glob_match(path, p), patterns)
is_glob_match(pattern) = path -> is_glob_match(path, pattern)
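# Illustrative usage sketch (hypothetical paths, not part of the original
# source): `is_glob_match` is what the `exclude` settings are matched with.
# The curried one-argument form returns a predicate, which is convenient
# with `filter`:
#
#   is_glob_match("notebooks/draft.jl", "notebooks/*")            # one pattern
#   is_glob_match("notebooks/draft.jl", ["*.md", "notebooks/*"])  # any of several patterns
#   filter(!is_glob_match("drafts/*"), list_of_paths)             # keep non-excluded paths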
using Base64 using SHA const plus = codeunit("+", 1) const minus = codeunit("-", 1) const slash = codeunit("/", 1) const equals = codeunit("=", 1) const underscore = codeunit("_", 1) without_equals(s) = rstrip(s, '=') # is a lazy SubString! yay const base64url_docs_info = """ This function implements the *`base64url` algorithm*, which is like regular `base64` but it produces legal URL/filenames: it uses `-` instead of `+`, `_` instead of `/`, and it does not pad (with `=` originally). See [the wiki](https://en.wikipedia.org/wiki/Base64#Variants_summary_table) or [the official spec (RFC 4648 §5)](https://datatracker.ietf.org/doc/html/rfc4648#section-5). """ """ ```julia base64urldecode(s::AbstractString)::Vector{UInt8} ``` $base64url_docs_info """ function base64urldecode(s::AbstractString)::Vector{UInt8} # This is roughly 0.5 as fast as `base64decode`. See https://gist.github.com/fonsp/d2b84265012942dc40d0082b1fd405ba for benchmark and even slower alternatives. Of course, we could copy-paste the Base64 code and modify it to do base64url, but then Donald Knuth would get angry. cs = codeunits(s) io = IOBuffer(; sizehint=length(cs) + 2) iob64_decode = Base64DecodePipe(io) write(io, replace(codeunits(s), minus => plus, underscore => slash)) for _ = 1:mod(-length(cs), 4) write(io, equals) end seekstart(io) read(iob64_decode) end """ ```julia base64urlencode(data)::String ``` $base64url_docs_info """ function base64urlencode(data)::String # This is roughly 0.5 as fast as `base64encode`. See comment above. String( replace( base64encode(data) |> without_equals |> codeunits, plus => minus, slash => underscore, ), ) end const plutohash = base64urlencode ∘ sha256 plutohash_contents(path) = plutohash(read(path))
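# Illustrative examples (added; the first value is taken from this repo's own
# test suite in plutohash.jl):
#
#   plutohash("Hannes") == "OI48wVWerxEEnz5lIj6CPPRB8NOwwba-LkFYTDp4aUU"
#
# `base64urldecode(base64urlencode(data)) == data` for any byte vector, and the
# encoded string contains no `+`, `/` or `=`, so it survives URI escaping
# unchanged and is a legal filename.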
module PlutoSliderServer using FromFile @from "./MoreAnalysis.jl" import bound_variable_connections_graph @from "./FileHelpers.jl" import find_notebook_files_recursive, list_files_recursive @from "./IndexHTML.jl" import generate_index_html @from "./IndexJSON.jl" import generate_index_json @from "./Actions.jl" import process, should_shutdown, should_update, should_launch, will_process @from "./Types.jl" import NotebookSession @from "./Lock.jl" import withlock @from "./Configuration.jl" import PlutoDeploySettings, ExportSettings, SliderServerSettings, get_configuration, is_glob_match @from "./ConfigurationDocs.jl" import @extract_docs, get_kwdocs, list_options_md, list_options_toml @from "./ReloadFolder.jl" import update_sessions!, select @from "./HTTPRouter.jl" import make_router, ReferrerMiddleware @from "./gitpull.jl" import fetch_pull @from "./PlutoHash.jl" import plutohash, base64urlencode, base64urldecode export plutohash, base64urlencode, base64urldecode import Pluto import Pluto: ServerSession, Firebasey, Token, withtoken, pluto_file_extensions, without_pluto_file_extension using HTTP using Sockets import Pkg import BetterFileWatching: watch_folder import AbstractPlutoDingetjes: is_inside_pluto import TerminalLoggers: TerminalLogger import Logging: global_logger, ConsoleLogger import GitHubActions: GitHubActionsLogger export export_directory, run_directory, run_git_directory, github_action export export_notebook, run_notebook export show_sample_config_toml_file const logger_loaded = Ref{Bool}(false) function load_cool_logger() if !logger_loaded[] logger_loaded[] = true if ((global_logger() isa ConsoleLogger) && !is_inside_pluto()) if get(ENV, "GITHUB_ACTIONS", "false") == "true" # TODO: disabled because of https://github.com/JuliaWeb/HTTP.jl/issues/921 # global_logger(GitHubActionsLogger()) else global_logger(try TerminalLogger(; margin=1) catch TerminalLogger() end) end end end end const sample_config_toml_file = """ # WARNING: this sample configuration file 
contains settings for **all options**, to demonstrate what is possible. For most users, we recommend keeping the configuration file small, and letting PlutoSliderServer choose the default settings automatically.
# This means: DO NOT use this file in your setup. Instead, create an empty toml file and **add only the settings that you want to change**.

[Export]
$(list_options_toml(ExportSettings))

[SliderServer]
$(list_options_toml(SliderServerSettings))

[Pluto]
[Pluto.compiler]
threads = 1
# See the documentation for `Pluto.Configuration` for the full list of options. You need to specify the categories within `Pluto.Configuration.Options` (`compiler`, `evaluation`, etc).
"""

function show_sample_config_toml_file()
    Text(sample_config_toml_file)
end

merge_recursive(a::AbstractDict, b::AbstractDict) = mergewith(merge_recursive, a, b)
merge_recursive(a, b) = b
merge_recursive(a::PlutoDeploySettings, b::PlutoDeploySettings) = Configurations.from_dict(
    PlutoDeploySettings,
    merge_recursive(Configurations.to_dict(a), Configurations.to_dict(b)),
)

with_kwargs(original::PlutoDeploySettings; kwargs...) =
    merge_recursive(original, Configurations.from_kwargs(PlutoDeploySettings; kwargs...))

showall(xs) = Text(join(string.(xs), "\n"))

default_config_path() = joinpath(Base.active_project() |> dirname, "PlutoDeployment.toml")

macro ignorefailure(x)
    quote
        try
            $(esc(x))
        catch e
            showerror(stderr, e, catch_backtrace())
        end
    end
end

"""
    export_directory(start_dir::String="."; kwargs...)
Search recursively for all Pluto notebooks in the current folder, and for each notebook:
- Run the notebook and wait for all cells to finish
- Export the state object
- Create a .html file with the same name as the notebook, which has:
    - The JS and CSS assets to load the Pluto editor
    - The state object embedded
    - Extra functionality enabled, such as hidden UI, binder button, and a live bind server

# Keyword arguments
$(list_options_md(ExportSettings; prefix="Export"))
- `notebook_paths::Vector{String}=find_notebook_files_recursive(start_dir)`: If you do not want the recursive search behaviour, then you can set this to a vector of absolute paths. In that case, `start_dir` is ignored, and you should set `Export_output_dir`.
"""
function export_directory(args...; kwargs...)
    run_directory(args...; Export_enabled=true, SliderServer_enabled=false, kwargs...)
end

"""
    export_notebook(notebook_filename::String; kwargs...)

A single-file version of [`export_directory`](@ref).
"""
export_notebook(p; kwargs...) = run_notebook(
    p;
    Export_enabled=true,
    SliderServer_enabled=false,
    Export_create_index=false,
    kwargs...,
)

github_action = export_directory

"""
    run_notebook(notebook_filename::String; kwargs...)

A single-file version of [`run_directory`](@ref).
"""
function run_notebook(path::String; kwargs...)
    path = Pluto.tamepath(path)
    @assert isfile(path)
    run_directory(dirname(path); notebook_paths=[basename(path)], kwargs...)
end

"""
    run_directory(start_dir::String="."; export_options...)

Run the Pluto bind server for all Pluto notebooks in the given directory (recursive search).

# Keyword arguments
$(list_options_md(SliderServerSettings; prefix="SliderServer"))
- `notebook_paths::Union{Nothing,Vector{String}}=nothing`: If you do not want the recursive search behaviour, then you can set this to a vector of absolute paths. In that case, `start_dir` is ignored, and you should set `Export_output_dir`.
- `Export_enabled::Bool=true`: Also export HTML files?
--- ## Export keyword arguments If `Export_enabled` is `true`, then additional `Export_` keywords can be given: $(list_options_md(ExportSettings; prefix="Export")) """ function run_directory( start_dir::String="."; notebook_paths::Union{Nothing,Vector{String}}=nothing, on_ready::Function=((args...) -> nothing), config_toml_path::Union{String,Nothing}=default_config_path(), kwargs..., ) @assert joinpath("a", "b") == "a/b" "PlutoSliderServer does not work on Windows yet!" load_cool_logger() start_dir = Pluto.tamepath(start_dir) @assert isdir(start_dir) settings = get_configuration(config_toml_path; kwargs...) output_dir = something( settings.Export.output_dir, settings.SliderServer.enabled ? mktempdir() : start_dir, ) mkpath(output_dir) if joinpath("a", "b") != "a/b" @error "PlutoSliderServer.jl is only designed to work on unix systems." exit() end function getpaths() all_nbs = notebook_paths !== nothing ? notebook_paths : find_notebook_files_recursive(start_dir) s_remaining = filter(!is_glob_match(settings.SliderServer.exclude), all_nbs) e_remaining = filter(!is_glob_match(settings.Export.exclude), all_nbs) if settings.Export.enabled if settings.SliderServer.enabled s_remaining ∪ e_remaining else e_remaining end else filter(s_remaining) do f try occursin("@bind", read(joinpath(start_dir, f), String)) catch true end end end end @info "Settings" Text(settings) settings.SliderServer.enabled && @warn "Make sure that you run this slider server inside an isolated environment -- it is not intended to be secure. Assume that users can execute arbitrary code inside your notebooks." settings.Pluto.server.disable_writing_notebook_files = true settings.Pluto.evaluation.lazy_workspace_creation = true server_session = Pluto.ServerSession(; options=settings.Pluto) notebook_sessions = NotebookSession[] if settings.SliderServer.enabled static_dir = (settings.Export.enabled && settings.SliderServer.serve_static_export_folder) ? 
output_dir : nothing router = make_router(notebook_sessions, server_session; settings, static_dir, start_dir) # This is boilerplate HTTP code, don't read it host = settings.SliderServer.host port = settings.SliderServer.port # This is boilerplate HTTP code, don't read it hostIP = parse(Sockets.IPAddr, host) if port === nothing port, serversocket = Sockets.listenany(hostIP, UInt16(1234)) else serversocket = try Sockets.listen(hostIP, UInt16(port)) catch e @error "Port with number $port is already in use." return end end address = let host_str = string(hostIP) host_pretty = if isa(hostIP, Sockets.IPv6) if host_str == "::1" "localhost" else "[$(host_str)]" end elseif host_str == "127.0.0.1" # Assuming the other alternative is IPv4 "localhost" else host_str end port_pretty = Int(port) "http://$(host_pretty):$(port_pretty)/" end @info "# Starting server..." address # We start the HTTP server before launching notebooks so that the server responds to heroku/digitalocean garbage fast enough http_server = HTTP.serve!( router |> ReferrerMiddleware, hostIP, UInt16(port), server=serversocket, ) @info "# Server started" else http_server = nothing serversocket = nothing end function write_index(sessions) if ( settings.Export.enabled && settings.Export.create_index && # If `settings.SliderServer.serve_static_export_folder`, then we serve a dynamic index page (inside HTTPRouter.jl), so we don't want to create a static index page. 
!( settings.SliderServer.enabled && settings.SliderServer.serve_static_export_folder ) ) # HTML exists = any([ "index.html", "index.md", ("index" * e for e in pluto_file_extensions)..., ]) do f joinpath(output_dir, f) |> isfile end if !exists write( joinpath(output_dir, "index.html"), generate_index_html(sessions; settings), ) end # JSON write( joinpath(output_dir, "pluto_export.json"), generate_index_json(sessions; settings, start_dir), ) end end function refresh_until_synced(check_dir_on_every_step::Bool, did_something::Bool=false) should_continue = withlock(notebook_sessions) do if check_dir_on_every_step update_sessions!(notebook_sessions, getpaths(); start_dir) write_index(notebook_sessions) end # todo: try catch to release lock? to_shutdown = select(should_shutdown, notebook_sessions) to_update = select(should_update, notebook_sessions) to_launch = select(should_launch, notebook_sessions) s = something(to_shutdown, to_update, to_launch, "not found") if s != "not found" progress = "[$( count(!will_process, notebook_sessions) + 1 )/$(length(notebook_sessions))]" new = process(s; server_session, settings, output_dir, start_dir, progress) if new !== s if new isa NotebookSession{Nothing,Nothing,<:Any} # remove it filter!(!isequal(s), notebook_sessions) elseif s !== new replace!(notebook_sessions, s => new) end end true else did_something && @info "# ALL NOTEBOOKS READY" false end end if did_something || should_continue write_index(notebook_sessions) end should_continue && refresh_until_synced(check_dir_on_every_step, true) end # RUN ALL NOTEBOOKS AND KEEP THEM RUNNING update_sessions!(notebook_sessions, getpaths(); start_dir) write_index(notebook_sessions) refresh_until_synced(false) should_watch = settings.SliderServer.enabled && settings.SliderServer.watch_dir watch_dir_task = Pluto.@asynclog if should_watch @info "Watching directory for changes..." debounced = kind_of_debounced() do _ @debug "File change detected!" 
sleep(0.5) refresh_until_synced(true) end watch_folder(debounced, start_dir) end if http_server === nothing on_ready((; serversocket, http_server, server_session, notebook_sessions)) else try if should_watch # todo: skip first watch_folder so that we dont need this sleepo (EDIT: i forgot why this sleep is here.. oops!) sleep(2) end on_ready((; serversocket, http_server, server_session, notebook_sessions)) # blocking call, waiting for a Ctrl-C interrupt wait(http_server) catch e @info "# Closing web server..." @ignorefailure close(http_server) if should_watch @info "Stopping directory watching..." istaskdone(watch_dir_task) || @ignorefailure schedule(watch_dir_task, e; error=true) end e isa InterruptException || rethrow(e) @info "Server exited ✅" end end nothing end """ run_git_directory(start_dir::String="."; export_options...) Like [`run_directory`](@ref), but will automatically keep running `git pull` in `start_dir` and update the slider server to match changes in the directory. See our README for more info. If you use a `PlutoDeployment.toml` file, we also keep checking whether this file changed. If it changed, we **exit Julia session**, and you will have to restart it. Open an issue if this is a problem. """ function run_git_directory( start_dir::String="."; notebook_paths::Union{Nothing,Vector{String}}=nothing, on_ready::Function=((args...) -> ()), config_toml_path::Union{String,Nothing}=default_config_path(), kwargs..., ) start_dir = Pluto.tamepath(start_dir) @assert isdir(start_dir) get_settings() = get_configuration(config_toml_path; SliderServer_watch_dir=true, kwargs...) old_settings = get_settings() run_dir_task = Pluto.@asynclog begin run_directory( start_dir; notebook_paths, on_ready, config_toml_path, SliderServer_watch_dir=true, kwargs..., ) end old_deps = Pkg.dependencies() pull_loop_task = Pluto.@asynclog while true new_settings = get_settings() new_deps = Pkg.dependencies() if old_settings != new_settings @error "Configuration changed. 
Shutting down!" println(stderr, "Old settings:") println(stderr, repr(old_settings)) println(stderr, "") println(stderr, "New settings:") println(stderr, repr(new_settings)) exit() # this should trigger a restart, using the new settings end if old_deps != new_deps @error "Package environment changed. Shutting down!" exit() # this should trigger a restart, using the new settings end sleep(5) try fetch_pull(start_dir) catch e @warn "git: Error in poll_pull_loop" exception = (e, catch_backtrace()) end end waitall([run_dir_task, pull_loop_task]) end function waitall(tasks) killing = Ref(false) @sync for t in tasks try wait(t) catch e if !(e isa InterruptException) showerror(stderr, e, catch_backtrace()) end if !killing[] killing[] = true for t2 in tasks try schedule(t2, e; error=true) catch e end end end end end end function kind_of_debounced(f) update_waiting = Ref(true) running = Ref(false) function go(args...) # @info "trigger" update_waiting[] running[] update_waiting[] = true if !running[] running[] = true update_waiting[] = false f(args...) running[] = false end update_waiting[] && go(args...) end return go end end
using FromFile import JSON import Pluto: Pluto, without_pluto_file_extension @from "./Types.jl" import NotebookSession, RunningNotebook @from "./Configuration.jl" import PlutoDeploySettings id(s::NotebookSession) = s.path function json_data( s::NotebookSession; settings::PlutoDeploySettings, start_dir::AbstractString, ) ( id=id(s), hash=s.current_hash, html_path=without_pluto_file_extension(s.path) * ".html", statefile_path=settings.Export.baked_state ? nothing : without_pluto_file_extension(s.path) * ".plutostate", notebookfile_path=settings.Export.baked_notebookfile ? nothing : s.path, current_hash=s.current_hash, desired_hash=s.desired_hash, frontmatter=merge( Dict{String,Any}( # default title if none given in frontmatter "title" => basename(without_pluto_file_extension(s.path)), ), Pluto.frontmatter( # Pluto.frontmatter accepts either a Notebook, or a path (in which case it will parse the file). if s.run isa RunningNotebook s.run.notebook else joinpath(start_dir, s.path) end; raise=false, ), ), ) end function json_data( sessions::Vector{NotebookSession}; settings::PlutoDeploySettings, start_dir::AbstractString, config_data::Dict{String,Any}, ) ( notebooks=Dict(id(s) => json_data(s; settings, start_dir) for s in sessions), pluto_version=lstrip(Pluto.PLUTO_VERSION_STR, 'v'), julia_version=lstrip(string(VERSION), 'v'), format_version="1", # title=get(config_data, "title", nothing), description=get(config_data, "description", nothing), collections=get(config_data, "collections", nothing), # collections=let c = settings.Export.collections # c === nothing ? 
nothing : [ # v for (k,v) in sort(pairs(c); by=((k,v)) -> parse(Int, k)) # ], ) end function generate_index_json( sessions::Vector{NotebookSession}; settings::PlutoDeploySettings, start_dir::AbstractString, ) p = joinpath(start_dir, "pluto_export_configuration.json") config_data = if isfile(p) JSON.parse(read(p, String))::Dict{String,Any} else Dict{String,Any}() end result = json_data(sessions; settings, start_dir, config_data) JSON.json(result) end
using PlutoSliderServer ENV["JULIA_DEBUG"] = PlutoSliderServer import Random Random.seed!(time_ns()) test_dir = tempname(cleanup=false) cp(@__DIR__, test_dir) try # open the folder on macos: run(`open $(test_dir)`) catch end port = 2341 # We want to use our local development version of Pluto for the frontend assets. This allows us to change and test the Pluto assets without having to restart the slider server. cdn = "http://127.0.0.1:$(port)/pluto_asset/" # cdn = nothing PlutoSliderServer.run_directory( test_dir; SliderServer_port=port, SliderServer_host="127.0.0.1", SliderServer_watch_dir=true, Export_pluto_cdn_root=cdn, )
using Test import PlutoSliderServer: plutohash, base64urlencode, base64urldecode import HTTP @testset "plutohash" begin @test plutohash("Hannes") == "OI48wVWerxEEnz5lIj6CPPRB8NOwwba-LkFYTDp4aUU" @test base64urlencode(UInt8[0, 0, 63, 0, 0, 62, 42]) == "AAA_AAA-Kg" dir = mktempdir() for _ = 1:50 data = rand(UInt8, rand(60:80)) e = base64urlencode(data) @test base64urldecode(e) == data # URI escaping/unescaping should have no effect @test HTTP.unescapeuri(e) == e @test HTTP.escapeuri(e) == e # it should be a legal filename p = joinpath(dir, e) write(p, "123") @test read(p, String) == "123" isfile(p) rm(p) @assert !isfile(p) end end
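The invariants exercised by the `plutohash` testset above — a URL-safe base64 alphabet, no `=` padding, output that survives URI escaping and is a legal filename — can be sketched outside Julia. A minimal Python equivalent, assuming the same unpadded base64url encoding (RFC 4648 §5); these helper names simply mirror the Julia functions and are not part of PlutoSliderServer:

```python
import base64


def base64urlencode(data: bytes) -> str:
    # URL-safe alphabet ("-" and "_" instead of "+" and "/"), with "=" padding stripped,
    # so the result needs no URI escaping and is safe to use as a filename.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def base64urldecode(s: str) -> bytes:
    # Re-add the padding that was stripped during encoding before decoding.
    pad = -len(s) % 4
    return base64.urlsafe_b64decode(s + "=" * pad)
```

The round-trip property `base64urldecode(base64urlencode(x)) == x` is exactly what the Julia loop checks for 50 random byte vectors.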
import PlutoSliderServer
ENV["JULIA_DEBUG"] = PlutoSliderServer

# To run a test server:
# - Change the line below to `true`.
just_run_test_server = false
# - Pretend to run the tests: `pkg> test PlutoSliderServer`
#
# This will:
# - Create a temporary folder with some notebook files.
# - Run the slider server on that folder on a random port. (Check the terminal for the localhost URL.)
# - Use your local copy of Pluto for the JS assets, instead of getting them from the CDN.
#   This means that you can edit Pluto's JS files, refresh, and see the changes!

if just_run_test_server
    include("./runtestserver.jl")
else
    cache_dir = tempname(cleanup=false)
    ENV["HIDE_PLUTO_EXACT_VERSION_WARNING"] = "true"

    include("./plutohash.jl")
    include("./configuration.jl")
    include("./static export.jl")
    include("./HTTP requests.jl")
    include("./Folder watching.jl")
    include("./Bond connections.jl")
end
import PlutoSliderServer
import PlutoSliderServer: bound_variable_connections_graph
import PlutoSliderServer.Pluto
using Test

@testset "Bond connections" begin
    file = joinpath(@__DIR__, "parallelpaths4.jl")

    newpath = tempname()
    Pluto.readwrite(file, newpath)

    notebook = Pluto.load_notebook(newpath)
    s = Pluto.ServerSession()
    s.options.evaluation.workspace_use_distributed = false
    Pluto.update_run!(s, notebook, notebook.cells)
    # notebook.topology = Pluto.updated_topology(notebook.topology, notebook, notebook.cells)

    # bound_variables = (map(notebook.cells) do cell
    #     MoreAnalysis.find_bound_variables(cell.parsedcode)
    # end)
    # @show bound_variables

    connections = bound_variable_connections_graph(s, notebook)
    # @show connections

    @test !isempty(connections)

    wanted_connections = Dict(
        :x => [:y, :x],
        :y => [:y, :x],
        :show_dogs => [:show_dogs],
        :b => [:b],
        :c => [:c],
        :five1 => [:five1],
        :five2 => [:five2],
        :six1 => [:six2, :six1],
        :six2 => [:six3, :six2, :six1],
        :six3 => [:six3, :six2],
        :cool1 => [:cool1, :cool2],
        :cool2 => [:cool1, :cool2],
        :world => [:world],
        :boring => [:boring],
        :custom_macro => [:custom_macro],
    )

    transform(d) = Dict(k => sort(v) for (k, v) in d)

    @test transform(connections) == transform(wanted_connections)
end
### A Pluto.jl notebook ###
# v0.12.20

using Markdown
using InteractiveUtils

# This Pluto notebook uses @bind for interactivity. When running this notebook outside of Pluto, the following 'mock version' of @bind gives bound variables a default value (instead of an error).
macro bind(def, element)
    quote
        local el = $(esc(element))
        global $(esc(def)) = Core.applicable(Base.get, el) ? Base.get(el) : missing
        el
    end
end

# ╔═╡ bcc82d12-6566-11eb-325f-8b6c4ac362a0
begin
    import Pkg
    Pkg.activate(mktempdir())
    Pkg.add("PlutoUI")
    using PlutoUI
end

# ╔═╡ 7f2c6b8a-6be9-4c64-b0b5-7fc4435153ee
@bind x Slider(50:100)

# ╔═╡ 2995d591-0f74-44e8-9c06-c42c2f9c68f8
@bind y Slider(x:200)

# ╔═╡ 6bc11e12-3bdb-4ca4-a36d-f8067af95ca5
x

# ╔═╡ 80789650-d01f-4d75-8091-6117a66402cb
y

# ╔═╡ Cell order:
# ╠═bcc82d12-6566-11eb-325f-8b6c4ac362a0
# ╠═7f2c6b8a-6be9-4c64-b0b5-7fc4435153ee
# ╠═2995d591-0f74-44e8-9c06-c42c2f9c68f8
# ╠═6bc11e12-3bdb-4ca4-a36d-f8067af95ca5
# ╠═80789650-d01f-4d75-8091-6117a66402cb
using PlutoSliderServer
using FromFile
using Test

toml_path = tempname()
write(toml_path, PlutoSliderServer.sample_config_toml_file)

result = PlutoSliderServer.get_configuration(toml_path; Export_binder_url="yayy")

@test result.Export.binder_url == "yayy"
@test result.Pluto.compiler.threads == 1
### A Pluto.jl notebook ###
# v0.12.20

#> [frontmatter]
#> title = "Pancakes"
#> description = "are yummy 🥞"
#> tags = ["pancake", "hungry", "snack"]

using Markdown
using InteractiveUtils

# This Pluto notebook uses @bind for interactivity. When running this notebook outside of Pluto, the following 'mock version' of @bind gives bound variables a default value (instead of an error).
macro bind(def, element)
    quote
        local el = $(esc(element))
        global $(esc(def)) = Core.applicable(Base.get, el) ? Base.get(el) : missing
        el
    end
end

# ╔═╡ 635e5ebc-6567-11eb-1d9d-f98bfca7ec99

# ╔═╡ 635e5ebc-6567-11eb-1d9d-f98bfca7ec27
@bind x html"<input type=range>"

# ╔═╡ ad853ac9-a8a0-44ef-8d41-c8cea165ad57
@bind y html"<input type=range>"

# ╔═╡ 26025270-9b5e-4841-b295-0c47437bc7db
x + y

# ╔═╡ 1352da54-e567-4f59-a3da-19ed3f4bb7c7

# ╔═╡ 09ae27fa-525a-4211-b252-960cdbaf1c1e
b = @bind s html"<input>"

# ╔═╡ ed78a6f1-d282-4d80-8f42-40701aeadb52
b

# ╔═╡ cca2f726-0c25-43c6-85e4-c16ec192d464
s

# ╔═╡ c4f51980-3c30-4d3f-a76a-fc0f0fe16944

# ╔═╡ 8f0bd329-36b8-45ed-b80d-24661242129a
b2 = @bind s2 html"<input>"

# ╔═╡ c55a107f-5d7d-4396-b597-8c1ae07c35be
b2

# ╔═╡ a524ff27-a6a3-4f14-8ed4-f55700647bc4
sleep(1); s2

# ╔═╡ Cell order:
# ╠═635e5ebc-6567-11eb-1d9d-f98bfca7ec99
# ╠═635e5ebc-6567-11eb-1d9d-f98bfca7ec27
# ╠═ad853ac9-a8a0-44ef-8d41-c8cea165ad57
# ╠═26025270-9b5e-4841-b295-0c47437bc7db
# ╟─1352da54-e567-4f59-a3da-19ed3f4bb7c7
# ╠═09ae27fa-525a-4211-b252-960cdbaf1c1e
# ╠═ed78a6f1-d282-4d80-8f42-40701aeadb52
# ╠═cca2f726-0c25-43c6-85e4-c16ec192d464
# ╟─c4f51980-3c30-4d3f-a76a-fc0f0fe16944
# ╠═8f0bd329-36b8-45ed-b80d-24661242129a
# ╠═c55a107f-5d7d-4396-b597-8c1ae07c35be
# ╠═a524ff27-a6a3-4f14-8ed4-f55700647bc4
using PlutoSliderServer using PlutoSliderServer: list_files_recursive using Test using Logging import JSON import Pluto: without_pluto_file_extension import Random original_dir1 = joinpath(@__DIR__, "dir1") make_test_dir() = let Random.seed!(time_ns()) new = tempname(cleanup=false) cp(original_dir1, new) new end @testset "static - Basic github action" begin test_dir = make_test_dir() @show test_dir cache_dir cd(test_dir) @test sort(list_files_recursive()) == sort(["a.jl", "b.pluto.jl", "notanotebook.jl", "subdir/c.plutojl"]) github_action(Export_cache_dir=cache_dir, Export_baked_state=true) @test sort(list_files_recursive()) == sort([ "index.html", "pluto_export.json", # "a.jl", "a.html", "b.pluto.jl", "b.html", "notanotebook.jl", "subdir/c.plutojl", "subdir/c.html", ]) # Test whether the notebook file did not get changed @test read(joinpath(original_dir1, "a.jl"), String) == read(joinpath(test_dir, "a.jl"), String) # Test cache @test isdir(cache_dir) # should be created @show list_files_recursive(cache_dir) @test length(list_files_recursive(cache_dir)) >= 2 # Test runtime to check that the cache works second_runtime = with_logger(NullLogger()) do 0.1 * @elapsed for i = 1:10 github_action(Export_cache_dir=cache_dir) end end @show second_runtime @test second_runtime < 1.0 @test occursin("slider_server_url = undefined", read("a.html", String)) end @testset "static - Separate state & notebook files" begin test_dir = make_test_dir() @show test_dir cd(test_dir) @test sort(list_files_recursive()) == sort(["a.jl", "b.pluto.jl", "notanotebook.jl", "subdir/c.plutojl"]) config_contents = """ [Export] baked_state = false baked_notebookfile = false binder_url = "pannenkoek" exclude = [ "subdir/*", "sadfsadfdfsadsf", ] """ config_path = tempname() write(config_path, config_contents) github_action( Export_offer_binder=true, Export_slider_server_url="appelsap", config_toml_path=config_path, ) c = PlutoSliderServer.get_configuration( config_path; 
Export_slider_server_url="appelsap", ) @test c.Export.slider_server_url == "appelsap" @test sort(list_files_recursive()) == sort([ "index.html", "pluto_export.json", # "a.jl", "a.html", "a.plutostate", "b.pluto.jl", "b.html", "b.plutostate", "notanotebook.jl", "subdir/c.plutojl", ]) htmlstr_a = replace(read("a.html", String), '\'' => '\"') htmlstr_b = read("b.html", String) # test that export settings were used in the HTML file @test occursin("a.jl", htmlstr_a) @test occursin("a.plutostate", htmlstr_a) @test occursin("pannenkoek", htmlstr_a) @test occursin("appelsap", htmlstr_a) # test that frontmatter is used in the HTML @test occursin("<title>My&lt;Title</title>", htmlstr_a) @test occursin("""<meta name="description" content="ccc">""", htmlstr_a) @test occursin("""<meta property="og:description" content="ccc">""", htmlstr_a) @test occursin("""<meta property="og:article:tag" content="aaa">""", htmlstr_a) @test occursin("""<meta property="og:article:tag" content="bbb">""", htmlstr_a) @test occursin("""<meta property="og:type" content="article">""", htmlstr_a) @test !occursin("<title>", htmlstr_b) end @testset "static - Single notebook" begin test_dir = make_test_dir() # @show test_dir cache_dir cd(test_dir) @test sort(list_files_recursive()) == sort(["a.jl", "b.pluto.jl", "notanotebook.jl", "subdir/c.plutojl"]) export_notebook("a.jl"; Export_cache_dir=cache_dir, Export_baked_state=true) @test sort(list_files_recursive()) == sort([ # no index for single notebooks # "index.html", "a.jl", "a.html", "b.pluto.jl", "notanotebook.jl", "subdir/c.plutojl", ]) end @testset "static - Index HTML and JSON – fancy=$(fancy)" for fancy ∈ (false, true) test_dir = make_test_dir() @show test_dir cache_dir cd(test_dir) @test sort(list_files_recursive()) == sort(["a.jl", "b.pluto.jl", "notanotebook.jl", "subdir/c.plutojl"]) export_directory( Export_cache_dir=cache_dir, Export_baked_state=false, Export_create_pluto_featured_index=fancy, ) @test sort(list_files_recursive()) == sort([ 
"index.html", "pluto_export.json", # "a.jl", "a.html", "a.plutostate", "b.pluto.jl", "b.html", "b.plutostate", "notanotebook.jl", "subdir/c.plutojl", "subdir/c.html", "subdir/c.plutostate", ]) htmlstr = read("index.html", String) jsonstr = read("pluto_export.json", String) json = JSON.parse(jsonstr) if fancy @test occursin("</html>", htmlstr) @test occursin("pluto_export.json", htmlstr) end nbs = ["subdir/c.plutojl", "b.pluto.jl", "a.jl"] for (i, p) in enumerate(nbs) @test occursin(p, jsonstr) if !fancy @test occursin(p |> without_pluto_file_extension, htmlstr) end @test !isempty(json["notebooks"][p]["frontmatter"]["title"]) without_pluto_file_extension end # TODO: use frontmatter.title here instead of the filename? or make the switch to PlutoPages? end
import PlutoSliderServer import PlutoSliderServer.Pluto import PlutoSliderServer.HTTP using Test using UUIDs, Random @testset "HTTP requests: dynamic" begin Random.seed!(time_ns()) test_dir = tempname(cleanup=false) cp(@__DIR__, test_dir) notebook_paths = ["basic2.jl", "parallelpaths4.jl"] port = rand(12345:65000) still_booting = Ref(true) ready_result = Ref{Any}(nothing) function on_ready(result) ready_result[] = result still_booting[] = false end t = Pluto.@asynclog begin PlutoSliderServer.run_directory( test_dir; Export_enabled=false, Export_cache_dir=cache_dir, SliderServer_port=port, notebook_paths, on_ready, ) end while still_booting[] sleep(0.1) end notebook_sessions = ready_result[].notebook_sessions @show notebook_paths [ (s.path, typeof(s.run), s.current_hash) for s in notebook_sessions ] @testset "Bond connections - $(name)" for (i, name) in enumerate(notebook_paths) s = notebook_sessions[i] response = HTTP.get("http://localhost:$(port)/bondconnections/$(s.current_hash)/") result = Pluto.unpack(response.body) @test result == Dict(String(k) => String.(v) for (k, v) in s.run.bond_connections) end @testset "State request - basic2.jl" begin i = 1 s = notebook_sessions[i] @testset "Method $(method)" for method in ["GET", "POST"], x = 30:33 v(x) = Dict("value" => x) bonds = Dict("x" => v(x), "y" => v(42)) state = Pluto.unpack(Pluto.pack(s.run.original_state)) sum_cell_id = "26025270-9b5e-4841-b295-0c47437bc7db" response = if method == "GET" arg = Pluto.pack(bonds) |> PlutoSliderServer.base64urlencode # escaping should have no effect @test HTTP.URIs.escapeuri(arg) == arg HTTP.request( method, "http://localhost:$(port)/staterequest/$(s.current_hash)/$(arg)", ) else HTTP.request( method, "http://localhost:$(port)/staterequest/$(s.current_hash)/", [], Pluto.pack(bonds), ) end result = Pluto.unpack(response.body) @test sum_cell_id ∈ result["ids_of_cells_that_ran"] for patch in result["patches"] Pluto.Firebasey.applypatch!( state, convert(Pluto.Firebasey.JSONPatch, 
patch), ) end @test state["cell_results"][sum_cell_id]["output"]["body"] == string(bonds["x"]["value"] + bonds["y"]["value"]) end end close(ready_result[].http_server) try wait(t) catch e if !(e isa TaskFailedException) rethrow(e) end end # schedule(t, InterruptException(), error=true) @info "DONEZO" end find(f, xs) = xs[findfirst(f, xs)] original_dir1 = joinpath(@__DIR__, "dir1") make_test_dir() = let Random.seed!(time_ns()) new = tempname(cleanup=false) cp(original_dir1, new) new end @testset "HTTP requests: static" begin test_dir = make_test_dir() let # add one more file that we will only export, but not run in the slider server old = read(joinpath(test_dir, "a.jl"), String) new = replace(old, "Hello" => "Hello again") @assert old != new mkpath(joinpath(test_dir, "x", "y", "z")) write(joinpath(test_dir, "x", "y", "z", "export_only.jl"), new) end port = rand(12345:65000) still_booting = Ref(true) ready_result = Ref{Any}(nothing) function on_ready(result) ready_result[] = result still_booting[] = false end t = Pluto.@asynclog begin PlutoSliderServer.run_directory( test_dir; Export_enabled=true, Export_baked_notebookfile=false, Export_baked_state=false, Export_cache_dir=cache_dir, SliderServer_port=port, SliderServer_exclude=["*/export_only*"], on_ready, ) end while still_booting[] sleep(0.1) end notebook_sessions = ready_result[].notebook_sessions s_a = find(s -> occursin("a.jl", s.path), notebook_sessions) s_export_only = find(s -> occursin("export_only", s.path), notebook_sessions) response = HTTP.request( "GET", "http://localhost:$(port)/bondconnections/$(s_a.current_hash)/"; status_exception=false, ) @test response.status == 422 # notebook is no longer running since it has no bonds @test s_export_only.run isa PlutoSliderServer.var"../Types.jl".FinishedNotebook response_export_only = HTTP.request( "GET", "http://localhost:$(port)/bondconnections/$(s_export_only.current_hash)/"; status_exception=false, ) @test response_export_only.status == 422 # this notebook 
is not in the slider server but was exported response_no_notebook = HTTP.request( "GET", "http://localhost:$(port)/bondconnections/$(plutohash("abc"))/"; status_exception=false, ) @test response_no_notebook.status == 404 # this notebook is not in the slider server asset_urls = [ "" "pluto_export.json" # "a.html" "a.jl" "a.plutostate" "b.html" "b.pluto.jl" "b.plutostate" "subdir/c.html" "subdir/c.plutojl" "subdir/c.plutostate" ] @testset "Static asset - $(name)" for (i, name) in enumerate(asset_urls) response = HTTP.request("GET", "http://localhost:$(port)/$(name)") @show response.headers @test response.status == 200 if endswith(name, "html") @test HTTP.hasheader(response, "Content-Type", "text/html; charset=utf-8") end @test HTTP.hasheader(response, "Access-Control-Allow-Origin", "*") @test HTTP.hasheader(response, "Referrer-Policy", "origin-when-cross-origin") if i > 2 @test HTTP.hasheader(response, "Content-Length") end end close(ready_result[].http_server) try wait(t) catch e if !(e isa TaskFailedException) rethrow(e) end end # schedule(t, InterruptException(), error=true) @info "DONEZO" end
### A Pluto.jl notebook ### # v0.19.15 using Markdown using InteractiveUtils # This Pluto notebook uses @bind for interactivity. When running this notebook outside of Pluto, the following 'mock version' of @bind gives bound variables a default value (instead of an error). macro bind(def, element) quote local iv = try Base.loaded_modules[Base.PkgId(Base.UUID("6e696c72-6542-2067-7265-42206c756150"), "AbstractPlutoDingetjes")].Bonds.initial_value catch; b -> missing; end local el = $(esc(element)) global $(esc(def)) = Core.applicable(Base.get, el) ? Base.get(el) : iv(el) el end end # ╔═╡ 8aac8df3-1551-4c9f-a8bd-a62751a29b2a md""" ## Path 1 """ # ╔═╡ 03307e43-cb61-4321-95ac-7bbb16e0cfc6 @bind x html"<input type=range max=10000>" # ╔═╡ 692746e0-7c96-47ac-b1ee-5ff34ee66751 @bind y html"<input type=range max=10000>" # ╔═╡ b18c2329-18d7-4041-962c-0ef98f8aa591 (x,y) # ╔═╡ 399ef117-2085-4c9d-9d8d-d03a03baf5ef md""" ## Path 2 """ # ╔═╡ a3a04d5f-b0f0-4740-9b37-92570864f142 high_res = true # ╔═╡ a822ac82-5691-4f8e-a60a-1a4582cf59e7 dog_file = if high_res download("https://upload.wikimedia.org/wikipedia/commons/e/ef/Pluto_in_True_Color_-_High-Res.jpg") else download("https://upload.wikimedia.org/wikipedia/commons/thumb/e/ef/Pluto_in_True_Color_-_High-Res.jpg/240px-Pluto_in_True_Color_-_High-Res.jpg") end # ╔═╡ 627461ce-9e80-4707-b0ba-ddc6bb9b4269 begin struct Dog end function Base.show(io::IO, ::MIME"image/jpg", ::Dog) write(io, read(dog_file)) end end # ╔═╡ 5ce8ebc6-b509-42f0-acd5-8008673b04ab md"Downloaded image is $(filesize(dog_file) / 1000) kB" # ╔═╡ 1f48fe19-3ee8-44ac-a591-7b4df2d2f93a md""" This cell will have very large Uint8Arrays in the output body """ # ╔═╡ 5539db10-b0d2-48b6-8985-ef437b8ae0b5 @bind show_dogs html"<input type=checkbox>" # ╔═╡ 74329553-ab9b-4b6c-a77b-9c24ac48490b show_dogs === true && Dog() # ╔═╡ 8811b8bd-7d44-4f4a-b52d-0948dad39a51 md""" ## Path 3 """ # ╔═╡ f0575125-d7dd-4cf5-bfd3-3275d6bdd0ce a = 100 # ╔═╡ 802be3da-6f24-4693-b48e-0479aec4a02c begin a 
@bind b html"<input>" end # ╔═╡ 997f714f-1df4-4b2c-ab0a-50dd52cb4a82 md""" ## Path 4 """ # ╔═╡ a504bc18-fff4-4cd7-b74c-173abcf1ef68 begin a @bind c html"<input>" end # ╔═╡ 00a0169e-fd93-4dfe-b45b-f088312e24ab md""" ## Path 5 """ # ╔═╡ e7b5f5ec-d673-4397-ac0c-7a4bfc364fde f(x) = x # ╔═╡ 0a61c092-e61a-4219-a57f-172a8c8c4117 begin f(1) @bind five1 html"<input>" end # ╔═╡ 31695eb5-7d77-4d6b-a32a-ff31cf1fd8e9 md""" ## Path 6 """ # ╔═╡ 25774e63-94fb-4b03-a43a-72a62f08f0c5 begin f(1) @bind five2 html"<input>" end # ╔═╡ a04672a7-3d3f-4b0b-8c8f-f8b73f208de4 md""" ## Path 7 """ # ╔═╡ 2d0f4354-20e0-47a0-8a86-99cd50bca80f @bind six1 html"<input>" # ╔═╡ fee1f75a-5f3e-4213-8ba3-872871cf7f68 @bind six2 html"<input>" # ╔═╡ 20ebc38b-24c1-4e39-97b0-dcd53a9a5bf7 @bind six3 html"<input>" # ╔═╡ 5545ac33-82e9-4994-be49-5776d512e2c1 (six1, six2) # ╔═╡ b776eba5-60bc-4d0f-8e93-e2baf2f695bc md""" ## Path 8 """ # ╔═╡ a56b8b24-bf38-49bf-b19e-d8feadd55db3 (six2, six3) # ╔═╡ 6784f0e5-5108-48ca-99ac-9d154b1d3c55 md""" ## Path 9 """ # ╔═╡ aacc0632-797f-49d6-b9ce-3e728fade6c3 begin @bind cool1 let @bind cool2 html"nothing" html"<input type=range>" end end # ╔═╡ 6357cfaf-e6d6-49f2-b3bb-5a27b28cf0fa cool1 # ╔═╡ 4fd43c26-dd05-4520-93e9-901760ef49b4 cool2 # ╔═╡ 357762fc-52fe-4727-b0a8-af55eea466b7 md""" ## Path 10 """ # ╔═╡ 0024288a-9ee3-42b5-82c6-d996f45be9ed let md""" Hello $(@bind world html"<input value=world>") """ end # ╔═╡ ef732ddf-b034-42cf-815f-1c8b53da6401 world # ╔═╡ c143b2de-78c5-46ad-852f-3c2a9115cb72 md""" ## Path 11 """ # ╔═╡ cf628a57-933b-4984-a317-63360c345534 @bind boring html"<input>" # ╔═╡ 22659c85-700f-4dad-a22a-7aafa71225c0 # boring is never referenced # ╔═╡ 5bea1d75-a8d5-4285-b0cf-234fcfe8122f md""" ## Path 12 """ # ╔═╡ 3d96cc48-9c73-46e9-869b-eb72231f283e macro bindname(name::Symbol, ex::Expr) name_str = "$name: " quote Markdown.MD([Markdown.Paragraph([Markdown.Bold($name_str), (@bind $name html"<input>")])]) end end # ╔═╡ dc5d8314-d2d1-477c-9ccd-882069ee4210 @bindname 
custom_macro html"<input>" # ╔═╡ Cell order: # ╟─8aac8df3-1551-4c9f-a8bd-a62751a29b2a # ╠═03307e43-cb61-4321-95ac-7bbb16e0cfc6 # ╠═692746e0-7c96-47ac-b1ee-5ff34ee66751 # ╠═b18c2329-18d7-4041-962c-0ef98f8aa591 # ╟─399ef117-2085-4c9d-9d8d-d03a03baf5ef # ╠═627461ce-9e80-4707-b0ba-ddc6bb9b4269 # ╠═a3a04d5f-b0f0-4740-9b37-92570864f142 # ╠═a822ac82-5691-4f8e-a60a-1a4582cf59e7 # ╟─5ce8ebc6-b509-42f0-acd5-8008673b04ab # ╟─1f48fe19-3ee8-44ac-a591-7b4df2d2f93a # ╠═5539db10-b0d2-48b6-8985-ef437b8ae0b5 # ╠═74329553-ab9b-4b6c-a77b-9c24ac48490b # ╟─8811b8bd-7d44-4f4a-b52d-0948dad39a51 # ╠═f0575125-d7dd-4cf5-bfd3-3275d6bdd0ce # ╠═802be3da-6f24-4693-b48e-0479aec4a02c # ╟─997f714f-1df4-4b2c-ab0a-50dd52cb4a82 # ╠═a504bc18-fff4-4cd7-b74c-173abcf1ef68 # ╟─00a0169e-fd93-4dfe-b45b-f088312e24ab # ╠═e7b5f5ec-d673-4397-ac0c-7a4bfc364fde # ╠═0a61c092-e61a-4219-a57f-172a8c8c4117 # ╟─31695eb5-7d77-4d6b-a32a-ff31cf1fd8e9 # ╠═25774e63-94fb-4b03-a43a-72a62f08f0c5 # ╟─a04672a7-3d3f-4b0b-8c8f-f8b73f208de4 # ╠═2d0f4354-20e0-47a0-8a86-99cd50bca80f # ╠═fee1f75a-5f3e-4213-8ba3-872871cf7f68 # ╠═20ebc38b-24c1-4e39-97b0-dcd53a9a5bf7 # ╠═5545ac33-82e9-4994-be49-5776d512e2c1 # ╟─b776eba5-60bc-4d0f-8e93-e2baf2f695bc # ╠═a56b8b24-bf38-49bf-b19e-d8feadd55db3 # ╟─6784f0e5-5108-48ca-99ac-9d154b1d3c55 # ╠═aacc0632-797f-49d6-b9ce-3e728fade6c3 # ╠═6357cfaf-e6d6-49f2-b3bb-5a27b28cf0fa # ╠═4fd43c26-dd05-4520-93e9-901760ef49b4 # ╟─357762fc-52fe-4727-b0a8-af55eea466b7 # ╠═0024288a-9ee3-42b5-82c6-d996f45be9ed # ╠═ef732ddf-b034-42cf-815f-1c8b53da6401 # ╟─c143b2de-78c5-46ad-852f-3c2a9115cb72 # ╠═cf628a57-933b-4984-a317-63360c345534 # ╠═22659c85-700f-4dad-a22a-7aafa71225c0 # ╟─5bea1d75-a8d5-4285-b0cf-234fcfe8122f # ╠═3d96cc48-9c73-46e9-869b-eb72231f283e # ╠═dc5d8314-d2d1-477c-9ccd-882069ee4210
import PlutoSliderServer import PlutoSliderServer.Pluto import PlutoSliderServer.HTTP using Test using UUIDs using Base64 import JSON function poll(query::Function, timeout::Real=Inf64, interval::Real=1 / 20) start = time() while time() < start + timeout if query() return true end sleep(interval) end return false end select(f::Function, xs) = for x in xs if f(x) return x end end """ Like [`Base.cp`](@ref), but it slightly tweaks the file contents (a Julia comment is inserted into the header) to make it unique. """ function cp_nb_with_tweaks(from::String, to::String) contents = read(from, String) key = "using Markdown" @assert occursin(key, contents) write(to, replace(contents, key => key * " # " * string(uuid1()), count=1)) end @testset "Folder watching" begin test_dir = mktempdir(cleanup=false) try # open the folder on macos: run(`open $(test_dir)`) catch end notebook_paths_to_copy = ["basic2.jl"] for p in notebook_paths_to_copy cp_nb_with_tweaks(joinpath(@__DIR__, p), joinpath(test_dir, p)) end Random.seed!(time_ns()) port = rand(12345:65000) still_booting = Ref(true) ready_result = Ref{Any}(nothing) function on_ready(result) ready_result[] = result still_booting[] = false end t = Pluto.@asynclog PlutoSliderServer.run_directory( test_dir; Export_enabled=false, Export_output_dir=test_dir, SliderServer_port=port, SliderServer_watch_dir=true, on_ready, ) while still_booting[] sleep(0.1) end notebook_sessions = ready_result[].notebook_sessions index_json() = JSON.parse(String(HTTP.get("http://localhost:$(port)/pluto_export.json").body)) json_nbs() = index_json()["notebooks"] |> keys |> collect @test length(notebook_sessions) == 1 @test json_nbs() == ["basic2.jl"] @test index_json()["notebooks"]["basic2.jl"]["frontmatter"]["title"] == "Pancakes" @test index_json()["notebooks"]["basic2.jl"]["frontmatter"]["description"] == "are yummy 🥞" @testset "Adding a file" begin cp_nb_with_tweaks( joinpath(test_dir, "basic2.jl"), joinpath(test_dir, "basic2 copy.jl"), ) @test 
poll(10, 1 / 20) do length(notebook_sessions) == length(notebook_paths_to_copy) + 1 end newsesh = () -> select(s -> s.path == "basic2 copy.jl", notebook_sessions) @test !isnothing(newsesh()) @test newsesh().current_hash != newsesh().desired_hash @test poll(60, 1 / 20) do newsesh().current_hash == newsesh().desired_hash end @test isfile(joinpath(test_dir, "basic2 copy.html")) @test !occursin( "slider_server_url = undefined", read(joinpath(test_dir, "basic2 copy.html"), String), ) @test occursin( "slider_server_url = \".\"", read(joinpath(test_dir, "basic2 copy.html"), String), ) @test json_nbs() == ["basic2.jl", "basic2 copy.jl"] end @testset "Removing a file" begin rm(joinpath(test_dir, "basic2 copy.jl")) @test !isfile(joinpath(test_dir, "basic2 copy.jl")) @test poll(30, 1 / 20) do length(notebook_sessions) == length(notebook_paths_to_copy) end @test !isfile(joinpath(test_dir, "basic2 copy.html")) end coolsesh = () -> select(s -> s.path == "subdir/cool.jl", notebook_sessions) coolcontents() = read(joinpath(test_dir, "subdir", "cool.html"), String) @testset "Adding a file (again)" begin mkdir(joinpath(test_dir, "subdir")) cp_nb_with_tweaks( joinpath(test_dir, "basic2.jl"), joinpath(test_dir, "subdir", "cool.jl"), ) @test poll(60, 1 / 20) do isfile(joinpath(test_dir, "subdir", "cool.html")) end @test poll(5, 1 / 20) do coolsesh().current_hash == coolsesh().desired_hash end @test !occursin("slider_server_url = undefined", coolcontents()) @test occursin("slider_server_url = \"..\"", coolcontents()) end @testset "Update an existing file" begin coolconnectionurl(file_hash) = "http://localhost:$(port)/bondconnections/$(file_hash)/" coolbondsurl(file_hash) = "http://localhost:$(port)/staterequest/$(file_hash)/asdf" function coolconnectionkeys() response = HTTP.get(coolconnectionurl(coolsesh().current_hash)) result = Pluto.unpack(response.body) keys(result) |> collect |> sort end @test coolconnectionkeys() == sort(["x", "y", "s", "s2"]) old_html_contents = coolcontents() 
old_hash = coolsesh().current_hash Pluto.readwrite( joinpath(@__DIR__, "parallelpaths4.jl"), joinpath(test_dir, "subdir", "cool.jl"), ) @test poll(5, 1 / 60) do coolsesh().current_hash != coolsesh().desired_hash end @test coolsesh().current_hash == old_hash @test HTTP.get( coolconnectionurl(old_hash); retry=false, status_exception=false, ).status == 404 @test HTTP.get( coolbondsurl(old_hash); retry=false, status_exception=false, ).status == 404 @test HTTP.get( coolconnectionurl(coolsesh().desired_hash); retry=false, status_exception=false, ).status == 503 @test isfile(joinpath(test_dir, "subdir", "cool.html")) @test poll(60, 1 / 60) do coolsesh().current_hash == coolsesh().desired_hash end @test HTTP.get( coolconnectionurl(coolsesh().current_hash); retry=false, status_exception=false, ).status == 200 @test isfile(joinpath(test_dir, "subdir", "cool.html")) @test coolcontents() != old_html_contents @test coolconnectionkeys() == sort([ "x", "y", "show_dogs", "b", "c", "five1", "five2", "six1", "six2", "six3", "cool1", "cool2", "world", "boring", "custom_macro", ]) end sleep(2) close(ready_result[].http_server) try wait(t) catch e if !(e isa TaskFailedException) rethrow(e) end end # schedule(t, InterruptException(), error=true) @info "DONEZO" end
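The `poll(query, timeout, interval)` helper used throughout the folder-watching tests is a generic wait-until pattern: repeatedly evaluate a predicate until it becomes true or a deadline passes. A minimal Python sketch of the same idea, assuming a monotonic clock (this mirrors the Julia helper's signature but is not part of PlutoSliderServer):

```python
import time


def poll(query, timeout=float("inf"), interval=1 / 20):
    # Repeatedly evaluate `query` until it returns True or `timeout` seconds elapse.
    # Returns True on success, False if the deadline passed first.
    start = time.monotonic()
    while time.monotonic() < start + timeout:
        if query():
            return True
        time.sleep(interval)
    return False
```

Using the boolean return value as the `@test`-ed expression, as the Julia tests do, keeps a slow filesystem watcher from turning into a flaky failure: the assertion only fires after the deadline genuinely expires.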
### A Pluto.jl notebook ###
# v0.12.20

using Markdown
using InteractiveUtils

# ╔═╡ ec425e6e-6b9c-11eb-0c63-97fade58f6b4
md"Hello!"

# ╔═╡ Cell order:
# ╠═ec425e6e-6b9c-11eb-0c63-97fade58f6b4
### A Pluto.jl notebook ###
# v0.12.123123

#> [frontmatter]
#> title = "My<Title"
#> author_url = "https://github.com/JuliaPluto"
#> image = "https://upload.wikimedia.org/wikipedia/commons/9/99/Unofficial_JavaScript_logo_2.svg"
#> tags = ["aaa", "bbb"]
#> author_name = "Pluto.jl"
#> description = "ccc"
#> license = "Unlicense"

using Markdown
using InteractiveUtils

# ╔═╡ ec425e6e-6b9c-11eb-0c63-97fade58f6b4
md"Hello!"

# ╔═╡ Cell order:
# ╠═ec425e6e-6b9c-11eb-0c63-97fade58f6b4
### A Pluto.jl notebook ###
# v0.12.20

using Markdown
using InteractiveUtils

# ╔═╡ ec425e6e-6b9c-11eb-0c63-97fade58f6b4
md"Hello!"

# ╔═╡ Cell order:
# ╠═ec425e6e-6b9c-11eb-0c63-97fade58f6b4
# Project-wide Gradle settings.
# IDE (e.g. Android Studio) users:
# Gradle settings configured through the IDE *will override*
# any settings specified in this file.
# For more details on how to configure your build environment visit
# http://www.gradle.org/docs/current/userguide/build_environment.html
# Specifies the JVM arguments used for the daemon process.
# The setting is particularly useful for tweaking memory settings.
org.gradle.jvmargs=-Xmx2048m -Dfile.encoding=UTF-8
# When configured, Gradle will run in incubating parallel mode.
# This option should only be used with decoupled projects. More details, visit
# http://www.gradle.org/docs/current/userguide/multi_project_builds.html#sec:decoupled_projects
# org.gradle.parallel=true
# AndroidX package structure to make it clearer which packages are bundled with the
# Android operating system, and which are packaged with your app's APK
# https://developer.android.com/topic/libraries/support-library/androidx-rn
android.useAndroidX=true
# Automatically convert third-party libraries to use AndroidX
android.enableJetifier=true
# Kotlin code style for this project: "official" or "obsolete":
kotlin.code.style=official
{ "repo_name": "brian-norman/RandomKolor", "stars": "46", "repo_language": "Kotlin", "file_name": "gradle-wrapper.properties", "mime_type": "text/plain" }
@if "%DEBUG%" == "" @echo off @rem ########################################################################## @rem @rem Gradle startup script for Windows @rem @rem ########################################################################## @rem Set local scope for the variables with windows NT shell if "%OS%"=="Windows_NT" setlocal set DIRNAME=%~dp0 if "%DIRNAME%" == "" set DIRNAME=. set APP_BASE_NAME=%~n0 set APP_HOME=%DIRNAME% @rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. set DEFAULT_JVM_OPTS= @rem Find java.exe if defined JAVA_HOME goto findJavaFromJavaHome set JAVA_EXE=java.exe %JAVA_EXE% -version >NUL 2>&1 if "%ERRORLEVEL%" == "0" goto init echo. echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. echo. echo Please set the JAVA_HOME variable in your environment to match the echo location of your Java installation. goto fail :findJavaFromJavaHome set JAVA_HOME=%JAVA_HOME:"=% set JAVA_EXE=%JAVA_HOME%/bin/java.exe if exist "%JAVA_EXE%" goto init echo. echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% echo. echo Please set the JAVA_HOME variable in your environment to match the echo location of your Java installation. goto fail :init @rem Get command-line arguments, handling Windows variants if not "%OS%" == "Windows_NT" goto win9xME_args :win9xME_args @rem Slurp the command line arguments. 
set CMD_LINE_ARGS= set _SKIP=2 :win9xME_args_slurp if "x%~1" == "x" goto execute set CMD_LINE_ARGS=%* :execute @rem Setup the command line set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar @rem Execute Gradle "%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS% :end @rem End local scope for the variables with windows NT shell if "%ERRORLEVEL%"=="0" goto mainEnd :fail rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of rem the _cmd.exe /c_ return code! if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1 exit /b 1 :mainEnd if "%OS%"=="Windows_NT" endlocal :omega
{ "repo_name": "brian-norman/RandomKolor", "stars": "46", "repo_language": "Kotlin", "file_name": "gradle-wrapper.properties", "mime_type": "text/plain" }
#!/usr/bin/env sh ############################################################################## ## ## Gradle start up script for UN*X ## ############################################################################## # Attempt to set APP_HOME # Resolve links: $0 may be a link PRG="$0" # Need this for relative symlinks. while [ -h "$PRG" ] ; do ls=`ls -ld "$PRG"` link=`expr "$ls" : '.*-> \(.*\)$'` if expr "$link" : '/.*' > /dev/null; then PRG="$link" else PRG=`dirname "$PRG"`"/$link" fi done SAVED="`pwd`" cd "`dirname \"$PRG\"`/" >/dev/null APP_HOME="`pwd -P`" cd "$SAVED" >/dev/null APP_NAME="Gradle" APP_BASE_NAME=`basename "$0"` # Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. DEFAULT_JVM_OPTS="" # Use the maximum available, or set MAX_FD != -1 to use that value. MAX_FD="maximum" warn () { echo "$*" } die () { echo echo "$*" echo exit 1 } # OS specific support (must be 'true' or 'false'). cygwin=false msys=false darwin=false nonstop=false case "`uname`" in CYGWIN* ) cygwin=true ;; Darwin* ) darwin=true ;; MINGW* ) msys=true ;; NONSTOP* ) nonstop=true ;; esac CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar # Determine the Java command to use to start the JVM. if [ -n "$JAVA_HOME" ] ; then if [ -x "$JAVA_HOME/jre/sh/java" ] ; then # IBM's JDK on AIX uses strange locations for the executables JAVACMD="$JAVA_HOME/jre/sh/java" else JAVACMD="$JAVA_HOME/bin/java" fi if [ ! -x "$JAVACMD" ] ; then die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME Please set the JAVA_HOME variable in your environment to match the location of your Java installation." fi else JAVACMD="java" which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. Please set the JAVA_HOME variable in your environment to match the location of your Java installation." fi # Increase the maximum file descriptors if we can. 
if [ "$cygwin" = "false" -a "$darwin" = "false" -a "$nonstop" = "false" ] ; then MAX_FD_LIMIT=`ulimit -H -n` if [ $? -eq 0 ] ; then if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then MAX_FD="$MAX_FD_LIMIT" fi ulimit -n $MAX_FD if [ $? -ne 0 ] ; then warn "Could not set maximum file descriptor limit: $MAX_FD" fi else warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT" fi fi # For Darwin, add options to specify how the application appears in the dock if $darwin; then GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\"" fi # For Cygwin, switch paths to Windows format before running java if $cygwin ; then APP_HOME=`cygpath --path --mixed "$APP_HOME"` CLASSPATH=`cygpath --path --mixed "$CLASSPATH"` JAVACMD=`cygpath --unix "$JAVACMD"` # We build the pattern for arguments to be converted via cygpath ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null` SEP="" for dir in $ROOTDIRSRAW ; do ROOTDIRS="$ROOTDIRS$SEP$dir" SEP="|" done OURCYGPATTERN="(^($ROOTDIRS))" # Add a user-defined pattern to the cygpath arguments if [ "$GRADLE_CYGPATTERN" != "" ] ; then OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)" fi # Now convert the arguments - kludge to limit ourselves to /bin/sh i=0 for arg in "$@" ; do CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -` CHECK2=`echo "$arg"|egrep -c "^-"` ### Determine if an option if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then ### Added a condition eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"` else eval `echo args$i`="\"$arg\"" fi i=$((i+1)) done case $i in (0) set -- ;; (1) set -- "$args0" ;; (2) set -- "$args0" "$args1" ;; (3) set -- "$args0" "$args1" "$args2" ;; (4) set -- "$args0" "$args1" "$args2" "$args3" ;; (5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;; (6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;; (7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;; (8) set -- "$args0" "$args1" "$args2" 
"$args3" "$args4" "$args5" "$args6" "$args7" ;; (9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;; esac fi # Escape application args save () { for i do printf %s\\n "$i" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/' \\\\/" ; done echo " " } APP_ARGS=$(save "$@") # Collect all arguments for the java command, following the shell quoting and substitution rules eval set -- $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS "\"-Dorg.gradle.appname=$APP_BASE_NAME\"" -classpath "\"$CLASSPATH\"" org.gradle.wrapper.GradleWrapperMain "$APP_ARGS" # by default we should be in the correct project dir, but when run from Finder on Mac, the cwd is wrong if [ "$(uname)" = "Darwin" ] && [ "$HOME" = "$PWD" ]; then cd "$(dirname "$0")" fi exec "$JAVACMD" "$@"
# RandomKolor 🎨

A tiny (Kotlin) library for generating attractive colors

[![](https://jitpack.io/v/brian-norman/RandomKolor.svg)](https://jitpack.io/#brian-norman/RandomKolor)

Inspired by David Merfield's [randomColor.js](https://github.com/davidmerfield/randomColor). You can use the library to generate attractive random colors in Android apps.

This project comes with a sample app which demonstrates some of the functionality:

<img src="screenshot.png" width="300"/>

## Installation

The library is [hosted on JitPack](https://jitpack.io/#brian-norman/RandomKolor/1.0)

1. Add it in your root `build.gradle` at the end of repositories:

```
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
```

2. Add the dependency

```
dependencies {
    implementation 'com.github.brian-norman:RandomKolor:1.0'
}
```

## Options

- ```hue``` – Controls the hue of the generated color. You can pass in a color: `Color.RED`, `Color.ORANGE`, `Color.YELLOW`, `Color.GREEN`, `Color.BLUE`, `Color.PURPLE`, `Color.PINK` and `Color.MONOCHROME` are currently supported.
- ```luminosity``` – Controls the luminosity of the generated color. The library supports `Luminosity.RANDOM`, `Luminosity.BRIGHT`, `Luminosity.LIGHT` and `Luminosity.DARK`.
- ```count``` – An integer that specifies the number of colors to generate.
## API

```kotlin
fun randomColor(hue: Hue = RandomHue, luminosity: Luminosity = Luminosity.RANDOM, format: Format = Format.RGB): String
```

```kotlin
fun randomColors(count: Int, hue: Hue = RandomHue, luminosity: Luminosity = Luminosity.RANDOM, format: Format = Format.RGB): List<String>
```

## Examples

```kotlin
// Generate an attractive random color (RGB)
val rgbString = RandomKolor().randomColor()
```

```kotlin
// Generate a light blue (HEX)
val hexString = RandomKolor().randomColor(hue = ColorHue(BLUE), luminosity = Luminosity.LIGHT, format = Format.HEX)
```

```kotlin
// Generate three light oranges (HSL)
val hslStrings = RandomKolor().randomColors(count = 3, hue = ColorHue(ORANGE), luminosity = Luminosity.LIGHT, format = Format.HSL)
```

## Acknowledgements

- David Merfield's original [library](https://github.com/davidmerfield/randomColor)
- Wei Wang's Swift port [RandomColorSwift](https://github.com/onevcat/RandomColorSwift)

## TODO

- Add alpha to RGB
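Since `randomColor` returns a formatted `String`, the components need to be parsed before they can be turned into an Android color int. A minimal sketch, assuming the `(r, g, b)` wrapper format that the bundled sample app strips (`parseRgb` is a hypothetical helper, not part of the library's API):

```kotlin
// Parse a "(r, g, b)" string into its three integer components.
// The exact output format is an assumption based on the sample app's
// parsing code; adjust the prefix/suffix handling if yours differs.
fun parseRgb(rgbString: String): Triple<Int, Int, Int> {
    val parts = rgbString
        .removePrefix("(")
        .removeSuffix(")")
        .replace("\\s".toRegex(), "")
        .split(',')
        .map(String::toInt)
    require(parts.size == 3) { "Expected three components, got: $rgbString" }
    return Triple(parts[0], parts[1], parts[2])
}

fun main() {
    val (r, g, b) = parseRgb("(255, 128, 0)")
    println("$r $g $b") // prints "255 128 0"
}
```

On Android the resulting triple can then be fed to `Color.rgb(r, g, b)`, as the sample fragment does.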
<?xml version="1.0" encoding="UTF-8"?> <project version="4"> <component name="VcsDirectoryMappings"> <mapping directory="$PROJECT_DIR$" vcs="Git" /> </component> </project>
<?xml version="1.0" encoding="UTF-8"?> <project version="4"> <component name="ProjectRootManager" version="2" languageLevel="JDK_1_8" default="true" project-jdk-name="1.8" project-jdk-type="JavaSDK"> <output url="file://$PROJECT_DIR$/build/classes" /> </component> <component name="ProjectType"> <option name="id" value="Android" /> </component> </project>
<?xml version="1.0" encoding="UTF-8"?> <project version="4"> <component name="GradleMigrationSettings" migrationVersion="1" /> <component name="GradleSettings"> <option name="linkedExternalProjectsSettings"> <GradleProjectSettings> <option name="testRunner" value="PLATFORM" /> <option name="distributionType" value="DEFAULT_WRAPPED" /> <option name="externalProjectPath" value="$PROJECT_DIR$" /> <option name="gradleJvm" value="1.8" /> <option name="modules"> <set> <option value="$PROJECT_DIR$" /> <option value="$PROJECT_DIR$/RandomKolor" /> <option value="$PROJECT_DIR$/app" /> </set> </option> <option name="resolveModulePerSourceSet" value="false" /> <option name="useQualifiedModuleNames" value="true" /> </GradleProjectSettings> </option> </component> </project>
<?xml version="1.0" encoding="UTF-8"?> <project version="4"> <component name="RemoteRepositoriesConfiguration"> <remote-repository> <option name="id" value="central" /> <option name="name" value="Maven Central repository" /> <option name="url" value="https://repo1.maven.org/maven2" /> </remote-repository> <remote-repository> <option name="id" value="jboss.community" /> <option name="name" value="JBoss Community repository" /> <option name="url" value="https://repository.jboss.org/nexus/content/repositories/public/" /> </remote-repository> <remote-repository> <option name="id" value="BintrayJCenter" /> <option name="name" value="BintrayJCenter" /> <option name="url" value="https://jcenter.bintray.com/" /> </remote-repository> <remote-repository> <option name="id" value="Google" /> <option name="name" value="Google" /> <option name="url" value="https://dl.google.com/dl/android/maven2/" /> </remote-repository> </component> </project>
<?xml version="1.0" encoding="UTF-8"?> <project version="4"> <component name="CompilerConfiguration"> <bytecodeTargetLevel target="1.8" /> </component> </project>
<component name="ProjectCodeStyleConfiguration"> <state> <option name="USE_PER_PROJECT_SETTINGS" value="true" /> </state> </component>
<component name="ProjectCodeStyleConfiguration"> <code_scheme name="Project" version="173"> <JetCodeStyleSettings> <option name="PACKAGES_TO_USE_STAR_IMPORTS"> <value> <package name="java.util" alias="false" withSubpackages="false" /> <package name="kotlinx.android.synthetic" alias="false" withSubpackages="true" /> <package name="io.ktor" alias="false" withSubpackages="true" /> </value> </option> <option name="PACKAGES_IMPORT_LAYOUT"> <value> <package name="" alias="false" withSubpackages="true" /> <package name="java" alias="false" withSubpackages="true" /> <package name="javax" alias="false" withSubpackages="true" /> <package name="kotlin" alias="false" withSubpackages="true" /> <package name="" alias="true" withSubpackages="true" /> </value> </option> <option name="CODE_STYLE_DEFAULTS" value="KOTLIN_OFFICIAL" /> </JetCodeStyleSettings> <codeStyleSettings language="XML"> <option name="FORCE_REARRANGE_MODE" value="1" /> <indentOptions> <option name="CONTINUATION_INDENT_SIZE" value="4" /> </indentOptions> <arrangement> <rules> <section> <rule> <match> <AND> <NAME>xmlns:android</NAME> <XML_ATTRIBUTE /> <XML_NAMESPACE>^$</XML_NAMESPACE> </AND> </match> </rule> </section> <section> <rule> <match> <AND> <NAME>xmlns:.*</NAME> <XML_ATTRIBUTE /> <XML_NAMESPACE>^$</XML_NAMESPACE> </AND> </match> <order>BY_NAME</order> </rule> </section> <section> <rule> <match> <AND> <NAME>.*:id</NAME> <XML_ATTRIBUTE /> <XML_NAMESPACE>http://schemas.android.com/apk/res/android</XML_NAMESPACE> </AND> </match> </rule> </section> <section> <rule> <match> <AND> <NAME>.*:name</NAME> <XML_ATTRIBUTE /> <XML_NAMESPACE>http://schemas.android.com/apk/res/android</XML_NAMESPACE> </AND> </match> </rule> </section> <section> <rule> <match> <AND> <NAME>name</NAME> <XML_ATTRIBUTE /> <XML_NAMESPACE>^$</XML_NAMESPACE> </AND> </match> </rule> </section> <section> <rule> <match> <AND> <NAME>style</NAME> <XML_ATTRIBUTE /> <XML_NAMESPACE>^$</XML_NAMESPACE> </AND> </match> </rule> </section> <section> <rule> 
<match> <AND> <NAME>.*</NAME> <XML_ATTRIBUTE /> <XML_NAMESPACE>^$</XML_NAMESPACE> </AND> </match> <order>BY_NAME</order> </rule> </section> <section> <rule> <match> <AND> <NAME>.*</NAME> <XML_ATTRIBUTE /> <XML_NAMESPACE>http://schemas.android.com/apk/res/android</XML_NAMESPACE> </AND> </match> <order>ANDROID_ATTRIBUTE_ORDER</order> </rule> </section> <section> <rule> <match> <AND> <NAME>.*</NAME> <XML_ATTRIBUTE /> <XML_NAMESPACE>.*</XML_NAMESPACE> </AND> </match> <order>BY_NAME</order> </rule> </section> </rules> </arrangement> </codeStyleSettings> <codeStyleSettings language="kotlin"> <option name="CODE_STYLE_DEFAULTS" value="KOTLIN_OFFICIAL" /> </codeStyleSettings> </code_scheme> </component>
<component name="ProjectDictionaryState"> <dictionary name="brian"> <words> <w>bnor</w> <w>kolor</w> <w>randomkolor</w> </words> </dictionary> </component>
# Add project specific ProGuard rules here. # You can control the set of applied configuration files using the # proguardFiles setting in build.gradle. # # For more details, see # http://developer.android.com/guide/developing/tools/proguard.html # If your project uses WebView with JS, uncomment the following # and specify the fully qualified class name to the JavaScript interface # class: #-keepclassmembers class fqcn.of.javascript.interface.for.webview { # public *; #} # Uncomment this to preserve the line number information for # debugging stack traces. #-keepattributes SourceFile,LineNumberTable # If you keep the line number information, uncomment this to # hide the original source file name. #-renamesourcefileattribute SourceFile
package com.example.randomkolor import androidx.test.platform.app.InstrumentationRegistry import androidx.test.ext.junit.runners.AndroidJUnit4 import org.junit.Test import org.junit.runner.RunWith import org.junit.Assert.* /** * Instrumented test, which will execute on an Android device. * * See [testing documentation](http://d.android.com/tools/testing). */ @RunWith(AndroidJUnit4::class) class ExampleInstrumentedTest { @Test fun useAppContext() { // Context of the app under test. val appContext = InstrumentationRegistry.getInstrumentation().targetContext assertEquals("com.example.randomkolor", appContext.packageName) } }
package com.example.randomkolor import org.junit.Test import org.junit.Assert.* /** * Example local unit test, which will execute on the development machine (host). * * See [testing documentation](http://d.android.com/tools/testing). */ class ExampleUnitTest { @Test fun addition_isCorrect() { assertEquals(4, 2 + 2) } }
<?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.randomkolor"> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/Theme.RandomKolor"> <activity android:name=".MainActivity"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest>
<?xml version="1.0" encoding="utf-8"?> <resources> <color name="purple_200">#FFBB86FC</color> <color name="purple_500">#FF6200EE</color> <color name="purple_700">#FF3700B3</color> <color name="teal_200">#FF03DAC5</color> <color name="teal_700">#FF018786</color> <color name="black">#FF000000</color> <color name="white">#FFFFFFFF</color> </resources>
<resources xmlns:tools="http://schemas.android.com/tools"> <!-- Base application theme. --> <style name="Theme.RandomKolor" parent="Theme.MaterialComponents.DayNight.DarkActionBar"> <!-- Primary brand color. --> <item name="colorPrimary">@color/purple_500</item> <item name="colorPrimaryVariant">@color/purple_700</item> <item name="colorOnPrimary">@color/white</item> <!-- Secondary brand color. --> <item name="colorSecondary">@color/teal_200</item> <item name="colorSecondaryVariant">@color/teal_700</item> <item name="colorOnSecondary">@color/black</item> <!-- Status bar color. --> <item name="android:statusBarColor" tools:targetApi="l">?attr/colorPrimaryVariant</item> <!-- Customize your theme here. --> </style> </resources>
<resources> <string name="app_name">Random Kolor</string> <string name="randomkolor">RandomKolor</string> <string name="generate_random_color">Generate Random Color (RGB)</string> <string name="one_randomly_colored_circle">One Randomly Colored Circle</string> <string name="generate_random_light_blue">Generate Random Light Blue (HEX)</string> </resources>
<resources xmlns:tools="http://schemas.android.com/tools"> <!-- Base application theme. --> <style name="Theme.RandomKolor" parent="Theme.MaterialComponents.DayNight.DarkActionBar"> <!-- Primary brand color. --> <item name="colorPrimary">@color/purple_200</item> <item name="colorPrimaryVariant">@color/purple_700</item> <item name="colorOnPrimary">@color/black</item> <!-- Secondary brand color. --> <item name="colorSecondary">@color/teal_200</item> <item name="colorSecondaryVariant">@color/teal_200</item> <item name="colorOnSecondary">@color/black</item> <!-- Status bar color. --> <item name="android:statusBarColor" tools:targetApi="l">?attr/colorPrimaryVariant</item> <!-- Customize your theme here. --> </style> </resources>
<?xml version="1.0" encoding="utf-8"?> <adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android"> <background android:drawable="@drawable/ic_launcher_background" /> <foreground android:drawable="@drawable/ic_launcher_foreground" /> </adaptive-icon>
<?xml version="1.0" encoding="utf-8"?> <adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android"> <background android:drawable="@drawable/ic_launcher_background" /> <foreground android:drawable="@drawable/ic_launcher_foreground" /> </adaptive-icon>
<?xml version="1.0" encoding="utf-8"?> <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/container" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity" />
<?xml version="1.0" encoding="utf-8"?> <androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/main" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".ui.main.MainFragment"> <Button android:id="@+id/one_random_color_btn" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginTop="8dp" android:text="@string/generate_random_color" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" /> <ImageView android:id="@+id/one_random_color_circle" android:layout_width="100dp" android:layout_height="100dp" android:layout_marginTop="8dp" android:contentDescription="@string/one_randomly_colored_circle" android:src="@drawable/circle" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toBottomOf="@id/one_random_color_btn" /> <Button android:id="@+id/one_random_blue_btn" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginTop="8dp" android:text="@string/generate_random_light_blue" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toBottomOf="@id/one_random_color_circle" /> <ImageView android:id="@+id/one_random_blue_circle" android:layout_width="100dp" android:layout_height="100dp" android:layout_marginTop="8dp" android:contentDescription="@string/one_randomly_colored_circle" android:src="@drawable/circle" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toBottomOf="@id/one_random_blue_btn" /> <Button android:id="@+id/three_random_orange_button" android:layout_width="wrap_content" android:layout_height="wrap_content" 
android:layout_marginTop="8dp" android:text="Generate 3 Random Oranges (HSL)" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toBottomOf="@id/one_random_blue_circle" /> <androidx.constraintlayout.widget.ConstraintLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="8dp" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toBottomOf="@+id/three_random_orange_button"> <ImageView android:id="@+id/first_orange" android:layout_width="100dp" android:layout_height="100dp" android:src="@drawable/circle" app:layout_constraintEnd_toStartOf="@+id/second_orange" app:layout_constraintHorizontal_bias="0.5" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" /> <ImageView android:id="@+id/second_orange" android:layout_width="100dp" android:layout_height="100dp" android:src="@drawable/circle" app:layout_constraintEnd_toStartOf="@+id/third_orange" app:layout_constraintHorizontal_bias="0.5" app:layout_constraintStart_toEndOf="@+id/first_orange" app:layout_constraintTop_toTopOf="parent" /> <ImageView android:id="@+id/third_orange" android:layout_width="100dp" android:layout_height="100dp" android:src="@drawable/circle" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintHorizontal_bias="0.5" app:layout_constraintStart_toEndOf="@+id/second_orange" app:layout_constraintTop_toTopOf="parent" /> </androidx.constraintlayout.widget.ConstraintLayout> </androidx.constraintlayout.widget.ConstraintLayout>
<?xml version="1.0" encoding="utf-8"?> <vector xmlns:android="http://schemas.android.com/apk/res/android" android:width="108dp" android:height="108dp" android:viewportWidth="108" android:viewportHeight="108"> <path android:fillColor="#3DDC84" android:pathData="M0,0h108v108h-108z" /> <path android:fillColor="#00000000" android:pathData="M9,0L9,108" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M19,0L19,108" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M29,0L29,108" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M39,0L39,108" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M49,0L49,108" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M59,0L59,108" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M69,0L69,108" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M79,0L79,108" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M89,0L89,108" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M99,0L99,108" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M0,9L108,9" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M0,19L108,19" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M0,29L108,29" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" 
android:pathData="M0,39L108,39" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M0,49L108,49" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M0,59L108,59" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M0,69L108,69" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M0,79L108,79" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M0,89L108,89" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M0,99L108,99" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M19,29L89,29" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M19,39L89,39" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M19,49L89,49" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M19,59L89,59" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M19,69L89,69" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M19,79L89,79" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M29,19L29,89" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M39,19L39,89" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M49,19L49,89" android:strokeWidth="0.8" 
android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M59,19L59,89" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M69,19L69,89" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> <path android:fillColor="#00000000" android:pathData="M79,19L79,89" android:strokeWidth="0.8" android:strokeColor="#33FFFFFF" /> </vector>
<?xml version="1.0" encoding="utf-8"?> <shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="oval"> <solid android:color="#ffffff"/> </shape>
<vector xmlns:android="http://schemas.android.com/apk/res/android" xmlns:aapt="http://schemas.android.com/aapt" android:width="108dp" android:height="108dp" android:viewportWidth="108" android:viewportHeight="108"> <path android:pathData="M31,63.928c0,0 6.4,-11 12.1,-13.1c7.2,-2.6 26,-1.4 26,-1.4l38.1,38.1L107,108.928l-32,-1L31,63.928z"> <aapt:attr name="android:fillColor"> <gradient android:endX="85.84757" android:endY="92.4963" android:startX="42.9492" android:startY="49.59793" android:type="linear"> <item android:color="#44000000" android:offset="0.0" /> <item android:color="#00000000" android:offset="1.0" /> </gradient> </aapt:attr> </path> <path android:fillColor="#FFFFFF" android:fillType="nonZero" android:pathData="M65.3,45.828l3.8,-6.6c0.2,-0.4 0.1,-0.9 -0.3,-1.1c-0.4,-0.2 -0.9,-0.1 -1.1,0.3l-3.9,6.7c-6.3,-2.8 -13.4,-2.8 -19.7,0l-3.9,-6.7c-0.2,-0.4 -0.7,-0.5 -1.1,-0.3C38.8,38.328 38.7,38.828 38.9,39.228l3.8,6.6C36.2,49.428 31.7,56.028 31,63.928h46C76.3,56.028 71.8,49.428 65.3,45.828zM43.4,57.328c-0.8,0 -1.5,-0.5 -1.8,-1.2c-0.3,-0.7 -0.1,-1.5 0.4,-2.1c0.5,-0.5 1.4,-0.7 2.1,-0.4c0.7,0.3 1.2,1 1.2,1.8C45.3,56.528 44.5,57.328 43.4,57.328L43.4,57.328zM64.6,57.328c-0.8,0 -1.5,-0.5 -1.8,-1.2s-0.1,-1.5 0.4,-2.1c0.5,-0.5 1.4,-0.7 2.1,-0.4c0.7,0.3 1.2,1 1.2,1.8C66.5,56.528 65.6,57.328 64.6,57.328L64.6,57.328z" android:strokeWidth="1" android:strokeColor="#00000000" /> </vector>
package com.example.randomkolor import androidx.appcompat.app.AppCompatActivity import android.os.Bundle import com.example.randomkolor.ui.main.MainFragment class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.main_activity) if (savedInstanceState == null) { supportFragmentManager.beginTransaction() .replace(R.id.container, MainFragment.newInstance()) .commitNow() } } }
package com.example.randomkolor.ui.main

import android.graphics.Color
import android.os.Bundle
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import androidx.core.graphics.ColorUtils
import androidx.fragment.app.Fragment
import com.example.randomkolor.databinding.MainFragmentBinding
import com.example.randomkolor.src.Color.BLUE
import com.example.randomkolor.src.Color.ORANGE
import com.example.randomkolor.src.ColorHue
import com.example.randomkolor.src.Format
import com.example.randomkolor.src.Luminosity
import com.example.randomkolor.src.RandomKolor

class MainFragment : Fragment() {

    companion object {
        fun newInstance() = MainFragment()
    }

    private var _binding: MainFragmentBinding? = null

    // This property is only valid between onCreateView and onDestroyView.
    private val binding get() = _binding!!

    override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?,
                              savedInstanceState: Bundle?): View {
        _binding = MainFragmentBinding.inflate(inflater, container, false)
        return binding.root
    }

    override fun onActivityCreated(savedInstanceState: Bundle?) {
        super.onActivityCreated(savedInstanceState)

        _binding?.oneRandomColorBtn?.setOnClickListener {
            var rgbString = RandomKolor().randomColor()
            rgbString = rgbString.removePrefix("(")
            rgbString = rgbString.removeSuffix(")")
            rgbString = rgbString.replace("\\s".toRegex(), "")
            val rgbStringSplit = rgbString.split(',')
            val color = Color.rgb(rgbStringSplit[0].toInt(), rgbStringSplit[1].toInt(), rgbStringSplit[2].toInt())
            _binding?.oneRandomColorCircle?.setColorFilter(color)
        }

        _binding?.oneRandomBlueBtn?.setOnClickListener {
            val hexString = RandomKolor().randomColor(hue = ColorHue(BLUE), luminosity = Luminosity.LIGHT, format = Format.HEX)
            val color = Color.parseColor(hexString)
            _binding?.oneRandomBlueCircle?.setColorFilter(color)
        }

        _binding?.threeRandomOrangeButton?.setOnClickListener {
            val hslStrings = RandomKolor().randomColors(count = 3, hue = ColorHue(ORANGE), luminosity = Luminosity.LIGHT, format = Format.HSL)
            for (i in 0..2) {
                var hslString = hslStrings[i]
                hslString = hslString.removePrefix("(")
                hslString = hslString.removeSuffix(")")
                hslString = hslString.replace("\\s".toRegex(), "")
                val hslStringSplit = hslString.split(',')
                val hsl: FloatArray = floatArrayOf(hslStringSplit[0].toFloat(), hslStringSplit[1].toFloat(), hslStringSplit[2].toFloat())
                val color = ColorUtils.HSLToColor(hsl)
                if (i == 0) {
                    _binding?.firstOrange?.setColorFilter(color)
                }
                if (i == 1) {
                    _binding?.secondOrange?.setColorFilter(color)
                }
                if (i == 2) {
                    _binding?.thirdOrange?.setColorFilter(color)
                }
            }
        }
    }

    override fun onDestroyView() {
        super.onDestroyView()
        _binding = null
    }
}
# Add project specific ProGuard rules here.
# You can control the set of applied configuration files using the
# proguardFiles setting in build.gradle.
#
# For more details, see
#   http://developer.android.com/guide/developing/tools/proguard.html

# If your project uses WebView with JS, uncomment the following
# and specify the fully qualified class name to the JavaScript interface
# class:
#-keepclassmembers class fqcn.of.javascript.interface.for.webview {
#   public *;
#}

# Uncomment this to preserve the line number information for
# debugging stack traces.
#-keepattributes SourceFile,LineNumberTable

# If you keep the line number information, uncomment this to
# hide the original source file name.
#-renamesourcefileattribute SourceFile
package com.example.randomkolor

import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.ext.junit.runners.AndroidJUnit4

import org.junit.Test
import org.junit.runner.RunWith

import org.junit.Assert.*

/**
 * Instrumented test, which will execute on an Android device.
 *
 * See [testing documentation](http://d.android.com/tools/testing).
 */
@RunWith(AndroidJUnit4::class)
class ExampleInstrumentedTest {
    @Test
    fun useAppContext() {
        // Context of the app under test.
        val appContext = InstrumentationRegistry.getInstrumentation().targetContext
        assertEquals("com.example.randomkolor.test", appContext.packageName)
    }
}
package com.example.randomkolor

import org.junit.Test

import org.junit.Assert.*

/**
 * Example local unit test, which will execute on the development machine (host).
 *
 * See [testing documentation](http://d.android.com/tools/testing).
 */
class ExampleUnitTest {
    @Test
    fun addition_isCorrect() {
        assertEquals(4, 2 + 2)
    }
}
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.randomkolor">
</manifest>
package com.example.randomkolor.src

import androidx.core.graphics.ColorUtils
import kotlin.math.floor
import kotlin.random.Random

class RandomKolor {

    /**
     * Generate a single random color with specified (or random) hue and luminosity.
     */
    fun randomColor(
        hue: Hue = RandomHue,
        luminosity: Luminosity = Luminosity.RANDOM,
        format: Format = Format.RGB
    ): String {
        // First we pick a hue (H)
        val hueValue = pickHue(hue)

        // Then use H to determine saturation (S)
        val saturation = pickSaturation(hueValue, hue, luminosity)

        // Then use S and H to determine brightness (B)
        val brightness = pickBrightness(hueValue, hue, saturation, luminosity)

        // Then we return the HSB color in the desired format
        return setFormat(hueValue, saturation, brightness, format)
    }

    /**
     * Generate a list of colors given the desired count.
     */
    fun randomColors(
        count: Int,
        hue: Hue = RandomHue,
        luminosity: Luminosity = Luminosity.RANDOM,
        format: Format = Format.RGB
    ): List<String> {
        val colors = mutableListOf<String>()
        for (i in 1..count) {
            colors.add(randomColor(hue, luminosity, format))
        }
        return colors
    }

    private fun pickHue(hue: Hue): Int {
        val hueRange = hue.getHueRange()
        var hueValue = randomWithin(hueRange)

        // Instead of storing red as two separate ranges,
        // we group them, using negative numbers
        if (hueValue < 0) {
            hueValue += 360
        }
        return hueValue
    }

    private fun pickSaturation(hueValue: Int, hue: Hue, luminosity: Luminosity): Int {
        if (hue == ColorHue(Color.MONOCHROME)) {
            return 0
        }

        val color: Color = matchColor(hueValue, hue)
        val sMin = color.saturationRange().first
        val sMax = color.saturationRange().second

        return when (luminosity) {
            Luminosity.RANDOM -> randomWithin(Pair(0, 100))
            Luminosity.BRIGHT -> randomWithin(Pair(55, sMax))
            Luminosity.LIGHT -> randomWithin(Pair(sMin, 55))
            Luminosity.DARK -> randomWithin(Pair(sMax - 10, sMax))
        }
    }

    private fun pickBrightness(hueValue: Int, hue: Hue, saturation: Int, luminosity: Luminosity): Int {
        val color: Color = matchColor(hueValue, hue)
        val bMin = color.brightnessRange(saturation).first
        val bMax = color.brightnessRange(saturation).second

        return when (luminosity) {
            Luminosity.RANDOM -> randomWithin(Pair(50, 100)) // I set this to 50 arbitrarily, they look more attractive
            Luminosity.BRIGHT -> randomWithin(Pair(bMin, bMax))
            Luminosity.LIGHT -> randomWithin(Pair((bMax + bMin) / 2, bMax))
            Luminosity.DARK -> randomWithin(Pair(bMin, bMin + 20))
        }
    }

    private fun setFormat(hueValue: Int, saturation: Int, brightness: Int, format: Format): String {
        return when (format) {
            Format.HSL -> HSVtoHSL(hueValue, saturation, brightness).toString()
            Format.RGB -> HSVtoRGB(hueValue, saturation, brightness).toString()
            Format.HEX -> HSVtoHEX(hueValue, saturation, brightness)
        }
    }

    private fun HSVtoRGB(hueValue: Int, saturation: Int, brightness: Int): Triple<Int, Int, Int> {
        // This doesn't work for the values of 0 and 360,
        // so as a hacky fix we clamp the hue to 1..359 before rebasing
        // the h, s, v values into the unit interval.
        val h: Float = hueValue.coerceIn(1, 359) / 360f
        val s = saturation / 100f
        val v = brightness / 100f

        val hI = floor(h * 6f).toInt()
        val f = h * 6f - hI
        val p = v * (1f - s)
        val q = v * (1f - f * s)
        val t = v * (1f - (1f - f) * s)

        var r = 256f
        var g = 256f
        var b = 256f
        when (hI) {
            0 -> { r = v; g = t; b = p }
            1 -> { r = q; g = v; b = p }
            2 -> { r = p; g = v; b = t }
            3 -> { r = p; g = q; b = v }
            4 -> { r = t; g = p; b = v }
            5 -> { r = v; g = p; b = q }
        }

        return Triple(floor(r * 255f).toInt(), floor(g * 255f).toInt(), floor(b * 255f).toInt())
    }

    private fun HSVtoHSL(hueValue: Int, saturation: Int, brightness: Int): Triple<Float, Float, Float> {
        val rgb: Triple<Int, Int, Int> = HSVtoRGB(hueValue, saturation, brightness)
        val r = rgb.first
        val g = rgb.second
        val b = rgb.third

        val hsl = FloatArray(3)
        ColorUtils.RGBToHSL(r, g, b, hsl)
        return Triple(hsl[0], hsl[1], hsl[2])
    }

    private fun HSVtoHEX(hueValue: Int, saturation: Int, brightness: Int): String {
        val rgb: Triple<Int, Int, Int> = HSVtoRGB(hueValue, saturation, brightness)
        val r = rgb.first
        val g = rgb.second
        val b = rgb.third

        // Convert r, g and b values to 2-digit hex strings and concat them to a color code
        // Format: #RRGGBB
        return String.format("#%02X%02X%02X", r, g, b)
    }

    /**
     * Turns hue into a color if it isn't already one.
     * First we check if hue was passed in as a color, and just return that if it is.
     * If not, we iterate through every color to see which one the given hueValue fits in.
     * If a matching hue is not found, we just return Monochrome.
     */
    private fun matchColor(hueValue: Int, hue: Hue): Color {
        return when (hue) {
            is ColorHue -> hue.color
            else -> {
                // Maps red colors to make picking hue easier
                var hueVal = hueValue
                if (hueVal in 334..360) {
                    hueVal -= 360
                }
                for (color in Color.values()) {
                    if (hueVal in color.hueRange.first..color.hueRange.second) {
                        return color
                    }
                }
                // Returning Monochrome if we can't find a value, but this should never happen
                return Color.MONOCHROME
            }
        }
    }

    private fun randomWithin(range: Pair<Int, Int>): Int {
        // Generate a random, evenly distributed number; technique from:
        // https://martin.ankerl.com/2009/12/09/how-to-create-random-colors-programmatically/
        val goldenRatio = 0.618033988749895
        var r = Random.nextDouble()
        r += goldenRatio
        r %= 1
        return floor(range.first + r * (range.second + 1 - range.first)).toInt()
    }
}
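The golden-ratio offset inside `randomWithin` above is the heart of the palette generation: draw a uniform value, shift it by the golden-ratio conjugate modulo 1, and scale it into the inclusive range. A hypothetical C++ sketch of the same steps (the name `random_within` and the `std::mt19937` parameter are mine, not part of the repo):

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Hypothetical C++ sketch of RandomKolor's randomWithin(), following the
// golden-ratio trick cited in the Kotlin source. Illustrative only.
int random_within(int lo, int hi, std::mt19937 &rng) {
    const double golden_ratio = 0.618033988749895;
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    // Offset the draw by the golden-ratio conjugate, wrapping back into [0, 1).
    double r = std::fmod(unit(rng) + golden_ratio, 1.0);
    // Scale into [lo, hi]; the "+ 1" makes the upper bound reachable.
    return static_cast<int>(std::floor(lo + r * (hi + 1 - lo)));
}
```

Because `r` stays in `[0, 1)`, every result lands in the inclusive range `[lo, hi]`.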
package com.example.randomkolor.src

enum class Luminosity {
    RANDOM,
    BRIGHT,
    LIGHT,
    DARK
}
package com.example.randomkolor.src

enum class Format {
    HSL,
    RGB,
    HEX
}
package com.example.randomkolor.src

sealed class Hue
object RandomHue : Hue()
data class NumberHue(val value: Int) : Hue()
data class ColorHue(val color: Color) : Hue()

fun Hue.getHueRange(): Pair<Int, Int> {
    return when (this) {
        is ColorHue -> color.hueRange
        is NumberHue -> if (value in 1..359) Pair(value, value) else Pair(0, 360)
        RandomHue -> Pair(0, 360)
    }
}
package com.example.randomkolor.src

import kotlin.math.floor

enum class Color(val hueRange: Pair<Int, Int>, val lowerBounds: List<Pair<Int, Int>>) {
    MONOCHROME(
        Pair(-1, -1),
        listOf(Pair(0, 0), Pair(100, 0))
    ),
    RED(
        Pair(-26, 18),
        listOf(Pair(20, 100), Pair(30, 92), Pair(40, 89), Pair(50, 85), Pair(60, 78), Pair(70, 70), Pair(80, 60), Pair(90, 55), Pair(100, 50))
    ),
    ORANGE(
        Pair(18, 46),
        listOf(Pair(20, 100), Pair(30, 93), Pair(40, 88), Pair(50, 86), Pair(60, 85), Pair(70, 70), Pair(100, 70))
    ),
    YELLOW(
        Pair(46, 62),
        listOf(Pair(25, 100), Pair(40, 94), Pair(50, 89), Pair(60, 86), Pair(70, 84), Pair(80, 82), Pair(90, 80), Pair(100, 75))
    ),
    GREEN(
        Pair(62, 178),
        listOf(Pair(30, 100), Pair(40, 90), Pair(50, 85), Pair(60, 81), Pair(70, 74), Pair(80, 64), Pair(90, 50), Pair(100, 40))
    ),
    BLUE(
        Pair(178, 257),
        listOf(Pair(20, 100), Pair(30, 86), Pair(40, 80), Pair(50, 74), Pair(60, 60), Pair(70, 52), Pair(80, 44), Pair(90, 39), Pair(100, 35))
    ),
    PURPLE(
        Pair(257, 282),
        listOf(Pair(20, 100), Pair(30, 87), Pair(40, 79), Pair(50, 70), Pair(60, 65), Pair(70, 59), Pair(80, 52), Pair(90, 45), Pair(100, 42))
    ),
    PINK(
        Pair(282, 334),
        listOf(Pair(20, 100), Pair(30, 90), Pair(40, 86), Pair(60, 84), Pair(80, 80), Pair(90, 75), Pair(100, 73))
    )
}

fun Color.saturationRange(): Pair<Int, Int> {
    return Pair(lowerBounds.first().first, lowerBounds.last().first)
}

fun Color.brightnessRange(saturation: Int): Pair<Int, Int> {
    for (i in 0 until lowerBounds.size - 1) {
        val s1 = lowerBounds[i].first.toFloat()
        val v1 = lowerBounds[i].second.toFloat()
        val s2 = lowerBounds[i + 1].first.toFloat()
        val v2 = lowerBounds[i + 1].second.toFloat()

        if (saturation.toFloat() in s1..s2) {
            val m = (v2 - v1) / (s2 - s1)
            val b = v1 - m * s1
            val minBrightness = m * saturation + b
            return Pair(floor(minBrightness).toInt(), 100)
        }
    }
    return Pair(0, 100)
}
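`brightnessRange` above performs a piecewise-linear interpolation over `lowerBounds`: it finds the two (saturation, minimum-brightness) points that bracket the requested saturation and evaluates the line between them. A hypothetical C++ sketch of the same computation (the name `min_brightness` is mine, not the repo's API):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch of the minimum-brightness interpolation used by brightnessRange():
// find the lowerBounds segment that brackets the saturation and evaluate the
// line through its two endpoints. (Hypothetical port, illustrative only.)
int min_brightness(const std::vector<std::pair<int, int>> &lower_bounds,
                   int saturation) {
    for (std::size_t i = 0; i + 1 < lower_bounds.size(); ++i) {
        double s1 = lower_bounds[i].first,     v1 = lower_bounds[i].second;
        double s2 = lower_bounds[i + 1].first, v2 = lower_bounds[i + 1].second;
        if (saturation >= s1 && saturation <= s2) {
            double m = (v2 - v1) / (s2 - s1); // slope between the two points
            double b = v1 - m * s1;           // intercept
            return static_cast<int>(std::floor(m * saturation + b));
        }
    }
    return 0; // outside every segment: fall back to the full 0..100 range
}
```

For RED's first two entries, (20, 100) and (30, 92), a saturation of 25 interpolates to a minimum brightness of 96.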
#Wed Feb 24 22:03:10 MST 2021
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-6.5-all.zip
********************************
lmdb++: a C++11 wrapper for LMDB
********************************

.. image:: https://api.travis-ci.org/bendiken/lmdbxx.svg?branch=master
   :target: https://travis-ci.org/bendiken/lmdbxx
   :alt: Travis CI build status

.. image:: https://scan.coverity.com/projects/4900/badge.svg
   :target: https://scan.coverity.com/projects/4900
   :alt: Coverity Scan build status

This is a comprehensive C++ wrapper for the LMDB_ embedded database library,
offering both an error-checked procedural interface and an object-oriented
resource interface with RAII_ semantics.

.. _LMDB: http://symas.com/mdb/
.. _RAII: http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization

Example
=======

Here follows a simple motivating example_ demonstrating basic use of the
object-oriented resource interface::

   #include <cstdio>
   #include <cstdlib>
   #include <lmdb++.h>

   int main() {
     /* Create and open the LMDB environment: */
     auto env = lmdb::env::create();
     env.set_mapsize(1UL * 1024UL * 1024UL * 1024UL); /* 1 GiB */
     env.open("./example.mdb", 0, 0664);

     /* Insert some key/value pairs in a write transaction: */
     auto wtxn = lmdb::txn::begin(env);
     auto dbi = lmdb::dbi::open(wtxn, nullptr);
     dbi.put(wtxn, "username", "jhacker");
     dbi.put(wtxn, "email", "jhacker@example.org");
     dbi.put(wtxn, "fullname", "J. Random Hacker");
     wtxn.commit();

     /* Fetch key/value pairs in a read-only transaction: */
     auto rtxn = lmdb::txn::begin(env, nullptr, MDB_RDONLY);
     auto cursor = lmdb::cursor::open(rtxn, dbi);
     std::string key, value;
     while (cursor.get(key, value, MDB_NEXT)) {
       std::printf("key: '%s', value: '%s'\n", key.c_str(), value.c_str());
     }
     cursor.close();
     rtxn.abort();

     /* The environment is closed automatically. */
     return EXIT_SUCCESS;
   }

Should any operation in the above fail, an ``lmdb::error`` exception will be
thrown and terminate the program since we don't specify an exception handler.
All resources will regardless get automatically cleaned up due to RAII
semantics.

.. note::

   In order to run this example, you must first manually create the
   ``./example.mdb`` directory. This is a basic characteristic of LMDB: the
   given environment path must already exist, as LMDB will not attempt to
   automatically create it.

.. _example: https://github.com/bendiken/lmdbxx/blob/master/example.cc#L1

Features
========

* Designed to be entirely self-contained as a single ``<lmdb++.h>`` header
  file that can be dropped into a project.
* Implements a straightforward mapping from C to C++, with consistent naming.
* Provides both a procedural interface and an object-oriented RAII interface.
* Simplifies error handling by translating error codes into C++ exceptions.
* Carefully differentiates logic errors, runtime errors, and fatal errors.
* Exception strings include the name of the LMDB function that failed.
* Plays nice with others: all symbols are placed into the ``lmdb`` namespace.
* 100% free and unencumbered `public domain <http://unlicense.org/>`_
  software, usable in any context and for any purpose.

Requirements
============

The ``<lmdb++.h>`` header file requires a C++11 compiler and standard
library. Recent releases of Clang_ or GCC_ will work fine.

In addition, for your application to build and run, the underlying
``<lmdb.h>`` header file shipped with LMDB must be available in the
preprocessor's include path, and you must link with the ``liblmdb`` native
library. On Ubuntu Linux 14.04 and newer, these prerequisites can be
satisfied by installing the ``liblmdb-dev`` package.

.. _Clang: http://clang.llvm.org/
.. _GCC: http://gcc.gnu.org/

Overview
========

This wrapper offers both an error-checked procedural interface and an
object-oriented resource interface with RAII semantics. The former will be
useful for easily retrofitting existing projects that currently use the raw
C interface, but we recommend the latter for all new projects due to the
exception safety afforded by RAII semantics.
Resource Interface
------------------

The high-level resource interface wraps LMDB handles in a loving RAII
embrace. This way, you can ensure e.g. that a transaction will get
automatically aborted when exiting a lexical scope, regardless of whether
the escape happened normally or by throwing an exception.

============================ ===================================================
C handle                     C++ wrapper class
============================ ===================================================
``MDB_env*``                 ``lmdb::env``
``MDB_txn*``                 ``lmdb::txn``
``MDB_dbi``                  ``lmdb::dbi``
``MDB_cursor*``              ``lmdb::cursor``
``MDB_val``                  ``lmdb::val``
============================ ===================================================

The methods available on these C++ classes are named consistently with the
procedural interface, below, with the obvious difference of omitting the
handle type prefix which is already implied by the class in question.

Procedural Interface
--------------------

The low-level procedural interface wraps LMDB functions with error-checking
code that will throw an instance of a corresponding C++ exception class in
case of failure. This interface doesn't offer any convenience overloads as
does the resource interface; the parameter types are exactly the same as for
the raw C interface offered by LMDB itself.

The return type is generally ``void`` for these functions since the wrapper
eats the error code returned by the underlying C function, throwing an
exception in case of failure and otherwise returning values in the same
output parameters as the C interface.

This interface is implemented entirely using static inline functions, so
there are no hidden extra costs to using these wrapper functions so long as
you have a decent compiler capable of basic inlining optimization.
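The auto-abort behaviour described for the resource interface can be illustrated with a self-contained sketch. This is not lmdb++ code; ``FakeTxn``, ``do_work``, and the counter are invented stand-ins for what ``lmdb::txn`` does with ``mdb_txn_abort()``:

```cpp
#include <cassert>

// Counts automatic aborts, for demonstration only.
static int g_aborted = 0;

// A minimal, hypothetical transaction-like handle: the destructor "aborts"
// unless commit() was called, so early returns and exceptions cannot leak
// an open handle. A real wrapper would call mdb_txn_abort() here instead.
struct FakeTxn {
    bool finished = false;
    void commit() { finished = true; }
    ~FakeTxn() {
        if (!finished) {
            ++g_aborted; // automatic cleanup on scope exit
        }
    }
};

// Returns early without committing when `fail` is set; the destructor
// still runs and performs the abort.
bool do_work(bool fail) {
    FakeTxn txn;
    if (fail) {
        return false; // txn aborted automatically
    }
    txn.commit();
    return true;
}
```

`do_work(true)` leaves the scope without committing, yet the destructor still runs and performs the abort — the same guarantee ``lmdb::txn`` provides.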
============================ ===================================================
C function                   C++ wrapper function
============================ ===================================================
``mdb_version()``            N/A
``mdb_strerror()``           N/A
``mdb_env_create()``         ``lmdb::env_create()``
``mdb_env_open()``           ``lmdb::env_open()``
``mdb_env_copy()``           ``lmdb::env_copy()`` [1]_
``mdb_env_copyfd()``         ``lmdb::env_copy_fd()`` [1]_
``mdb_env_copy2()``          ``lmdb::env_copy()`` [1]_
``mdb_env_copyfd2()``        ``lmdb::env_copy_fd()`` [1]_
``mdb_env_stat()``           ``lmdb::env_stat()``
``mdb_env_info()``           ``lmdb::env_info()``
``mdb_env_sync()``           ``lmdb::env_sync()``
``mdb_env_close()``          ``lmdb::env_close()``
``mdb_env_set_flags()``      ``lmdb::env_set_flags()``
``mdb_env_get_flags()``      ``lmdb::env_get_flags()``
``mdb_env_get_path()``       ``lmdb::env_get_path()``
``mdb_env_get_fd()``         ``lmdb::env_get_fd()``
``mdb_env_set_mapsize()``    ``lmdb::env_set_mapsize()``
``mdb_env_set_maxreaders()`` ``lmdb::env_set_max_readers()``
``mdb_env_get_maxreaders()`` ``lmdb::env_get_max_readers()``
``mdb_env_set_maxdbs()``     ``lmdb::env_set_max_dbs()``
``mdb_env_get_maxkeysize()`` ``lmdb::env_get_max_keysize()``
``mdb_env_set_userctx()``    ``lmdb::env_set_userctx()`` [2]_
``mdb_env_get_userctx()``    ``lmdb::env_get_userctx()`` [2]_
``mdb_env_set_assert()``     N/A
``mdb_txn_begin()``          ``lmdb::txn_begin()``
``mdb_txn_env()``            ``lmdb::txn_env()``
``mdb_txn_id()``             ``lmdb::txn_id()`` [3]_
``mdb_txn_commit()``         ``lmdb::txn_commit()``
``mdb_txn_abort()``          ``lmdb::txn_abort()``
``mdb_txn_reset()``          ``lmdb::txn_reset()``
``mdb_txn_renew()``          ``lmdb::txn_renew()``
``mdb_dbi_open()``           ``lmdb::dbi_open()``
``mdb_stat()``               ``lmdb::dbi_stat()`` [4]_
``mdb_dbi_flags()``          ``lmdb::dbi_flags()``
``mdb_dbi_close()``          ``lmdb::dbi_close()``
``mdb_drop()``               ``lmdb::dbi_drop()`` [4]_
``mdb_set_compare()``        ``lmdb::dbi_set_compare()`` [4]_
``mdb_set_dupsort()``        ``lmdb::dbi_set_dupsort()`` [4]_
``mdb_set_relfunc()``        ``lmdb::dbi_set_relfunc()`` [4]_
``mdb_set_relctx()``         ``lmdb::dbi_set_relctx()`` [4]_
``mdb_get()``                ``lmdb::dbi_get()`` [4]_
``mdb_put()``                ``lmdb::dbi_put()`` [4]_
``mdb_del()``                ``lmdb::dbi_del()`` [4]_
``mdb_cursor_open()``        ``lmdb::cursor_open()``
``mdb_cursor_close()``       ``lmdb::cursor_close()``
``mdb_cursor_renew()``       ``lmdb::cursor_renew()``
``mdb_cursor_txn()``         ``lmdb::cursor_txn()``
``mdb_cursor_dbi()``         ``lmdb::cursor_dbi()``
``mdb_cursor_get()``         ``lmdb::cursor_get()``
``mdb_cursor_put()``         ``lmdb::cursor_put()``
``mdb_cursor_del()``         ``lmdb::cursor_del()``
``mdb_cursor_count()``       ``lmdb::cursor_count()``
``mdb_cmp()``                N/A
``mdb_dcmp()``               N/A
``mdb_reader_list()``        TODO
``mdb_reader_check()``       TODO
============================ ===================================================

.. rubric:: Footnotes

.. [1] Three-parameter signature available since LMDB 0.9.14 (2014/09/20).

.. [2] Only available since LMDB 0.9.11 (2014/01/15).

.. [3] Only available in LMDB HEAD, not yet in any 0.9.x release (as of
       0.9.16). Define the ``LMDBXX_TXN_ID`` preprocessor symbol to unhide
       this.

.. [4] Note the difference in naming. (See below.)

Caveats
^^^^^^^

* The C++ procedural interface is more strictly and consistently grouped by
  handle type than is the LMDB native interface. For instance, ``mdb_put()``
  is wrapped as the C++ function ``lmdb::dbi_put()``, not ``lmdb::put()``.
  These differences--a handful in number--all concern operations on database
  handles.
* The C++ interface takes some care to be const-correct for input-only
  parameters, something the original C interface largely ignores. Hence
  occasional use of ``const_cast`` in the wrapper code base.
* ``lmdb::dbi_put()`` does not throw an exception if LMDB returns the
  ``MDB_KEYEXIST`` error code; it instead just returns ``false``. This is
  intended to simplify common usage patterns.
* ``lmdb::dbi_get()``, ``lmdb::dbi_del()``, and ``lmdb::cursor_get()`` do
  not throw an exception if LMDB returns the ``MDB_NOTFOUND`` error code;
  they instead just return ``false``.
  This is intended to simplify common usage patterns.

* ``lmdb::env_get_max_keysize()`` returns an unsigned integer, instead of a
  signed integer as the underlying ``mdb_env_get_maxkeysize()`` function
  does. This conversion is done since the return value cannot in fact be
  negative.

Error Handling
--------------

This wrapper draws a careful distinction between three different classes of
possible LMDB error conditions:

* **Logic errors**, represented by ``lmdb::logic_error``. Errors of this
  class are thrown due to programming errors where the function interfaces
  are used in violation of documented preconditions. A common strategy for
  handling this class of error conditions is to abort the program with a
  core dump, facilitating introspection to locate and remedy the bug.
* **Fatal errors**, represented by ``lmdb::fatal_error``. Errors of this
  class are thrown due to the exhaustion of critical system resources, in
  particular available memory (``ENOMEM``), or due to attempts to exceed
  applicable system resource limits. A typical strategy for handling this
  class of error conditions is to terminate the program with a descriptive
  error message. More robust programs and shared libraries may wish to
  implement another strategy, such as retrying the operation after first
  letting most of the call stack unwind in order to free up scarce
  resources.
* **Runtime errors**, represented by ``lmdb::runtime_error``. Errors of
  this class are thrown as a matter of course to indicate various
  exceptional conditions. These conditions are generally recoverable, and
  robust programs will take care to correctly handle them.

.. note::

   The distinction between logic errors and runtime errors mirrors that
   found in the C++11 standard library, where the ``<stdexcept>`` header
   defines the standard exception base classes ``std::logic_error`` and
   ``std::runtime_error``. The standard exception class ``std::bad_alloc``,
   on the other hand, is a representative example of a fatal error.
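The three-tier taxonomy above can be mimicked in a few lines. The sketch below uses invented class names and stand-in error codes; the real wrapper switches on the actual ``MDB_*`` codes inside ``lmdb::error::raise()``:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Stand-in exception taxonomy mirroring lmdb++'s logic/fatal/runtime split.
// All class names and codes here are illustrative, not the library's own.
struct base_error  : std::runtime_error { using std::runtime_error::runtime_error; };
struct logic_err   : base_error { using base_error::base_error; };
struct fatal_err   : base_error { using base_error::base_error; };
struct runtime_err : base_error { using base_error::base_error; };

// Map a numeric error code to the matching exception class, in the spirit
// of lmdb::error::raise(). The codes 1 and 2 are made up for this sketch.
[[noreturn]] void raise_error(const std::string &origin, int rc) {
    if (rc == 1) throw logic_err(origin);  // stand-in: precondition violation
    if (rc == 2) throw fatal_err(origin);  // stand-in: corruption / ENOMEM
    throw runtime_err(origin);             // everything else is recoverable
}
```

A caller can then catch only the recoverable ``runtime_err`` cases and let fatal errors terminate the program, which is the handling strategy the text recommends.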
======================== ================================ ======================
Error code               Exception class                  Exception type
======================== ================================ ======================
``MDB_KEYEXIST``         ``lmdb::key_exist_error``        runtime
``MDB_NOTFOUND``         ``lmdb::not_found_error``        runtime
``MDB_CORRUPTED``        ``lmdb::corrupted_error``        fatal
``MDB_PANIC``            ``lmdb::panic_error``            fatal
``MDB_VERSION_MISMATCH`` ``lmdb::version_mismatch_error`` fatal
``MDB_MAP_FULL``         ``lmdb::map_full_error``         runtime
``MDB_BAD_DBI``          ``lmdb::bad_dbi_error``          runtime [4]_
(others)                 ``lmdb::runtime_error``          runtime
======================== ================================ ======================

.. rubric:: Footnotes

.. [4] Available since LMDB 0.9.14 (2014/09/20).

.. note::

   ``MDB_KEYEXIST`` and ``MDB_NOTFOUND`` are handled specially by some
   functions.

Versioning Policy
-----------------

The lmdb++ version tracks the upstream LMDB release (x.y.z) that it is
compatible with, and appends a sub-patch-level version (x.y.z.N) to indicate
changes to the wrapper itself. For example, an lmdb++ release of 0.9.14.2
would indicate that it is designed for compatibility with LMDB 0.9.14, and
is the third wrapper release (the first being .0, and the second .1) for
that upstream target.

.. note::

   To the extent that LMDB will preserve API and ABI compatibility going
   forward, older versions of the wrapper should work with newer versions of
   LMDB; and newer versions of the wrapper will generally work with older
   versions of LMDB by using the preprocessor to conditionalize the
   visibility of newer symbols--see, for example, the preprocessor guards
   around the definition of ``lmdb::env_set_userctx()``.
Installation
============

lmdb++ is currently available as a package/port in the following operating
system distributions and package management systems:

================= ============== ===============================================
Distribution      Package Name   Installation Hint
================= ============== ===============================================
`Arch Linux AUR`_ liblmdb++      ``yaourt -Sa liblmdb++``
Fink_ [5]_        lmdb++         ``sudo fink install lmdb++``
MacPorts_         lmdbxx         ``sudo port install lmdbxx``
Portage_ [6]_     lmdb++         ``sudo emerge --ask lmdb++``
================= ============== ===============================================

.. rubric:: Footnotes

.. [5] Still pending review.

.. [6] Compatible with Gentoo Linux, Funtoo Linux, and Sabayon Linux.

.. _Arch Linux AUR: https://aur.archlinux.org/packages/liblmdb%2B%2B/
.. _Fink: https://sourceforge.net/p/fink/package-submissions/4487/
.. _MacPorts: https://www.macports.org/ports.php?by=name&substr=lmdbxx
.. _Portage: https://packages.gentoo.org/package/dev-db/lmdb++

Support
=======

To report a bug or submit a patch for lmdb++, please file an issue in the
`issue tracker on GitHub <https://github.com/bendiken/lmdbxx/issues>`__.

Questions and discussions about LMDB itself should be directed to the
`OpenLDAP mailing lists <http://www.openldap.org/lists/>`__.

Elsewhere
=========

Find this project at: GitHub_, Bitbucket_, `Open Hub`_, SourceForge_,
`Travis CI`_, and `Coverity Scan`_.

.. _GitHub: https://github.com/bendiken/lmdbxx
.. _Bitbucket: https://bitbucket.org/bendiken/lmdbxx
.. _Open Hub: https://www.openhub.net/p/lmdbxx
.. _SourceForge: https://sourceforge.net/projects/lmdbxx/
.. _Travis CI: https://travis-ci.org/bendiken/lmdbxx
.. _Coverity Scan: https://scan.coverity.com/projects/4900

The API documentation is published at: http://lmdbxx.sourceforge.net/

Author
======

`Arto Bendiken <https://github.com/bendiken>`_ - http://ar.to/

License
=======

This is free and unencumbered public domain software.
For more information, see http://unlicense.org/ or the accompanying ``UNLICENSE`` file.
{ "repo_name": "drycpp/lmdbxx", "stars": "261", "repo_language": "C++", "file_name": "lmdb.info", "mime_type": "text/plain" }
/* This is free and unencumbered software released into the public domain. */ #ifndef LMDBXX_H #define LMDBXX_H /** * <lmdb++.h> - C++11 wrapper for LMDB. * * @author Arto Bendiken <arto@bendiken.net> * @see https://sourceforge.net/projects/lmdbxx/ */ #ifndef __cplusplus #error "<lmdb++.h> requires a C++ compiler" #endif #if __cplusplus < 201103L #if !defined(_MSC_VER) || _MSC_VER < 1900 #error "<lmdb++.h> requires a C++11 compiler (CXXFLAGS='-std=c++11')" #endif // _MSC_VER check #endif //////////////////////////////////////////////////////////////////////////////// #include <lmdb.h> /* for MDB_*, mdb_*() */ #ifdef LMDBXX_DEBUG #include <cassert> /* for assert() */ #endif #include <cstddef> /* for std::size_t */ #include <cstdio> /* for std::snprintf() */ #include <cstring> /* for std::strlen() */ #include <stdexcept> /* for std::runtime_error */ #include <string> /* for std::string */ #include <type_traits> /* for std::is_pod<> */ namespace lmdb { using mode = mdb_mode_t; } //////////////////////////////////////////////////////////////////////////////// /* Error Handling */ namespace lmdb { class error; class logic_error; class fatal_error; class runtime_error; class key_exist_error; class not_found_error; class corrupted_error; class panic_error; class version_mismatch_error; class map_full_error; class bad_dbi_error; } /** * Base class for LMDB exception conditions. * * @see http://symas.com/mdb/doc/group__errors.html */ class lmdb::error : public std::runtime_error { protected: const int _code; public: /** * Throws an error based on the given LMDB return code. */ [[noreturn]] static inline void raise(const char* origin, int rc); /** * Constructor. */ error(const char* const origin, const int rc) noexcept : runtime_error{origin}, _code{rc} {} /** * Returns the underlying LMDB error code. */ int code() const noexcept { return _code; } /** * Returns the origin of the LMDB error. 
*/ const char* origin() const noexcept { return runtime_error::what(); } /** * Returns the underlying LMDB error code. */ virtual const char* what() const noexcept { static thread_local char buffer[1024]; std::snprintf(buffer, sizeof(buffer), "%s: %s", origin(), ::mdb_strerror(code())); return buffer; } }; /** * Base class for logic error conditions. */ class lmdb::logic_error : public lmdb::error { public: using error::error; }; /** * Base class for fatal error conditions. */ class lmdb::fatal_error : public lmdb::error { public: using error::error; }; /** * Base class for runtime error conditions. */ class lmdb::runtime_error : public lmdb::error { public: using error::error; }; /** * Exception class for `MDB_KEYEXIST` errors. * * @see http://symas.com/mdb/doc/group__errors.html#ga05dc5bbcc7da81a7345bd8676e8e0e3b */ class lmdb::key_exist_error final : public lmdb::runtime_error { public: using runtime_error::runtime_error; }; /** * Exception class for `MDB_NOTFOUND` errors. * * @see http://symas.com/mdb/doc/group__errors.html#gabeb52e4c4be21b329e31c4add1b71926 */ class lmdb::not_found_error final : public lmdb::runtime_error { public: using runtime_error::runtime_error; }; /** * Exception class for `MDB_CORRUPTED` errors. * * @see http://symas.com/mdb/doc/group__errors.html#gaf8148bf1b85f58e264e57194bafb03ef */ class lmdb::corrupted_error final : public lmdb::fatal_error { public: using fatal_error::fatal_error; }; /** * Exception class for `MDB_PANIC` errors. * * @see http://symas.com/mdb/doc/group__errors.html#gae37b9aedcb3767faba3de8c1cf6d3473 */ class lmdb::panic_error final : public lmdb::fatal_error { public: using fatal_error::fatal_error; }; /** * Exception class for `MDB_VERSION_MISMATCH` errors. * * @see http://symas.com/mdb/doc/group__errors.html#ga909b2db047fa90fb0d37a78f86a6f99b */ class lmdb::version_mismatch_error final : public lmdb::fatal_error { public: using fatal_error::fatal_error; }; /** * Exception class for `MDB_MAP_FULL` errors. 
 *
 * @see http://symas.com/mdb/doc/group__errors.html#ga0a83370402a060c9175100d4bbfb9f25
 */
class lmdb::map_full_error final : public lmdb::runtime_error {
public:
  using runtime_error::runtime_error;
};

/**
 * Exception class for `MDB_BAD_DBI` errors.
 *
 * @since 0.9.14 (2014/09/20)
 * @see http://symas.com/mdb/doc/group__errors.html#gab4c82e050391b60a18a5df08d22a7083
 */
class lmdb::bad_dbi_error final : public lmdb::runtime_error {
public:
  using runtime_error::runtime_error;
};

inline void
lmdb::error::raise(const char* const origin,
                   const int rc) {
  switch (rc) {
    case MDB_KEYEXIST:         throw key_exist_error{origin, rc};
    case MDB_NOTFOUND:         throw not_found_error{origin, rc};
    case MDB_CORRUPTED:        throw corrupted_error{origin, rc};
    case MDB_PANIC:            throw panic_error{origin, rc};
    case MDB_VERSION_MISMATCH: throw version_mismatch_error{origin, rc};
    case MDB_MAP_FULL:         throw map_full_error{origin, rc};
#ifdef MDB_BAD_DBI
    case MDB_BAD_DBI:          throw bad_dbi_error{origin, rc};
#endif
    default:                   throw lmdb::runtime_error{origin, rc};
  }
}

////////////////////////////////////////////////////////////////////////////////
/* Procedural Interface: Metadata */

namespace lmdb {
  // TODO: mdb_version()
  // TODO: mdb_strerror()
}

////////////////////////////////////////////////////////////////////////////////
/* Procedural Interface: Environment */

namespace lmdb {
  static inline void env_create(MDB_env** env);
  static inline void env_open(MDB_env* env,
    const char* path, unsigned int flags, mode mode);
#if MDB_VERSION_FULL >= MDB_VERINT(0, 9, 14)
  static inline void env_copy(MDB_env* env, const char* path, unsigned int flags);
  static inline void env_copy_fd(MDB_env* env, mdb_filehandle_t fd, unsigned int flags);
#else
  static inline void env_copy(MDB_env* env, const char* path);
  static inline void env_copy_fd(MDB_env* env, mdb_filehandle_t fd);
#endif
  static inline void env_stat(MDB_env* env, MDB_stat* stat);
  static inline void env_info(MDB_env* env, MDB_envinfo* stat);
  static inline void env_sync(MDB_env*
env, bool force);
  static inline void env_close(MDB_env* env) noexcept;
  static inline void env_set_flags(MDB_env* env, unsigned int flags, bool onoff);
  static inline void env_get_flags(MDB_env* env, unsigned int* flags);
  static inline void env_get_path(MDB_env* env, const char** path);
  static inline void env_get_fd(MDB_env* env, mdb_filehandle_t* fd);
  static inline void env_set_mapsize(MDB_env* env, std::size_t size);
  static inline void env_set_max_readers(MDB_env* env, unsigned int count);
  static inline void env_get_max_readers(MDB_env* env, unsigned int* count);
  static inline void env_set_max_dbs(MDB_env* env, MDB_dbi count);
  static inline unsigned int env_get_max_keysize(MDB_env* env);
#if MDB_VERSION_FULL >= MDB_VERINT(0, 9, 11)
  static inline void env_set_userctx(MDB_env* env, void* ctx);
  static inline void* env_get_userctx(MDB_env* env);
#endif
  // TODO: mdb_env_set_assert()
  // TODO: mdb_reader_list()
  // TODO: mdb_reader_check()
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gaad6be3d8dcd4ea01f8df436f41d158d4
 */
static inline void
lmdb::env_create(MDB_env** env) {
  const int rc = ::mdb_env_create(env);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_create", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga32a193c6bf4d7d5c5d579e71f22e9340
 */
static inline void
lmdb::env_open(MDB_env* const env,
               const char* const path,
               const unsigned int flags,
               const mode mode) {
  const int rc = ::mdb_env_open(env, path, flags, mode);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_open", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga3bf50d7793b36aaddf6b481a44e24244
 * @see http://symas.com/mdb/doc/group__mdb.html#ga5d51d6130325f7353db0955dbedbc378
 */
static inline void
lmdb::env_copy(MDB_env* const env,
#if MDB_VERSION_FULL >= MDB_VERINT(0, 9, 14)
               const char* const path,
               const unsigned int flags = 0) {
  const int rc = ::mdb_env_copy2(env, path,
flags);
#else
               const char* const path) {
  const int rc = ::mdb_env_copy(env, path);
#endif
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_copy2", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga5040d0de1f14000fa01fc0b522ff1f86
 * @see http://symas.com/mdb/doc/group__mdb.html#ga470b0bcc64ac417de5de5930f20b1a28
 */
static inline void
lmdb::env_copy_fd(MDB_env* const env,
#if MDB_VERSION_FULL >= MDB_VERINT(0, 9, 14)
                  const mdb_filehandle_t fd,
                  const unsigned int flags = 0) {
  const int rc = ::mdb_env_copyfd2(env, fd, flags);
#else
                  const mdb_filehandle_t fd) {
  const int rc = ::mdb_env_copyfd(env, fd);
#endif
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_copyfd2", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gaf881dca452050efbd434cd16e4bae255
 */
static inline void
lmdb::env_stat(MDB_env* const env,
               MDB_stat* const stat) {
  const int rc = ::mdb_env_stat(env, stat);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_stat", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga18769362c7e7d6cf91889a028a5c5947
 */
static inline void
lmdb::env_info(MDB_env* const env,
               MDB_envinfo* const stat) {
  const int rc = ::mdb_env_info(env, stat);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_info", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga85e61f05aa68b520cc6c3b981dba5037
 */
static inline void
lmdb::env_sync(MDB_env* const env,
               const bool force = true) {
  const int rc = ::mdb_env_sync(env, force);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_sync", rc);
  }
}

/**
 * @see http://symas.com/mdb/doc/group__mdb.html#ga4366c43ada8874588b6a62fbda2d1e95
 */
static inline void
lmdb::env_close(MDB_env* const env) noexcept {
  ::mdb_env_close(env);
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga83f66cf02bfd42119451e9468dc58445
 */
static inline void
lmdb::env_set_flags(MDB_env*
const env,
                    const unsigned int flags,
                    const bool onoff = true) {
  const int rc = ::mdb_env_set_flags(env, flags, onoff ? 1 : 0);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_set_flags", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga2733aefc6f50beb49dd0c6eb19b067d9
 */
static inline void
lmdb::env_get_flags(MDB_env* const env,
                    unsigned int* const flags) {
  const int rc = ::mdb_env_get_flags(env, flags);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_get_flags", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gac699fdd8c4f8013577cb933fb6a757fe
 */
static inline void
lmdb::env_get_path(MDB_env* const env,
                   const char** path) {
  const int rc = ::mdb_env_get_path(env, path);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_get_path", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gaf1570e7c0e5a5d860fef1032cec7d5f2
 */
static inline void
lmdb::env_get_fd(MDB_env* const env,
                 mdb_filehandle_t* const fd) {
  const int rc = ::mdb_env_get_fd(env, fd);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_get_fd", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gaa2506ec8dab3d969b0e609cd82e619e5
 */
static inline void
lmdb::env_set_mapsize(MDB_env* const env,
                      const std::size_t size) {
  const int rc = ::mdb_env_set_mapsize(env, size);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_set_mapsize", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gae687966c24b790630be2a41573fe40e2
 */
static inline void
lmdb::env_set_max_readers(MDB_env* const env,
                          const unsigned int count) {
  const int rc = ::mdb_env_set_maxreaders(env, count);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_set_maxreaders", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga70e143cf11760d869f754c9c9956e6cc
 */
static inline void
lmdb::env_get_max_readers(MDB_env* const env,
                          unsigned int* const count) {
  const int rc = ::mdb_env_get_maxreaders(env, count);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_get_maxreaders", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gaa2fc2f1f37cb1115e733b62cab2fcdbc
 */
static inline void
lmdb::env_set_max_dbs(MDB_env* const env,
                      const MDB_dbi count) {
  const int rc = ::mdb_env_set_maxdbs(env, count);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_set_maxdbs", rc);
  }
}

/**
 * @see http://symas.com/mdb/doc/group__mdb.html#gaaf0be004f33828bf2fb09d77eb3cef94
 */
static inline unsigned int
lmdb::env_get_max_keysize(MDB_env* const env) {
  const int rc = ::mdb_env_get_maxkeysize(env);
#ifdef LMDBXX_DEBUG
  assert(rc >= 0);
#endif
  return static_cast<unsigned int>(rc);
}

#if MDB_VERSION_FULL >= MDB_VERINT(0, 9, 11)
/**
 * @throws lmdb::error on failure
 * @since 0.9.11 (2014/01/15)
 * @see http://symas.com/mdb/doc/group__mdb.html#gaf2fe09eb9c96eeb915a76bf713eecc46
 */
static inline void
lmdb::env_set_userctx(MDB_env* const env,
                      void* const ctx) {
  const int rc = ::mdb_env_set_userctx(env, ctx);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_env_set_userctx", rc);
  }
}
#endif

#if MDB_VERSION_FULL >= MDB_VERINT(0, 9, 11)
/**
 * @since 0.9.11 (2014/01/15)
 * @see http://symas.com/mdb/doc/group__mdb.html#ga45df6a4fb150cda2316b5ae224ba52f1
 */
static inline void*
lmdb::env_get_userctx(MDB_env* const env) {
  return ::mdb_env_get_userctx(env);
}
#endif

////////////////////////////////////////////////////////////////////////////////
/* Procedural Interface: Transactions */

namespace lmdb {
  static inline void txn_begin(
    MDB_env* env, MDB_txn* parent, unsigned int flags, MDB_txn** txn);
  static inline MDB_env* txn_env(MDB_txn* txn) noexcept;
#ifdef LMDBXX_TXN_ID
  static inline std::size_t txn_id(MDB_txn* txn) noexcept;
#endif
  static inline void txn_commit(MDB_txn* txn);
  static inline void txn_abort(MDB_txn* txn) noexcept;
  static inline void
txn_reset(MDB_txn* txn) noexcept;
  static inline void txn_renew(MDB_txn* txn);
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gad7ea55da06b77513609efebd44b26920
 */
static inline void
lmdb::txn_begin(MDB_env* const env,
                MDB_txn* const parent,
                const unsigned int flags,
                MDB_txn** txn) {
  const int rc = ::mdb_txn_begin(env, parent, flags, txn);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_txn_begin", rc);
  }
}

/**
 * @see http://symas.com/mdb/doc/group__mdb.html#gaeb17735b8aaa2938a78a45cab85c06a0
 */
static inline MDB_env*
lmdb::txn_env(MDB_txn* const txn) noexcept {
  return ::mdb_txn_env(txn);
}

#ifdef LMDBXX_TXN_ID
/**
 * @note Only available in HEAD, not yet in any 0.9.x release (as of 0.9.16).
 */
static inline std::size_t
lmdb::txn_id(MDB_txn* const txn) noexcept {
  return ::mdb_txn_id(txn);
}
#endif

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga846fbd6f46105617ac9f4d76476f6597
 */
static inline void
lmdb::txn_commit(MDB_txn* const txn) {
  const int rc = ::mdb_txn_commit(txn);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_txn_commit", rc);
  }
}

/**
 * @see http://symas.com/mdb/doc/group__mdb.html#ga73a5938ae4c3239ee11efa07eb22b882
 */
static inline void
lmdb::txn_abort(MDB_txn* const txn) noexcept {
  ::mdb_txn_abort(txn);
}

/**
 * @see http://symas.com/mdb/doc/group__mdb.html#ga02b06706f8a66249769503c4e88c56cd
 */
static inline void
lmdb::txn_reset(MDB_txn* const txn) noexcept {
  ::mdb_txn_reset(txn);
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga6c6f917959517ede1c504cf7c720ce6d
 */
static inline void
lmdb::txn_renew(MDB_txn* const txn) {
  const int rc = ::mdb_txn_renew(txn);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_txn_renew", rc);
  }
}

////////////////////////////////////////////////////////////////////////////////
/* Procedural Interface: Databases */

namespace lmdb {
  static inline void dbi_open(
    MDB_txn* txn, const char* name, unsigned int flags, MDB_dbi* dbi);
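  /*
   * Example (an editorial sketch, not part of the original header): the
   * procedural interface mirrors the LMDB C API one-to-one, but reports
   * failures by throwing `lmdb::error` instead of returning codes. With a
   * hypothetical, already-opened `MDB_env* env`, a write transaction that
   * stores one record looks like:
   *
   *     MDB_txn* txn{nullptr};
   *     lmdb::txn_begin(env, nullptr, 0, &txn);  // throws on failure
   *     MDB_dbi dbi{};
   *     lmdb::dbi_open(txn, nullptr, 0, &dbi);   // open the default database
   *     lmdb::val key{"answer"}, value{"42"};    // lmdb::val wraps MDB_val
   *     lmdb::dbi_put(txn, dbi, key, value, 0);  // returns false on MDB_KEYEXIST
   *     lmdb::txn_commit(txn);                   // txn must not be reused after this
   */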
  static inline void dbi_stat(MDB_txn* txn, MDB_dbi dbi, MDB_stat* stat);
  static inline void dbi_flags(MDB_txn* txn, MDB_dbi dbi, unsigned int* flags);
  static inline void dbi_close(MDB_env* env, MDB_dbi dbi) noexcept;
  static inline void dbi_drop(MDB_txn* txn, MDB_dbi dbi, bool del);
  static inline void dbi_set_compare(MDB_txn* txn, MDB_dbi dbi, MDB_cmp_func* cmp);
  static inline void dbi_set_dupsort(MDB_txn* txn, MDB_dbi dbi, MDB_cmp_func* cmp);
  static inline void dbi_set_relfunc(MDB_txn* txn, MDB_dbi dbi, MDB_rel_func* rel);
  static inline void dbi_set_relctx(MDB_txn* txn, MDB_dbi dbi, void* ctx);
  static inline bool dbi_get(MDB_txn* txn, MDB_dbi dbi, const MDB_val* key, MDB_val* data);
  static inline bool dbi_put(MDB_txn* txn, MDB_dbi dbi, const MDB_val* key, MDB_val* data, unsigned int flags);
  static inline bool dbi_del(MDB_txn* txn, MDB_dbi dbi, const MDB_val* key, const MDB_val* data);
  // TODO: mdb_cmp()
  // TODO: mdb_dcmp()
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gac08cad5b096925642ca359a6d6f0562a
 */
static inline void
lmdb::dbi_open(MDB_txn* const txn,
               const char* const name,
               const unsigned int flags,
               MDB_dbi* const dbi) {
  const int rc = ::mdb_dbi_open(txn, name, flags, dbi);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_dbi_open", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gae6c1069febe94299769dbdd032fadef6
 */
static inline void
lmdb::dbi_stat(MDB_txn* const txn,
               const MDB_dbi dbi,
               MDB_stat* const result) {
  const int rc = ::mdb_stat(txn, dbi, result);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_stat", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga95ba4cb721035478a8705e57b91ae4d4
 */
static inline void
lmdb::dbi_flags(MDB_txn* const txn,
                const MDB_dbi dbi,
                unsigned int* const flags) {
  const int rc = ::mdb_dbi_flags(txn, dbi, flags);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_dbi_flags", rc);
  }
}

/**
 * @see
 http://symas.com/mdb/doc/group__mdb.html#ga52dd98d0c542378370cd6b712ff961b5
 */
static inline void
lmdb::dbi_close(MDB_env* const env,
                const MDB_dbi dbi) noexcept {
  ::mdb_dbi_close(env, dbi);
}

/**
 * @see http://symas.com/mdb/doc/group__mdb.html#gab966fab3840fc54a6571dfb32b00f2db
 */
static inline void
lmdb::dbi_drop(MDB_txn* const txn,
               const MDB_dbi dbi,
               const bool del = false) {
  const int rc = ::mdb_drop(txn, dbi, del ? 1 : 0);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_drop", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga68e47ffcf72eceec553c72b1784ee0fe
 */
static inline void
lmdb::dbi_set_compare(MDB_txn* const txn,
                      const MDB_dbi dbi,
                      MDB_cmp_func* const cmp = nullptr) {
  const int rc = ::mdb_set_compare(txn, dbi, cmp);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_set_compare", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gacef4ec3dab0bbd9bc978b73c19c879ae
 */
static inline void
lmdb::dbi_set_dupsort(MDB_txn* const txn,
                      const MDB_dbi dbi,
                      MDB_cmp_func* const cmp = nullptr) {
  const int rc = ::mdb_set_dupsort(txn, dbi, cmp);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_set_dupsort", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga697d82c7afe79f142207ad5adcdebfeb
 */
static inline void
lmdb::dbi_set_relfunc(MDB_txn* const txn,
                      const MDB_dbi dbi,
                      MDB_rel_func* const rel) {
  const int rc = ::mdb_set_relfunc(txn, dbi, rel);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_set_relfunc", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga7c34246308cee01724a1839a8f5cc594
 */
static inline void
lmdb::dbi_set_relctx(MDB_txn* const txn,
                     const MDB_dbi dbi,
                     void* const ctx) {
  const int rc = ::mdb_set_relctx(txn, dbi, ctx);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_set_relctx", rc);
  }
}

/**
 * @retval true  if the key/value pair was retrieved
 * @retval false if the key wasn't found
 * @see
 http://symas.com/mdb/doc/group__mdb.html#ga8bf10cd91d3f3a83a34d04ce6b07992d
 */
static inline bool
lmdb::dbi_get(MDB_txn* const txn,
              const MDB_dbi dbi,
              const MDB_val* const key,
              MDB_val* const data) {
  const int rc = ::mdb_get(txn, dbi, const_cast<MDB_val*>(key), data);
  if (rc != MDB_SUCCESS && rc != MDB_NOTFOUND) {
    error::raise("mdb_get", rc);
  }
  return (rc == MDB_SUCCESS);
}

/**
 * @retval true  if the key/value pair was inserted
 * @retval false if the key already existed
 * @see http://symas.com/mdb/doc/group__mdb.html#ga4fa8573d9236d54687c61827ebf8cac0
 */
static inline bool
lmdb::dbi_put(MDB_txn* const txn,
              const MDB_dbi dbi,
              const MDB_val* const key,
              MDB_val* const data,
              const unsigned int flags = 0) {
  const int rc = ::mdb_put(txn, dbi, const_cast<MDB_val*>(key), data, flags);
  if (rc != MDB_SUCCESS && rc != MDB_KEYEXIST) {
    error::raise("mdb_put", rc);
  }
  return (rc == MDB_SUCCESS);
}

/**
 * @retval true  if the key/value pair was removed
 * @retval false if the key wasn't found
 * @see http://symas.com/mdb/doc/group__mdb.html#gab8182f9360ea69ac0afd4a4eaab1ddb0
 */
static inline bool
lmdb::dbi_del(MDB_txn* const txn,
              const MDB_dbi dbi,
              const MDB_val* const key,
              const MDB_val* const data = nullptr) {
  const int rc = ::mdb_del(txn, dbi, const_cast<MDB_val*>(key), const_cast<MDB_val*>(data));
  if (rc != MDB_SUCCESS && rc != MDB_NOTFOUND) {
    error::raise("mdb_del", rc);
  }
  return (rc == MDB_SUCCESS);
}

////////////////////////////////////////////////////////////////////////////////
/* Procedural Interface: Cursors */

namespace lmdb {
  static inline void cursor_open(MDB_txn* txn, MDB_dbi dbi, MDB_cursor** cursor);
  static inline void cursor_close(MDB_cursor* cursor) noexcept;
  static inline void cursor_renew(MDB_txn* txn, MDB_cursor* cursor);
  static inline MDB_txn* cursor_txn(MDB_cursor* cursor) noexcept;
  static inline MDB_dbi cursor_dbi(MDB_cursor* cursor) noexcept;
  static inline bool cursor_get(
    MDB_cursor* cursor, MDB_val* key, MDB_val* data, MDB_cursor_op op);
  static inline void
cursor_put(MDB_cursor* cursor, MDB_val* key, MDB_val* data, unsigned int flags);
  static inline void cursor_del(MDB_cursor* cursor, unsigned int flags);
  static inline void cursor_count(MDB_cursor* cursor, std::size_t& count);
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga9ff5d7bd42557fd5ee235dc1d62613aa
 */
static inline void
lmdb::cursor_open(MDB_txn* const txn,
                  const MDB_dbi dbi,
                  MDB_cursor** const cursor) {
  const int rc = ::mdb_cursor_open(txn, dbi, cursor);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_cursor_open", rc);
  }
}

/**
 * @see http://symas.com/mdb/doc/group__mdb.html#gad685f5d73c052715c7bd859cc4c05188
 */
static inline void
lmdb::cursor_close(MDB_cursor* const cursor) noexcept {
  ::mdb_cursor_close(cursor);
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#gac8b57befb68793070c85ea813df481af
 */
static inline void
lmdb::cursor_renew(MDB_txn* const txn,
                   MDB_cursor* const cursor) {
  const int rc = ::mdb_cursor_renew(txn, cursor);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_cursor_renew", rc);
  }
}

/**
 * @see http://symas.com/mdb/doc/group__mdb.html#ga7bf0d458f7f36b5232fcb368ebda79e0
 */
static inline MDB_txn*
lmdb::cursor_txn(MDB_cursor* const cursor) noexcept {
  return ::mdb_cursor_txn(cursor);
}

/**
 * @see http://symas.com/mdb/doc/group__mdb.html#ga2f7092cf70ee816fb3d2c3267a732372
 */
static inline MDB_dbi
lmdb::cursor_dbi(MDB_cursor* const cursor) noexcept {
  return ::mdb_cursor_dbi(cursor);
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga48df35fb102536b32dfbb801a47b4cb0
 */
static inline bool
lmdb::cursor_get(MDB_cursor* const cursor,
                 MDB_val* const key,
                 MDB_val* const data,
                 const MDB_cursor_op op) {
  const int rc = ::mdb_cursor_get(cursor, key, data, op);
  if (rc != MDB_SUCCESS && rc != MDB_NOTFOUND) {
    error::raise("mdb_cursor_get", rc);
  }
  return (rc == MDB_SUCCESS);
}

/**
 * @throws lmdb::error on failure
 * @see
 http://symas.com/mdb/doc/group__mdb.html#ga1f83ccb40011837ff37cc32be01ad91e
 */
static inline void
lmdb::cursor_put(MDB_cursor* const cursor,
                 MDB_val* const key,
                 MDB_val* const data,
                 const unsigned int flags = 0) {
  const int rc = ::mdb_cursor_put(cursor, key, data, flags);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_cursor_put", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga26a52d3efcfd72e5bf6bd6960bf75f95
 */
static inline void
lmdb::cursor_del(MDB_cursor* const cursor,
                 const unsigned int flags = 0) {
  const int rc = ::mdb_cursor_del(cursor, flags);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_cursor_del", rc);
  }
}

/**
 * @throws lmdb::error on failure
 * @see http://symas.com/mdb/doc/group__mdb.html#ga4041fd1e1862c6b7d5f10590b86ffbe2
 */
static inline void
lmdb::cursor_count(MDB_cursor* const cursor,
                   std::size_t& count) {
  const int rc = ::mdb_cursor_count(cursor, &count);
  if (rc != MDB_SUCCESS) {
    error::raise("mdb_cursor_count", rc);
  }
}

////////////////////////////////////////////////////////////////////////////////
/* Resource Interface: Values */

namespace lmdb {
  class val;
}

/**
 * Wrapper class for `MDB_val` structures.
 *
 * @note Instances of this class are movable, but not copyable: declaring
 *   the move constructor suppresses the implicit copy operations.
 * @see http://symas.com/mdb/doc/group__mdb.html#structMDB__val
 */
class lmdb::val {
protected:
  MDB_val _val;

public:
  /**
   * Default constructor.
   */
  val() noexcept = default;

  /**
   * Constructor.
   */
  val(const std::string& data) noexcept
    : val{data.data(), data.size()} {}

  /**
   * Constructor.
   */
  val(const char* const data) noexcept
    : val{data, std::strlen(data)} {}

  /**
   * Constructor.
   */
  val(const void* const data,
      const std::size_t size) noexcept
    : _val{size, const_cast<void*>(data)} {}

  /**
   * Move constructor.
   */
  val(val&& other) noexcept = default;

  /**
   * Move assignment operator.
   */
  val& operator=(val&& other) noexcept = default;

  /**
   * Destructor.
   */
  ~val() noexcept = default;

  /**
   * Returns an `MDB_val*` pointer.
   */
  operator MDB_val*() noexcept {
    return &_val;
  }

  /**
   * Returns an `MDB_val*` pointer.
   */
  operator const MDB_val*() const noexcept {
    return &_val;
  }

  /**
   * Determines whether this value is empty.
   */
  bool empty() const noexcept {
    return size() == 0;
  }

  /**
   * Returns the size of the data.
   */
  std::size_t size() const noexcept {
    return _val.mv_size;
  }

  /**
   * Returns a pointer to the data.
   */
  template<typename T>
  T* data() noexcept {
    return reinterpret_cast<T*>(_val.mv_data);
  }

  /**
   * Returns a pointer to the data.
   */
  template<typename T>
  const T* data() const noexcept {
    return reinterpret_cast<T*>(_val.mv_data);
  }

  /**
   * Returns a pointer to the data.
   */
  char* data() noexcept {
    return reinterpret_cast<char*>(_val.mv_data);
  }

  /**
   * Returns a pointer to the data.
   */
  const char* data() const noexcept {
    return reinterpret_cast<char*>(_val.mv_data);
  }

  /**
   * Assigns the value.
   */
  template<typename T>
  val& assign(const T* const data,
              const std::size_t size) noexcept {
    _val.mv_size = size;
    _val.mv_data = const_cast<void*>(reinterpret_cast<const void*>(data));
    return *this;
  }

  /**
   * Assigns the value.
   */
  val& assign(const char* const data) noexcept {
    return assign(data, std::strlen(data));
  }

  /**
   * Assigns the value.
   */
  val& assign(const std::string& data) noexcept {
    return assign(data.data(), data.size());
  }
};

#if !(defined(__COVERITY__) || defined(_MSC_VER))
static_assert(std::is_pod<lmdb::val>::value, "lmdb::val must be a POD type");
static_assert(sizeof(lmdb::val) == sizeof(MDB_val), "sizeof(lmdb::val) != sizeof(MDB_val)");
#endif

////////////////////////////////////////////////////////////////////////////////
/* Resource Interface: Environment */

namespace lmdb {
  class env;
}

/**
 * Resource class for `MDB_env*` handles.
 *
 * @note Instances of this class are movable, but not copyable.
 * @see http://symas.com/mdb/doc/group__internal.html#structMDB__env
 */
class lmdb::env {
protected:
  MDB_env* _handle{nullptr};

public:
  static constexpr unsigned int default_flags = 0;
  static constexpr mode default_mode = 0644; /* -rw-r--r-- */

  /**
   * Creates a new LMDB environment.
   *
   * @param flags
   * @throws lmdb::error on failure
   */
  static env create(const unsigned int flags = default_flags) {
    MDB_env* handle{nullptr};
    lmdb::env_create(&handle);
#ifdef LMDBXX_DEBUG
    assert(handle != nullptr);
#endif
    if (flags) {
      try {
        lmdb::env_set_flags(handle, flags);
      } catch (const lmdb::error&) {
        lmdb::env_close(handle);
        throw;
      }
    }
    return env{handle};
  }

  /**
   * Constructor.
   *
   * @param handle a valid `MDB_env*` handle
   */
  env(MDB_env* const handle) noexcept
    : _handle{handle} {}

  /**
   * Move constructor.
   */
  env(env&& other) noexcept {
    std::swap(_handle, other._handle);
  }

  /**
   * Move assignment operator.
   */
  env& operator=(env&& other) noexcept {
    if (this != &other) {
      std::swap(_handle, other._handle);
    }
    return *this;
  }

  /**
   * Destructor.
   */
  ~env() noexcept {
    try { close(); } catch (...) {}
  }

  /**
   * Returns the underlying `MDB_env*` handle.
   */
  operator MDB_env*() const noexcept {
    return _handle;
  }

  /**
   * Returns the underlying `MDB_env*` handle.
   */
  MDB_env* handle() const noexcept {
    return _handle;
  }

  /**
   * Flushes data buffers to disk.
   *
   * @param force
   * @throws lmdb::error on failure
   */
  void sync(const bool force = true) {
    lmdb::env_sync(handle(), force);
  }

  /**
   * Closes this environment, releasing the memory map.
   *
   * @note this method is idempotent
   * @post `handle() == nullptr`
   */
  void close() noexcept {
    if (handle()) {
      lmdb::env_close(handle());
      _handle = nullptr;
    }
  }

  /**
   * Opens this environment.
   *
   * @param path
   * @param flags
   * @param mode
   * @throws lmdb::error on failure
   */
  env& open(const char* const path,
            const unsigned int flags = default_flags,
            const mode mode = default_mode) {
    lmdb::env_open(handle(), path, flags, mode);
    return *this;
  }

  /**
   * @param flags
   * @param onoff
   * @throws lmdb::error on failure
   */
  env& set_flags(const unsigned int flags,
                 const bool onoff = true) {
    lmdb::env_set_flags(handle(), flags, onoff);
    return *this;
  }

  /**
   * @param size
   * @throws lmdb::error on failure
   */
  env& set_mapsize(const std::size_t size) {
    lmdb::env_set_mapsize(handle(), size);
    return *this;
  }

  /**
   * @param count
   * @throws lmdb::error on failure
   */
  env& set_max_readers(const unsigned int count) {
    lmdb::env_set_max_readers(handle(), count);
    return *this;
  }

  /**
   * @param count
   * @throws lmdb::error on failure
   */
  env& set_max_dbs(const MDB_dbi count) {
    lmdb::env_set_max_dbs(handle(), count);
    return *this;
  }
};

////////////////////////////////////////////////////////////////////////////////
/* Resource Interface: Transactions */

namespace lmdb {
  class txn;
}

/**
 * Resource class for `MDB_txn*` handles.
 *
 * @note Instances of this class are movable, but not copyable.
 * @see http://symas.com/mdb/doc/group__internal.html#structMDB__txn
 */
class lmdb::txn {
protected:
  MDB_txn* _handle{nullptr};

public:
  static constexpr unsigned int default_flags = 0;

  /**
   * Creates a new LMDB transaction.
   *
   * @param env the environment handle
   * @param parent
   * @param flags
   * @throws lmdb::error on failure
   */
  static txn begin(MDB_env* const env,
                   MDB_txn* const parent = nullptr,
                   const unsigned int flags = default_flags) {
    MDB_txn* handle{nullptr};
    lmdb::txn_begin(env, parent, flags, &handle);
#ifdef LMDBXX_DEBUG
    assert(handle != nullptr);
#endif
    return txn{handle};
  }

  /**
   * Constructor.
   *
   * @param handle a valid `MDB_txn*` handle
   */
  txn(MDB_txn* const handle) noexcept
    : _handle{handle} {}

  /**
   * Move constructor.
   */
  txn(txn&& other) noexcept {
    std::swap(_handle, other._handle);
  }

  /**
   * Move assignment operator.
   */
  txn& operator=(txn&& other) noexcept {
    if (this != &other) {
      std::swap(_handle, other._handle);
    }
    return *this;
  }

  /**
   * Destructor.
   */
  ~txn() noexcept {
    if (_handle) {
      try { abort(); } catch (...) {}
      _handle = nullptr;
    }
  }

  /**
   * Returns the underlying `MDB_txn*` handle.
   */
  operator MDB_txn*() const noexcept {
    return _handle;
  }

  /**
   * Returns the underlying `MDB_txn*` handle.
   */
  MDB_txn* handle() const noexcept {
    return _handle;
  }

  /**
   * Returns the transaction's `MDB_env*` handle.
   */
  MDB_env* env() const noexcept {
    return lmdb::txn_env(handle());
  }

  /**
   * Commits this transaction.
   *
   * @throws lmdb::error on failure
   * @post `handle() == nullptr`
   */
  void commit() {
    lmdb::txn_commit(_handle);
    _handle = nullptr;
  }

  /**
   * Aborts this transaction.
   *
   * @post `handle() == nullptr`
   */
  void abort() noexcept {
    lmdb::txn_abort(_handle);
    _handle = nullptr;
  }

  /**
   * Resets this read-only transaction.
   */
  void reset() noexcept {
    lmdb::txn_reset(_handle);
  }

  /**
   * Renews this read-only transaction.
   *
   * @throws lmdb::error on failure
   */
  void renew() {
    lmdb::txn_renew(_handle);
  }
};

////////////////////////////////////////////////////////////////////////////////
/* Resource Interface: Databases */

namespace lmdb {
  class dbi;
}

/**
 * Resource class for `MDB_dbi` handles.
 *
 * @note Instances of this class are movable, but not copyable.
 * @see http://symas.com/mdb/doc/group__mdb.html#gadbe68a06c448dfb62da16443d251a78b
 */
class lmdb::dbi {
protected:
  MDB_dbi _handle{0};

public:
  static constexpr unsigned int default_flags = 0;
  static constexpr unsigned int default_put_flags = 0;

  /**
   * Opens a database handle.
   *
   * @param txn the transaction handle
   * @param name
   * @param flags
   * @throws lmdb::error on failure
   */
  static dbi open(MDB_txn* const txn,
                  const char* const name = nullptr,
                  const unsigned int flags = default_flags) {
    MDB_dbi handle{};
    lmdb::dbi_open(txn, name, flags, &handle);
    return dbi{handle};
  }

  /**
   * Constructor.
   *
   * @param handle a valid `MDB_dbi` handle
   */
  dbi(const MDB_dbi handle) noexcept
    : _handle{handle} {}

  /**
   * Move constructor.
   */
  dbi(dbi&& other) noexcept {
    std::swap(_handle, other._handle);
  }

  /**
   * Move assignment operator.
   */
  dbi& operator=(dbi&& other) noexcept {
    if (this != &other) {
      std::swap(_handle, other._handle);
    }
    return *this;
  }

  /**
   * Destructor.
   */
  ~dbi() noexcept {
    if (_handle) {
      /* No need to call close() here. */
    }
  }

  /**
   * Returns the underlying `MDB_dbi` handle.
   */
  operator MDB_dbi() const noexcept {
    return _handle;
  }

  /**
   * Returns the underlying `MDB_dbi` handle.
   */
  MDB_dbi handle() const noexcept {
    return _handle;
  }

  /**
   * Returns statistics for this database.
   *
   * @param txn a transaction handle
   * @throws lmdb::error on failure
   */
  MDB_stat stat(MDB_txn* const txn) const {
    MDB_stat result;
    lmdb::dbi_stat(txn, handle(), &result);
    return result;
  }

  /**
   * Retrieves the flags for this database handle.
   *
   * @param txn a transaction handle
   * @throws lmdb::error on failure
   */
  unsigned int flags(MDB_txn* const txn) const {
    unsigned int result{};
    lmdb::dbi_flags(txn, handle(), &result);
    return result;
  }

  /**
   * Returns the number of records in this database.
   *
   * @param txn a transaction handle
   * @throws lmdb::error on failure
   */
  std::size_t size(MDB_txn* const txn) const {
    return stat(txn).ms_entries;
  }

  /**
   * @param txn a transaction handle
   * @param del
   * @throws lmdb::error on failure
   */
  void drop(MDB_txn* const txn,
            const bool del = false) {
    lmdb::dbi_drop(txn, handle(), del);
  }

  /**
   * Sets a custom key comparison function for this database.
   *
   * @param txn a transaction handle
   * @param cmp the comparison function
   * @throws lmdb::error on failure
   */
  dbi& set_compare(MDB_txn* const txn,
                   MDB_cmp_func* const cmp = nullptr) {
    lmdb::dbi_set_compare(txn, handle(), cmp);
    return *this;
  }

  /**
   * Retrieves a key/value pair from this database.
   *
   * @param txn a transaction handle
   * @param key
   * @param data
   * @throws lmdb::error on failure
   */
  bool get(MDB_txn* const txn,
           const val& key,
           val& data) {
    return lmdb::dbi_get(txn, handle(), key, data);
  }

  /**
   * Retrieves a key from this database.
   *
   * @param txn a transaction handle
   * @param key
   * @throws lmdb::error on failure
   */
  template<typename K>
  bool get(MDB_txn* const txn,
           const K& key) const {
    const lmdb::val k{&key, sizeof(K)};
    lmdb::val v{};
    return lmdb::dbi_get(txn, handle(), k, v);
  }

  /**
   * Retrieves a key/value pair from this database.
   *
   * @param txn a transaction handle
   * @param key
   * @param val
   * @throws lmdb::error on failure
   */
  template<typename K, typename V>
  bool get(MDB_txn* const txn,
           const K& key,
           V& val) const {
    const lmdb::val k{&key, sizeof(K)};
    lmdb::val v{};
    const bool result = lmdb::dbi_get(txn, handle(), k, v);
    if (result) {
      val = *v.data<const V>();
    }
    return result;
  }

  /**
   * Retrieves a key/value pair from this database.
   *
   * @param txn a transaction handle
   * @param key a NUL-terminated string key
   * @param val
   * @throws lmdb::error on failure
   */
  template<typename V>
  bool get(MDB_txn* const txn,
           const char* const key,
           V& val) const {
    const lmdb::val k{key, std::strlen(key)};
    lmdb::val v{};
    const bool result = lmdb::dbi_get(txn, handle(), k, v);
    if (result) {
      val = *v.data<const V>();
    }
    return result;
  }

  /**
   * Stores a key/value pair into this database.
* * @param txn a transaction handle * @param key * @param data * @param flags * @throws lmdb::error on failure */ bool put(MDB_txn* const txn, const val& key, val& data, const unsigned int flags = default_put_flags) { return lmdb::dbi_put(txn, handle(), key, data, flags); } /** * Stores a key into this database. * * @param txn a transaction handle * @param key * @param flags * @throws lmdb::error on failure */ template<typename K> bool put(MDB_txn* const txn, const K& key, const unsigned int flags = default_put_flags) { const lmdb::val k{&key, sizeof(K)}; lmdb::val v{}; return lmdb::dbi_put(txn, handle(), k, v, flags); } /** * Stores a key/value pair into this database. * * @param txn a transaction handle * @param key * @param val * @param flags * @throws lmdb::error on failure */ template<typename K, typename V> bool put(MDB_txn* const txn, const K& key, const V& val, const unsigned int flags = default_put_flags) { const lmdb::val k{&key, sizeof(K)}; lmdb::val v{&val, sizeof(V)}; return lmdb::dbi_put(txn, handle(), k, v, flags); } /** * Stores a key/value pair into this database. * * @param txn a transaction handle * @param key a NUL-terminated string key * @param val * @param flags * @throws lmdb::error on failure */ template<typename V> bool put(MDB_txn* const txn, const char* const key, const V& val, const unsigned int flags = default_put_flags) { const lmdb::val k{key, std::strlen(key)}; lmdb::val v{&val, sizeof(V)}; return lmdb::dbi_put(txn, handle(), k, v, flags); } /** * Stores a key/value pair into this database. 
   *
   * @param txn a transaction handle
   * @param key a NUL-terminated string key
   * @param val a NUL-terminated string value
   * @param flags
   * @throws lmdb::error on failure
   */
  bool put(MDB_txn* const txn,
           const char* const key,
           const char* const val,
           const unsigned int flags = default_put_flags) {
    const lmdb::val k{key, std::strlen(key)};
    lmdb::val v{val, std::strlen(val)};
    return lmdb::dbi_put(txn, handle(), k, v, flags);
  }

  /**
   * Removes a key/value pair from this database.
   *
   * @param txn a transaction handle
   * @param key
   * @throws lmdb::error on failure
   */
  bool del(MDB_txn* const txn,
           const val& key) {
    return lmdb::dbi_del(txn, handle(), key);
  }

  /**
   * Removes a key/value pair from this database.
   *
   * @param txn a transaction handle
   * @param key
   * @throws lmdb::error on failure
   */
  template<typename K>
  bool del(MDB_txn* const txn,
           const K& key) {
    const lmdb::val k{&key, sizeof(K)};
    return lmdb::dbi_del(txn, handle(), k);
  }
};

////////////////////////////////////////////////////////////////////////////////
/* Resource Interface: Cursors */

namespace lmdb {
  class cursor;
}

/**
 * Resource class for `MDB_cursor*` handles.
 *
 * @note Instances of this class are movable, but not copyable.
 * @see http://symas.com/mdb/doc/group__internal.html#structMDB__cursor
 */
class lmdb::cursor {
protected:
  MDB_cursor* _handle{nullptr};

public:
  static constexpr unsigned int default_flags = 0;

  /**
   * Creates an LMDB cursor.
   *
   * @param txn the transaction handle
   * @param dbi the database handle
   * @throws lmdb::error on failure
   */
  static cursor
  open(MDB_txn* const txn,
       const MDB_dbi dbi) {
    MDB_cursor* handle{};
    lmdb::cursor_open(txn, dbi, &handle);
#ifdef LMDBXX_DEBUG
    assert(handle != nullptr);
#endif
    return cursor{handle};
  }

  /**
   * Constructor.
   *
   * @param handle a valid `MDB_cursor*` handle
   */
  cursor(MDB_cursor* const handle) noexcept
    : _handle{handle} {}

  /**
   * Move constructor.
   */
  cursor(cursor&& other) noexcept {
    std::swap(_handle, other._handle);
  }

  /**
   * Move assignment operator.
*/ cursor& operator=(cursor&& other) noexcept { if (this != &other) { std::swap(_handle, other._handle); } return *this; } /** * Destructor. */ ~cursor() noexcept { try { close(); } catch (...) {} } /** * Returns the underlying `MDB_cursor*` handle. */ operator MDB_cursor*() const noexcept { return _handle; } /** * Returns the underlying `MDB_cursor*` handle. */ MDB_cursor* handle() const noexcept { return _handle; } /** * Closes this cursor. * * @note this method is idempotent * @post `handle() == nullptr` */ void close() noexcept { if (_handle) { lmdb::cursor_close(_handle); _handle = nullptr; } } /** * Renews this cursor. * * @param txn the transaction scope * @throws lmdb::error on failure */ void renew(MDB_txn* const txn) { lmdb::cursor_renew(txn, handle()); } /** * Returns the cursor's transaction handle. */ MDB_txn* txn() const noexcept { return lmdb::cursor_txn(handle()); } /** * Returns the cursor's database handle. */ MDB_dbi dbi() const noexcept { return lmdb::cursor_dbi(handle()); } /** * Retrieves a key from the database. * * @param key * @param op * @throws lmdb::error on failure */ bool get(MDB_val* const key, const MDB_cursor_op op) { return get(key, nullptr, op); } /** * Retrieves a key from the database. * * @param key * @param op * @throws lmdb::error on failure */ bool get(lmdb::val& key, const MDB_cursor_op op) { return get(key, nullptr, op); } /** * Retrieves a key/value pair from the database. * * @param key * @param val (may be `nullptr`) * @param op * @throws lmdb::error on failure */ bool get(MDB_val* const key, MDB_val* const val, const MDB_cursor_op op) { return lmdb::cursor_get(handle(), key, val, op); } /** * Retrieves a key/value pair from the database. * * @param key * @param val * @param op * @throws lmdb::error on failure */ bool get(lmdb::val& key, lmdb::val& val, const MDB_cursor_op op) { return lmdb::cursor_get(handle(), key, val, op); } /** * Retrieves a key/value pair from the database. 
* * @param key * @param val * @param op * @throws lmdb::error on failure */ bool get(std::string& key, std::string& val, const MDB_cursor_op op) { lmdb::val k{}, v{}; const bool found = get(k, v, op); if (found) { key.assign(k.data(), k.size()); val.assign(v.data(), v.size()); } return found; } /** * Positions this cursor at the given key. * * @param key * @param op * @throws lmdb::error on failure */ template<typename K> bool find(const K& key, const MDB_cursor_op op = MDB_SET) { lmdb::val k{&key, sizeof(K)}; return get(k, nullptr, op); } }; //////////////////////////////////////////////////////////////////////////////// #endif /* LMDBXX_H */
Kudos to Howard Chu <hyc@openldap.org> and Symas Corp <http://symas.com/> for developing LMDB and making it freely available as open-source software.
# Doxyfile 1.8.9.1 # This file describes the settings to be used by the documentation system # doxygen (www.doxygen.org) for a project. # # All text after a double hash (##) is considered a comment and is placed in # front of the TAG it is preceding. # # All text after a single hash (#) is considered a comment and will be ignored. # The format is: # TAG = value [value, ...] # For lists, items can also be appended using: # TAG += value [value, ...] # Values that contain spaces should be placed between quotes (\" \"). #--------------------------------------------------------------------------- # Project related configuration options #--------------------------------------------------------------------------- # This tag specifies the encoding used for all characters in the config file # that follow. The default is UTF-8 which is also the encoding used for all text # before the first occurrence of this tag. Doxygen uses libiconv (or the iconv # built into libc) for the transcoding. See http://www.gnu.org/software/libiconv # for the list of possible encodings. # The default value is: UTF-8. DOXYFILE_ENCODING = UTF-8 # The PROJECT_NAME tag is a single word (or a sequence of words surrounded by # double-quotes, unless you are using Doxywizard) that should identify the # project for which the documentation is generated. This name is used in the # title of most generated pages and in a few other places. # The default value is: My Project. PROJECT_NAME = "lmdb++" # The PROJECT_NUMBER tag can be used to enter a project or revision number. This # could be handy for archiving the generated documentation or if some version # control system is used. PROJECT_NUMBER = # Using the PROJECT_BRIEF tag one can provide an optional one line description # for a project that appears at the top of each page and should give viewer a # quick idea about the purpose of the project. Keep the description short. 
PROJECT_BRIEF = # With the PROJECT_LOGO tag one can specify a logo or an icon that is included # in the documentation. The maximum height of the logo should not exceed 55 # pixels and the maximum width should not exceed 200 pixels. Doxygen will copy # the logo to the output directory. PROJECT_LOGO = # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path # into which the generated documentation will be written. If a relative path is # entered, it will be relative to the location where doxygen was started. If # left blank the current directory will be used. OUTPUT_DIRECTORY = .doxygen # If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub- # directories (in 2 levels) under the output directory of each output format and # will distribute the generated files over these directories. Enabling this # option can be useful when feeding doxygen a huge amount of source files, where # putting all generated files in the same directory would otherwise causes # performance problems for the file system. # The default value is: NO. CREATE_SUBDIRS = NO # If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII # characters to appear in the names of generated files. If set to NO, non-ASCII # characters will be escaped, for example _xE3_x81_x84 will be used for Unicode # U+3044. # The default value is: NO. ALLOW_UNICODE_NAMES = NO # The OUTPUT_LANGUAGE tag is used to specify the language in which all # documentation generated by doxygen is written. Doxygen will use this # information to generate all constant output in the proper language. 
# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese, # Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States), # Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian, # Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages), # Korean, Korean-en (Korean with English messages), Latvian, Lithuanian, # Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian, # Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish, # Ukrainian and Vietnamese. # The default value is: English. OUTPUT_LANGUAGE = English # If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member # descriptions after the members that are listed in the file and class # documentation (similar to Javadoc). Set to NO to disable this. # The default value is: YES. BRIEF_MEMBER_DESC = YES # If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief # description of a member or function before the detailed description # # Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the # brief descriptions will be completely suppressed. # The default value is: YES. REPEAT_BRIEF = YES # This tag implements a quasi-intelligent brief description abbreviator that is # used to form the text in various listings. Each string in this list, if found # as the leading text of the brief description, will be stripped from the text # and the result, after processing the whole list, is used as the annotated # text. Otherwise, the brief description is used as-is. If left blank, the # following values are used ($name is automatically replaced with the name of # the entity):The $name class, The $name widget, The $name file, is, provides, # specifies, contains, represents, a, an and the. ABBREVIATE_BRIEF = # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then # doxygen will generate a detailed section even if there is only a brief # description. 
# The default value is: NO. ALWAYS_DETAILED_SEC = YES # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all # inherited members of a class in the documentation of that class as if those # members were ordinary class members. Constructors, destructors and assignment # operators of the base classes will not be shown. # The default value is: NO. INLINE_INHERITED_MEMB = NO # If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path # before files name in the file list and in the header files. If set to NO the # shortest path that makes the file name unique will be used # The default value is: YES. FULL_PATH_NAMES = YES # The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path. # Stripping is only done if one of the specified strings matches the left-hand # part of the path. The tag can be used to show relative paths in the file list. # If left blank the directory from which doxygen is run is used as the path to # strip. # # Note that you can specify absolute paths here, but also relative paths, which # will be relative from the directory where doxygen is started. # This tag requires that the tag FULL_PATH_NAMES is set to YES. STRIP_FROM_PATH = # The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the # path mentioned in the documentation of a class, which tells the reader which # header file to include in order to use a class. If left blank only the name of # the header file containing the class definition is used. Otherwise one should # specify the list of include paths that are normally passed to the compiler # using the -I flag. STRIP_FROM_INC_PATH = # If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but # less readable) file names. This can be useful is your file systems doesn't # support long names like on DOS, Mac, or CD-ROM. # The default value is: NO. 
SHORT_NAMES = NO # If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the # first line (until the first dot) of a Javadoc-style comment as the brief # description. If set to NO, the Javadoc-style will behave just like regular Qt- # style comments (thus requiring an explicit @brief command for a brief # description.) # The default value is: NO. JAVADOC_AUTOBRIEF = YES # If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first # line (until the first dot) of a Qt-style comment as the brief description. If # set to NO, the Qt-style will behave just like regular Qt-style comments (thus # requiring an explicit \brief command for a brief description.) # The default value is: NO. QT_AUTOBRIEF = NO # The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a # multi-line C++ special comment block (i.e. a block of //! or /// comments) as # a brief description. This used to be the default behavior. The new default is # to treat a multi-line C++ comment block as a detailed description. Set this # tag to YES if you prefer the old behavior instead. # # Note that setting this tag to YES also means that rational rose comments are # not recognized any more. # The default value is: NO. MULTILINE_CPP_IS_BRIEF = NO # If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the # documentation from any documented member that it re-implements. # The default value is: YES. INHERIT_DOCS = YES # If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new # page for each member. If set to NO, the documentation of a member will be part # of the file/class/namespace that contains it. # The default value is: NO. SEPARATE_MEMBER_PAGES = NO # The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen # uses this value to replace tabs by spaces in code fragments. # Minimum value: 1, maximum value: 16, default value: 4. 
TAB_SIZE = 2 # This tag can be used to specify a number of aliases that act as commands in # the documentation. An alias has the form: # name=value # For example adding # "sideeffect=@par Side Effects:\n" # will allow you to put the command \sideeffect (or @sideeffect) in the # documentation, which will result in a user-defined paragraph with heading # "Side Effects:". You can put \n's in the value part of an alias to insert # newlines. ALIASES = # This tag can be used to specify a number of word-keyword mappings (TCL only). # A mapping has the form "name=value". For example adding "class=itcl::class" # will allow you to use the command class in the itcl::class meaning. TCL_SUBST = # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources # only. Doxygen will then generate output that is more tailored for C. For # instance, some of the names that are used will be different. The list of all # members will be omitted, etc. # The default value is: NO. OPTIMIZE_OUTPUT_FOR_C = NO # Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or # Python sources only. Doxygen will then generate output that is more tailored # for that language. For instance, namespaces will be presented as packages, # qualified scopes will look different, etc. # The default value is: NO. OPTIMIZE_OUTPUT_JAVA = NO # Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran # sources. Doxygen will then generate output that is tailored for Fortran. # The default value is: NO. OPTIMIZE_FOR_FORTRAN = NO # Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL # sources. Doxygen will then generate output that is tailored for VHDL. # The default value is: NO. OPTIMIZE_OUTPUT_VHDL = NO # Doxygen selects the parser to use depending on the extension of the files it # parses. With this tag you can assign which parser to use for a given # extension. 
Doxygen has a built-in mapping, but you can override or extend it # using this tag. The format is ext=language, where ext is a file extension, and # language is one of the parsers supported by doxygen: IDL, Java, Javascript, # C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran: # FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran: # Fortran. In the later case the parser tries to guess whether the code is fixed # or free formatted code, this is the default for Fortran type files), VHDL. For # instance to make doxygen treat .inc files as Fortran files (default is PHP), # and .f files as C (default is Fortran), use: inc=Fortran f=C. # # Note: For files without extension you can use no_extension as a placeholder. # # Note that for custom extensions you also need to set FILE_PATTERNS otherwise # the files are not read by doxygen. EXTENSION_MAPPING = # If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments # according to the Markdown format, which allows for more readable # documentation. See http://daringfireball.net/projects/markdown/ for details. # The output of markdown processing is further processed by doxygen, so you can # mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in # case of backward compatibilities issues. # The default value is: YES. MARKDOWN_SUPPORT = YES # When enabled doxygen tries to link words that correspond to documented # classes, or namespaces to their corresponding documentation. Such a link can # be prevented in individual cases by putting a % sign in front of the word or # globally by setting AUTOLINK_SUPPORT to NO. # The default value is: YES. AUTOLINK_SUPPORT = YES # If you use STL classes (i.e. std::string, std::vector, etc.) but do not want # to include (a tag file for) the STL sources as input, then you should set this # tag to YES in order to let doxygen match functions declarations and # definitions whose arguments contain STL classes (e.g. 
func(std::string); # versus func(std::string) {}). This also make the inheritance and collaboration # diagrams that involve STL classes more complete and accurate. # The default value is: NO. BUILTIN_STL_SUPPORT = NO # If you use Microsoft's C++/CLI language, you should set this option to YES to # enable parsing support. # The default value is: NO. CPP_CLI_SUPPORT = NO # Set the SIP_SUPPORT tag to YES if your project consists of sip (see: # http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen # will parse them like normal C++ but will assume all classes use public instead # of private inheritance when no explicit protection keyword is present. # The default value is: NO. SIP_SUPPORT = NO # For Microsoft's IDL there are propget and propput attributes to indicate # getter and setter methods for a property. Setting this option to YES will make # doxygen to replace the get and set methods by a property in the documentation. # This will only work if the methods are indeed getting or setting a simple # type. If this is not the case, or you want to show the methods anyway, you # should set this option to NO. # The default value is: YES. IDL_PROPERTY_SUPPORT = YES # If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC # tag is set to YES then doxygen will reuse the documentation of the first # member in the group (if any) for the other members of the group. By default # all members of a group must be documented explicitly. # The default value is: NO. DISTRIBUTE_GROUP_DOC = NO # Set the SUBGROUPING tag to YES to allow class member groups of the same type # (for instance a group of public functions) to be put as a subgroup of that # type (e.g. under the Public Functions section). Set it to NO to prevent # subgrouping. Alternatively, this can be done per class using the # \nosubgrouping command. # The default value is: YES. 
SUBGROUPING = YES # When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions # are shown inside the group in which they are included (e.g. using \ingroup) # instead of on a separate page (for HTML and Man pages) or section (for LaTeX # and RTF). # # Note that this feature does not work in combination with # SEPARATE_MEMBER_PAGES. # The default value is: NO. INLINE_GROUPED_CLASSES = NO # When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions # with only public data fields or simple typedef fields will be shown inline in # the documentation of the scope in which they are defined (i.e. file, # namespace, or group documentation), provided this scope is documented. If set # to NO, structs, classes, and unions are shown on a separate page (for HTML and # Man pages) or section (for LaTeX and RTF). # The default value is: NO. INLINE_SIMPLE_STRUCTS = NO # When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or # enum is documented as struct, union, or enum with the name of the typedef. So # typedef struct TypeS {} TypeT, will appear in the documentation as a struct # with name TypeT. When disabled the typedef will appear as a member of a file, # namespace, or class. And the struct will be named TypeS. This can typically be # useful for C code in case the coding convention dictates that all compound # types are typedef'ed and only the typedef is referenced, never the tag name. # The default value is: NO. TYPEDEF_HIDES_STRUCT = NO # The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This # cache is used to resolve symbols given their name and scope. Since this can be # an expensive process and often the same symbol appears multiple times in the # code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small # doxygen will become slower. If the cache is too large, memory is wasted. The # cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). 
The valid range # is 0..9, the default is 0, corresponding to a cache size of 2^16=65536 # symbols. At the end of a run doxygen will report the cache usage and suggest # the optimal cache size from a speed point of view. # Minimum value: 0, maximum value: 9, default value: 0. LOOKUP_CACHE_SIZE = 0 #--------------------------------------------------------------------------- # Build related configuration options #--------------------------------------------------------------------------- # If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in # documentation are documented, even if no documentation was available. Private # class members and static file members will be hidden unless the # EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES. # Note: This will also disable the warnings about undocumented members that are # normally produced when WARNINGS is set to YES. # The default value is: NO. EXTRACT_ALL = YES # If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will # be included in the documentation. # The default value is: NO. EXTRACT_PRIVATE = NO # If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal # scope will be included in the documentation. # The default value is: NO. EXTRACT_PACKAGE = NO # If the EXTRACT_STATIC tag is set to YES, all static members of a file will be # included in the documentation. # The default value is: NO. EXTRACT_STATIC = YES # If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined # locally in source files will be included in the documentation. If set to NO, # only classes defined in header files are included. Does not have any effect # for Java sources. # The default value is: YES. EXTRACT_LOCAL_CLASSES = NO # This flag is only useful for Objective-C code. If set to YES, local methods, # which are defined in the implementation section but not in the interface are # included in the documentation. 
If set to NO, only methods in the interface are # included. # The default value is: NO. EXTRACT_LOCAL_METHODS = NO # If this flag is set to YES, the members of anonymous namespaces will be # extracted and appear in the documentation as a namespace called # 'anonymous_namespace{file}', where file will be replaced with the base name of # the file that contains the anonymous namespace. By default anonymous namespace # are hidden. # The default value is: NO. EXTRACT_ANON_NSPACES = NO # If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all # undocumented members inside documented classes or files. If set to NO these # members will be included in the various overviews, but no documentation # section is generated. This option has no effect if EXTRACT_ALL is enabled. # The default value is: NO. HIDE_UNDOC_MEMBERS = NO # If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all # undocumented classes that are normally visible in the class hierarchy. If set # to NO, these classes will be included in the various overviews. This option # has no effect if EXTRACT_ALL is enabled. # The default value is: NO. HIDE_UNDOC_CLASSES = NO # If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend # (class|struct|union) declarations. If set to NO, these declarations will be # included in the documentation. # The default value is: NO. HIDE_FRIEND_COMPOUNDS = NO # If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any # documentation blocks found inside the body of a function. If set to NO, these # blocks will be appended to the function's detailed documentation block. # The default value is: NO. HIDE_IN_BODY_DOCS = NO # The INTERNAL_DOCS tag determines if documentation that is typed after a # \internal command is included. If the tag is set to NO then the documentation # will be excluded. Set it to YES to include the internal documentation. # The default value is: NO. 
INTERNAL_DOCS = NO # If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file # names in lower-case letters. If set to YES, upper-case letters are also # allowed. This is useful if you have classes or files whose names only differ # in case and if your file system supports case sensitive file names. Windows # and Mac users are advised to set this option to NO. # The default value is: system dependent. CASE_SENSE_NAMES = NO # If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with # their full class and namespace scopes in the documentation. If set to YES, the # scope will be hidden. # The default value is: NO. HIDE_SCOPE_NAMES = NO # If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will # append additional text to a page's title, such as Class Reference. If set to # YES the compound reference will be hidden. # The default value is: NO. HIDE_COMPOUND_REFERENCE= NO # If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of # the files that are included by a file in the documentation of that file. # The default value is: YES. SHOW_INCLUDE_FILES = YES # If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each # grouped member an include statement to the documentation, telling the reader # which file to include in order to use the member. # The default value is: NO. SHOW_GROUPED_MEMB_INC = NO # If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include # files with double quotes in the documentation rather than with sharp brackets. # The default value is: NO. FORCE_LOCAL_INCLUDES = NO # If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the # documentation for inline members. # The default value is: YES. INLINE_INFO = YES # If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the # (detailed) documentation of file and class members alphabetically by member # name. 
If set to NO, the members will appear in declaration order. # The default value is: YES. SORT_MEMBER_DOCS = YES # If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief # descriptions of file, namespace and class members alphabetically by member # name. If set to NO, the members will appear in declaration order. Note that # this will also influence the order of the classes in the class list. # The default value is: NO. SORT_BRIEF_DOCS = NO # If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the # (brief and detailed) documentation of class members so that constructors and # destructors are listed first. If set to NO the constructors will appear in the # respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS. # Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief # member documentation. # Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting # detailed member documentation. # The default value is: NO. SORT_MEMBERS_CTORS_1ST = NO # If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy # of group names into alphabetical order. If set to NO the group names will # appear in their defined order. # The default value is: NO. SORT_GROUP_NAMES = NO # If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by # fully-qualified names, including namespaces. If set to NO, the class list will # be sorted only by class name, not including the namespace part. # Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. # Note: This option applies only to the class list, not to the alphabetical # list. # The default value is: NO. 
SORT_BY_SCOPE_NAME = NO # If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper # type resolution of all parameters of a function it will reject a match between # the prototype and the implementation of a member function even if there is # only one candidate or it is obvious which candidate to choose by doing a # simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still # accept a match between prototype and implementation in such cases. # The default value is: NO. STRICT_PROTO_MATCHING = NO # The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo # list. This list is created by putting \todo commands in the documentation. # The default value is: YES. GENERATE_TODOLIST = YES # The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test # list. This list is created by putting \test commands in the documentation. # The default value is: YES. GENERATE_TESTLIST = YES # The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug # list. This list is created by putting \bug commands in the documentation. # The default value is: YES. GENERATE_BUGLIST = YES # The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO) # the deprecated list. This list is created by putting \deprecated commands in # the documentation. # The default value is: YES. GENERATE_DEPRECATEDLIST= YES # The ENABLED_SECTIONS tag can be used to enable conditional documentation # sections, marked by \if <section_label> ... \endif and \cond <section_label> # ... \endcond blocks. ENABLED_SECTIONS = # The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the # initial value of a variable or macro / define can have for it to appear in the # documentation. If the initializer consists of more lines than specified here # it will be hidden. Use a value of 0 to hide initializers completely. 
The # appearance of the value of individual variables and macros / defines can be # controlled using \showinitializer or \hideinitializer command in the # documentation regardless of this setting. # Minimum value: 0, maximum value: 10000, default value: 30. MAX_INITIALIZER_LINES = 30 # Set the SHOW_USED_FILES tag to NO to disable the list of files generated at # the bottom of the documentation of classes and structs. If set to YES, the # list will mention the files that were used to generate the documentation. # The default value is: YES. SHOW_USED_FILES = YES # Set the SHOW_FILES tag to NO to disable the generation of the Files page. This # will remove the Files entry from the Quick Index and from the Folder Tree View # (if specified). # The default value is: YES. SHOW_FILES = YES # Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces # page. This will remove the Namespaces entry from the Quick Index and from the # Folder Tree View (if specified). # The default value is: YES. SHOW_NAMESPACES = YES # The FILE_VERSION_FILTER tag can be used to specify a program or script that # doxygen should invoke to get the current version for each file (typically from # the version control system). Doxygen will invoke the program by executing (via # popen()) the command command input-file, where command is the value of the # FILE_VERSION_FILTER tag, and input-file is the name of an input file provided # by doxygen. Whatever the program writes to standard output is used as the file # version. For an example see the documentation. FILE_VERSION_FILTER = # The LAYOUT_FILE tag can be used to specify a layout file which will be parsed # by doxygen. The layout file controls the global structure of the generated # output files in an output format independent way. To create the layout file # that represents doxygen's defaults, run doxygen with the -l option. 
You can # optionally specify a file name after the option, if omitted DoxygenLayout.xml # will be used as the name of the layout file. # # Note that if you run doxygen from a directory containing a file called # DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE # tag is left empty. LAYOUT_FILE = # The CITE_BIB_FILES tag can be used to specify one or more bib files containing # the reference definitions. This must be a list of .bib files. The .bib # extension is automatically appended if omitted. This requires the bibtex tool # to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info. # For LaTeX the style of the bibliography can be controlled using # LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the # search path. See also \cite for info how to create references. CITE_BIB_FILES = #--------------------------------------------------------------------------- # Configuration options related to warning and progress messages #--------------------------------------------------------------------------- # The QUIET tag can be used to turn on/off the messages that are generated to # standard output by doxygen. If QUIET is set to YES this implies that the # messages are off. # The default value is: NO. QUIET = YES # The WARNINGS tag can be used to turn on/off the warning messages that are # generated to standard error (stderr) by doxygen. If WARNINGS is set to YES # this implies that the warnings are on. # # Tip: Turn warnings on while writing the documentation. # The default value is: YES. WARNINGS = YES # If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate # warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag # will automatically be disabled. # The default value is: YES. 
WARN_IF_UNDOCUMENTED = YES

# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
# potential errors in the documentation, such as not documenting some parameters
# in a documented function, or documenting parameters that don't exist or using
# markup commands wrongly.
# The default value is: YES.

WARN_IF_DOC_ERROR = YES

# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
# are documented, but have no documentation for their parameters or return
# value. If set to NO, doxygen will only warn about wrong or incomplete
# parameter documentation, but not about the absence of documentation.
# The default value is: NO.

WARN_NO_PARAMDOC = NO

# The WARN_FORMAT tag determines the format of the warning messages that doxygen
# can produce. The string should contain the $file, $line, and $text tags, which
# will be replaced by the file and line number from which the warning originated
# and the warning text. Optionally the format may contain $version, which will
# be replaced by the version of the file (if it could be obtained via
# FILE_VERSION_FILTER)
# The default value is: $file:$line: $text.

WARN_FORMAT = "$file:$line: $text"

# The WARN_LOGFILE tag can be used to specify a file to which warning and error
# messages should be written. If left blank the output is written to standard
# error (stderr).

WARN_LOGFILE =

#---------------------------------------------------------------------------
# Configuration options related to the input files
#---------------------------------------------------------------------------

# The INPUT tag is used to specify the files and/or directories that contain
# documented source files. You may enter file names like myfile.cpp or
# directories like /usr/src/myproject. Separate the files or directories with
# spaces.
# Note: If this tag is empty the current directory is searched.
INPUT = README.md lmdb++.h # This tag can be used to specify the character encoding of the source files # that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses # libiconv (or the iconv built into libc) for the transcoding. See the libiconv # documentation (see: http://www.gnu.org/software/libiconv) for the list of # possible encodings. # The default value is: UTF-8. INPUT_ENCODING = UTF-8 # If the value of the INPUT tag contains directories, you can use the # FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and # *.h) to filter out the source-files in the directories. If left blank the # following patterns are tested:*.c, *.cc, *.cxx, *.cpp, *.c++, *.java, *.ii, # *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, *.hh, *.hxx, *.hpp, # *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, *.m, *.markdown, # *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf, # *.qsf, *.as and *.js. FILE_PATTERNS = # The RECURSIVE tag can be used to specify whether or not subdirectories should # be searched for input files as well. # The default value is: NO. RECURSIVE = NO # The EXCLUDE tag can be used to specify files and/or directories that should be # excluded from the INPUT source files. This way you can easily exclude a # subdirectory from a directory tree whose root is specified with the INPUT tag. # # Note that relative paths are relative to the directory from which doxygen is # run. EXCLUDE = # The EXCLUDE_SYMLINKS tag can be used to select whether or not files or # directories that are symbolic links (a Unix file system feature) are excluded # from the input. # The default value is: NO. EXCLUDE_SYMLINKS = NO # If the value of the INPUT tag contains directories, you can use the # EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude # certain files from those directories. 
# # Note that the wildcards are matched against the file with absolute path, so to # exclude all test directories for example use the pattern */test/* EXCLUDE_PATTERNS = # The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names # (namespaces, classes, functions, etc.) that should be excluded from the # output. The symbol name can be a fully qualified name, a word, or if the # wildcard * is used, a substring. Examples: ANamespace, AClass, # AClass::ANamespace, ANamespace::*Test # # Note that the wildcards are matched against the file with absolute path, so to # exclude all test directories use the pattern */test/* EXCLUDE_SYMBOLS = # The EXAMPLE_PATH tag can be used to specify one or more files or directories # that contain example code fragments that are included (see the \include # command). EXAMPLE_PATH = # If the value of the EXAMPLE_PATH tag contains directories, you can use the # EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and # *.h) to filter out the source-files in the directories. If left blank all # files are included. EXAMPLE_PATTERNS = # If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be # searched for input files to be used with the \include or \dontinclude commands # irrespective of the value of the RECURSIVE tag. # The default value is: NO. EXAMPLE_RECURSIVE = NO # The IMAGE_PATH tag can be used to specify one or more files or directories # that contain images that are to be included in the documentation (see the # \image command). IMAGE_PATH = # The INPUT_FILTER tag can be used to specify a program that doxygen should # invoke to filter for each input file. Doxygen will invoke the filter program # by executing (via popen()) the command: # # <filter> <input-file> # # where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the # name of an input file. Doxygen will then use the output that the filter # program writes to standard output. 
If FILTER_PATTERNS is specified, this tag # will be ignored. # # Note that the filter must not add or remove lines; it is applied before the # code is scanned, but not when the output code is generated. If lines are added # or removed, the anchors will not be placed correctly. INPUT_FILTER = # The FILTER_PATTERNS tag can be used to specify filters on a per file pattern # basis. Doxygen will compare the file name with each pattern and apply the # filter if there is a match. The filters are a list of the form: pattern=filter # (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how # filters are used. If the FILTER_PATTERNS tag is empty or if none of the # patterns match the file name, INPUT_FILTER is applied. FILTER_PATTERNS = # If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using # INPUT_FILTER) will also be used to filter the input files that are used for # producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES). # The default value is: NO. FILTER_SOURCE_FILES = NO # The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file # pattern. A pattern will override the setting for FILTER_PATTERN (if any) and # it is also possible to disable source filtering for a specific pattern using # *.ext= (so without naming a filter). # This tag requires that the tag FILTER_SOURCE_FILES is set to YES. FILTER_SOURCE_PATTERNS = # If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that # is part of the input, its contents will be placed on the main page # (index.html). This can be useful if you have a project on for instance GitHub # and want to reuse the introduction page also for the doxygen output. 
USE_MDFILE_AS_MAINPAGE = README.md #--------------------------------------------------------------------------- # Configuration options related to source browsing #--------------------------------------------------------------------------- # If the SOURCE_BROWSER tag is set to YES then a list of source files will be # generated. Documented entities will be cross-referenced with these sources. # # Note: To get rid of all source code in the generated output, make sure that # also VERBATIM_HEADERS is set to NO. # The default value is: NO. SOURCE_BROWSER = NO # Setting the INLINE_SOURCES tag to YES will include the body of functions, # classes and enums directly into the documentation. # The default value is: NO. INLINE_SOURCES = NO # Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any # special comment blocks from generated source code fragments. Normal C, C++ and # Fortran comments will always remain visible. # The default value is: YES. STRIP_CODE_COMMENTS = YES # If the REFERENCED_BY_RELATION tag is set to YES then for each documented # function all documented functions referencing it will be listed. # The default value is: NO. REFERENCED_BY_RELATION = NO # If the REFERENCES_RELATION tag is set to YES then for each documented function # all documented entities called/used by that function will be listed. # The default value is: NO. REFERENCES_RELATION = NO # If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set # to YES then the hyperlinks from functions in REFERENCES_RELATION and # REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will # link to the documentation. # The default value is: YES. REFERENCES_LINK_SOURCE = YES # If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the # source code will show a tooltip with additional information such as prototype, # brief description and links to the definition and documentation. 
Since this
# will make the HTML file larger and loading of large files a bit slower, you
# can opt to disable this feature.
# The default value is: YES.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

SOURCE_TOOLTIPS = YES

# If the USE_HTAGS tag is set to YES then the references to source code will
# point to the HTML generated by the htags(1) tool instead of doxygen built-in
# source browser. The htags tool is part of GNU's global source tagging system
# (see http://www.gnu.org/software/global/global.html). You will need version
# 4.8.6 or higher.
#
# To use it do the following:
# - Install the latest version of global
# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
# - Make sure the INPUT points to the root of the source tree
# - Run doxygen as normal
#
# Doxygen will invoke htags (and that will in turn invoke gtags), so these
# tools must be available from the command line (i.e. in the search path).
#
# The result: instead of the source browser generated by doxygen, the links to
# source code will now point to the output of htags.
# The default value is: NO.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

USE_HTAGS = NO

# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
# verbatim copy of the header file for each class for which an include is
# specified. Set to NO to disable this.
# See also: Section \class.
# The default value is: YES.

VERBATIM_HEADERS = YES

#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
#---------------------------------------------------------------------------

# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
# compounds will be generated. Enable this if the project contains a lot of
# classes, structs, unions or interfaces.
# The default value is: YES.
ALPHABETICAL_INDEX = YES # The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in # which the alphabetical index list will be split. # Minimum value: 1, maximum value: 20, default value: 5. # This tag requires that the tag ALPHABETICAL_INDEX is set to YES. COLS_IN_ALPHA_INDEX = 5 # In case all classes in a project start with a common prefix, all classes will # be put under the same header in the alphabetical index. The IGNORE_PREFIX tag # can be used to specify a prefix (or a list of prefixes) that should be ignored # while generating the index headers. # This tag requires that the tag ALPHABETICAL_INDEX is set to YES. IGNORE_PREFIX = #--------------------------------------------------------------------------- # Configuration options related to the HTML output #--------------------------------------------------------------------------- # If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output # The default value is: YES. GENERATE_HTML = YES # The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a # relative path is entered the value of OUTPUT_DIRECTORY will be put in front of # it. # The default directory is: html. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_OUTPUT = html # The HTML_FILE_EXTENSION tag can be used to specify the file extension for each # generated HTML page (for example: .htm, .php, .asp). # The default value is: .html. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_FILE_EXTENSION = .html # The HTML_HEADER tag can be used to specify a user-defined HTML header file for # each generated HTML page. If the tag is left blank doxygen will generate a # standard header. # # To get valid HTML the header file that includes any scripts and style sheets # that doxygen needs, which is dependent on the configuration options used (e.g. # the setting GENERATE_TREEVIEW). 
It is highly recommended to start with a # default header using # doxygen -w html new_header.html new_footer.html new_stylesheet.css # YourConfigFile # and then modify the file new_header.html. See also section "Doxygen usage" # for information on how to generate the default header that doxygen normally # uses. # Note: The header is subject to change so you typically have to regenerate the # default header when upgrading to a newer version of doxygen. For a description # of the possible markers and block names see the documentation. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_HEADER = # The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each # generated HTML page. If the tag is left blank doxygen will generate a standard # footer. See HTML_HEADER for more information on how to generate a default # footer and what special commands can be used inside the footer. See also # section "Doxygen usage" for information on how to generate the default footer # that doxygen normally uses. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_FOOTER = # The HTML_STYLESHEET tag can be used to specify a user-defined cascading style # sheet that is used by each HTML page. It can be used to fine-tune the look of # the HTML output. If left blank doxygen will generate a default style sheet. # See also section "Doxygen usage" for information on how to generate the style # sheet that doxygen normally uses. # Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as # it is more robust and this tag (HTML_STYLESHEET) will in the future become # obsolete. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_STYLESHEET = # The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined # cascading style sheets that are included after the standard style sheets # created by doxygen. Using this option one can overrule certain style aspects. 
# This is preferred over using HTML_STYLESHEET since it does not replace the # standard style sheet and is therefore more robust against future updates. # Doxygen will copy the style sheet files to the output directory. # Note: The order of the extra style sheet files is of importance (e.g. the last # style sheet in the list overrules the setting of the previous ones in the # list). For an example see the documentation. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_EXTRA_STYLESHEET = # The HTML_EXTRA_FILES tag can be used to specify one or more extra images or # other source files which should be copied to the HTML output directory. Note # that these files will be copied to the base HTML output directory. Use the # $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these # files. In the HTML_STYLESHEET file, use the file name only. Also note that the # files will be copied as-is; there are no commands or markers available. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_EXTRA_FILES = # The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen # will adjust the colors in the style sheet and background images according to # this color. Hue is specified as an angle on a colorwheel, see # http://en.wikipedia.org/wiki/Hue for more information. For instance the value # 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300 # purple, and 360 is red again. # Minimum value: 0, maximum value: 359, default value: 220. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_COLORSTYLE_HUE = 220 # The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors # in the HTML output. For a value of 0 the output will use grayscales only. A # value of 255 will produce the most vivid colors. # Minimum value: 0, maximum value: 255, default value: 100. # This tag requires that the tag GENERATE_HTML is set to YES. 
HTML_COLORSTYLE_SAT = 100 # The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the # luminance component of the colors in the HTML output. Values below 100 # gradually make the output lighter, whereas values above 100 make the output # darker. The value divided by 100 is the actual gamma applied, so 80 represents # a gamma of 0.8, The value 220 represents a gamma of 2.2, and 100 does not # change the gamma. # Minimum value: 40, maximum value: 240, default value: 80. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_COLORSTYLE_GAMMA = 80 # If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML # page will contain the date and time when the page was generated. Setting this # to NO can help when comparing the output of multiple runs. # The default value is: YES. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_TIMESTAMP = YES # If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML # documentation will contain sections that can be hidden and shown after the # page has loaded. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_DYNAMIC_SECTIONS = NO # With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries # shown in the various tree structured indices initially; the user can expand # and collapse entries dynamically later on. Doxygen will expand the tree to # such a level that at most the specified number of entries are visible (unless # a fully collapsed tree already exceeds this amount). So setting the number of # entries 1 will produce a full collapsed tree by default. 0 is a special value # representing an infinite number of entries and will result in a full expanded # tree by default. # Minimum value: 0, maximum value: 9999, default value: 100. # This tag requires that the tag GENERATE_HTML is set to YES. 
HTML_INDEX_NUM_ENTRIES = 100 # If the GENERATE_DOCSET tag is set to YES, additional index files will be # generated that can be used as input for Apple's Xcode 3 integrated development # environment (see: http://developer.apple.com/tools/xcode/), introduced with # OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a # Makefile in the HTML output directory. Running make will produce the docset in # that directory and running make install will install the docset in # ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at # startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html # for more information. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_DOCSET = NO # This tag determines the name of the docset feed. A documentation feed provides # an umbrella under which multiple documentation sets from a single provider # (such as a company or product suite) can be grouped. # The default value is: Doxygen generated docs. # This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_FEEDNAME = "Doxygen generated docs" # This tag specifies a string that should uniquely identify the documentation # set bundle. This should be a reverse domain-name style string, e.g. # com.mycompany.MyDocSet. Doxygen will append .docset to the name. # The default value is: org.doxygen.Project. # This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_BUNDLE_ID = org.doxygen.Project # The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify # the documentation publisher. This should be a reverse domain-name style # string, e.g. com.mycompany.MyDocSet.documentation. # The default value is: org.doxygen.Publisher. # This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_PUBLISHER_ID = org.doxygen.Publisher # The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher. # The default value is: Publisher. 
# This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_PUBLISHER_NAME = Publisher # If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three # additional HTML index files: index.hhp, index.hhc, and index.hhk. The # index.hhp is a project file that can be read by Microsoft's HTML Help Workshop # (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on # Windows. # # The HTML Help Workshop contains a compiler that can convert all HTML output # generated by doxygen into a single compiled HTML file (.chm). Compiled HTML # files are now used as the Windows 98 help format, and will replace the old # Windows help format (.hlp) on all Windows platforms in the future. Compressed # HTML files also contain an index, a table of contents, and you can search for # words in the documentation. The HTML workshop also contains a viewer for # compressed HTML files. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_HTMLHELP = NO # The CHM_FILE tag can be used to specify the file name of the resulting .chm # file. You can add a path in front of the file if the result should not be # written to the html output directory. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. CHM_FILE = # The HHC_LOCATION tag can be used to specify the location (absolute path # including file name) of the HTML help compiler (hhc.exe). If non-empty, # doxygen will try to run the HTML help compiler on the generated index.hhp. # The file has to be specified with full path. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. HHC_LOCATION = # The GENERATE_CHI flag controls if a separate .chi index file is generated # (YES) or that it should be included in the master .chm file (NO). # The default value is: NO. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. 
GENERATE_CHI = NO # The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc) # and project file content. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. CHM_INDEX_ENCODING = # The BINARY_TOC flag controls whether a binary table of contents is generated # (YES) or a normal table of contents (NO) in the .chm file. Furthermore it # enables the Previous and Next buttons. # The default value is: NO. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. BINARY_TOC = NO # The TOC_EXPAND flag can be set to YES to add extra items for group members to # the table of contents of the HTML help documentation and to the tree view. # The default value is: NO. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. TOC_EXPAND = NO # If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and # QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that # can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help # (.qch) of the generated HTML documentation. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_QHP = NO # If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify # the file name of the resulting .qch file. The path specified is relative to # the HTML output folder. # This tag requires that the tag GENERATE_QHP is set to YES. QCH_FILE = # The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help # Project output. For more information please see Qt Help Project / Namespace # (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace). # The default value is: org.doxygen.Project. # This tag requires that the tag GENERATE_QHP is set to YES. QHP_NAMESPACE = org.doxygen.Project # The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt # Help Project output. 
For more information please see Qt Help Project / Virtual # Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual- # folders). # The default value is: doc. # This tag requires that the tag GENERATE_QHP is set to YES. QHP_VIRTUAL_FOLDER = doc # If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom # filter to add. For more information please see Qt Help Project / Custom # Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom- # filters). # This tag requires that the tag GENERATE_QHP is set to YES. QHP_CUST_FILTER_NAME = # The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the # custom filter to add. For more information please see Qt Help Project / Custom # Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom- # filters). # This tag requires that the tag GENERATE_QHP is set to YES. QHP_CUST_FILTER_ATTRS = # The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this # project's filter section matches. Qt Help Project / Filter Attributes (see: # http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes). # This tag requires that the tag GENERATE_QHP is set to YES. QHP_SECT_FILTER_ATTRS = # The QHG_LOCATION tag can be used to specify the location of Qt's # qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the # generated .qhp file. # This tag requires that the tag GENERATE_QHP is set to YES. QHG_LOCATION = # If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be # generated, together with the HTML files, they form an Eclipse help plugin. To # install this plugin and make it available under the help contents menu in # Eclipse, the contents of the directory containing the HTML and XML files needs # to be copied into the plugins directory of eclipse. The name of the directory # within the plugins directory should be the same as the ECLIPSE_DOC_ID value. 
# After copying Eclipse needs to be restarted before the help appears. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_ECLIPSEHELP = NO # A unique identifier for the Eclipse help plugin. When installing the plugin # the directory name containing the HTML and XML files should also have this # name. Each documentation set should have its own identifier. # The default value is: org.doxygen.Project. # This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES. ECLIPSE_DOC_ID = org.doxygen.Project # If you want full control over the layout of the generated HTML pages it might # be necessary to disable the index and replace it with your own. The # DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top # of each HTML page. A value of NO enables the index and the value YES disables # it. Since the tabs in the index contain the same information as the navigation # tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. DISABLE_INDEX = NO # The GENERATE_TREEVIEW tag is used to specify whether a tree-like index # structure should be generated to display hierarchical information. If the tag # value is set to YES, a side panel will be generated containing a tree-like # index structure (just like the one that is generated for HTML Help). For this # to work a browser that supports JavaScript, DHTML, CSS and frames is required # (i.e. any modern browser). Windows users are probably better off using the # HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can # further fine-tune the look of the index. As an example, the default style # sheet generated by doxygen has an example that shows how to put an image at # the root of the tree instead of the PROJECT_NAME. 
Since the tree basically has
# the same information as the tab index, you could consider setting
# DISABLE_INDEX to YES when enabling this option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_TREEVIEW = NO

# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
# doxygen will group on one line in the generated HTML documentation.
#
# Note that a value of 0 will completely suppress the enum values from appearing
# in the overview section.
# Minimum value: 0, maximum value: 20, default value: 4.
# This tag requires that the tag GENERATE_HTML is set to YES.

ENUM_VALUES_PER_LINE = 4

# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
# to set the initial width (in pixels) of the frame in which the tree is shown.
# Minimum value: 0, maximum value: 1500, default value: 250.
# This tag requires that the tag GENERATE_HTML is set to YES.

TREEVIEW_WIDTH = 250

# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
# external symbols imported via tag files in a separate window.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

EXT_LINKS_IN_WINDOW = NO

# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
# output directory to force them to be regenerated.
# Minimum value: 8, maximum value: 50, default value: 10.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_FONTSIZE = 10

# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
# generated for formulas are transparent PNGs. Transparent PNGs are not
# supported properly for IE 6.0, but are supported on all modern browsers.
#
# Note that when changing this option you need to delete any form_*.png files in
# the HTML output directory before the changes have effect.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_TRANSPARENT = YES

# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# http://www.mathjax.org) which uses client side Javascript for the rendering
# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
# installed or if you want the formulas to look prettier in the HTML output.
# When enabled you may also need to install MathJax separately and configure
# the path to it using the MATHJAX_RELPATH option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

USE_MATHJAX = NO

# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
# http://docs.mathjax.org/en/latest/output.html) for more details.
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility), NativeMML (i.e. MathML) and SVG.
# The default value is: HTML-CSS.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_FORMAT = HTML-CSS

# When MathJax is enabled you need to specify the location relative to the HTML
# output directory using the MATHJAX_RELPATH option. The destination directory
# should contain the MathJax.js script. For instance, if the mathjax directory
# is located at the same level as the HTML output directory, then
# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
# Content Delivery Network so you can quickly see the result without installing
# MathJax. However, it is strongly recommended to install a local copy of
# MathJax from http://www.mathjax.org before deployment.
# The default value is: http://cdn.mathjax.org/mathjax/latest.
# This tag requires that the tag USE_MATHJAX is set to YES.
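#
# For example (illustrative paths), to serve formulas from a locally installed
# MathJax copy placed next to the HTML output directory, one might set:
#
#   USE_MATHJAX     = YES
#   MATHJAX_RELPATH = ../mathjax
#
# The ../mathjax location is an assumption; point MATHJAX_RELPATH at wherever
# the directory containing MathJax.js actually resides relative to the HTML
# output.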
MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest # The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax # extension names that should be enabled during MathJax rendering. For example # MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols # This tag requires that the tag USE_MATHJAX is set to YES. MATHJAX_EXTENSIONS = # The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces # of code that will be used on startup of the MathJax code. See the MathJax site # (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an # example see the documentation. # This tag requires that the tag USE_MATHJAX is set to YES. MATHJAX_CODEFILE = # When the SEARCHENGINE tag is enabled doxygen will generate a search box for # the HTML output. The underlying search engine uses javascript and DHTML and # should work on any modern browser. Note that when using HTML help # (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET) # there is already a search function so this one should typically be disabled. # For large projects the javascript based search engine can be slow, then # enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to # search using the keyboard; to jump to the search box use <access key> + S # (what the <access key> is depends on the OS and browser, but it is typically # <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down # key> to jump into the search results window, the results can be navigated # using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel # the search. The filter options can be selected when the cursor is inside the # search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys> # to select a filter and <Enter> or <escape> to activate or cancel the filter # option. # The default value is: YES. # This tag requires that the tag GENERATE_HTML is set to YES. 
SEARCHENGINE = YES # When the SERVER_BASED_SEARCH tag is enabled the search engine will be # implemented using a web server instead of a web client using Javascript. There # are two flavors of web server based searching depending on the EXTERNAL_SEARCH # setting. When disabled, doxygen will generate a PHP script for searching and # an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing # and searching needs to be provided by external tools. See the section # "External Indexing and Searching" for details. # The default value is: NO. # This tag requires that the tag SEARCHENGINE is set to YES. SERVER_BASED_SEARCH = NO # When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP # script for searching. Instead the search results are written to an XML file # which needs to be processed by an external indexer. Doxygen will invoke an # external search engine pointed to by the SEARCHENGINE_URL option to obtain the # search results. # # Doxygen ships with an example indexer (doxyindexer) and search engine # (doxysearch.cgi) which are based on the open source search engine library # Xapian (see: http://xapian.org/). # # See the section "External Indexing and Searching" for details. # The default value is: NO. # This tag requires that the tag SEARCHENGINE is set to YES. EXTERNAL_SEARCH = NO # The SEARCHENGINE_URL should point to a search engine hosted by a web server # which will return the search results when EXTERNAL_SEARCH is enabled. # # Doxygen ships with an example indexer (doxyindexer) and search engine # (doxysearch.cgi) which are based on the open source search engine library # Xapian (see: http://xapian.org/). See the section "External Indexing and # Searching" for details. # This tag requires that the tag SEARCHENGINE is set to YES. SEARCHENGINE_URL = # When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed # search data is written to a file for indexing by an external tool. 
With the
# SEARCHDATA_FILE tag the name of this file can be specified.
# The default file is: searchdata.xml.
# This tag requires that the tag SEARCHENGINE is set to YES.

SEARCHDATA_FILE = searchdata.xml

# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
# projects and redirect the results back to the right project.
# This tag requires that the tag SEARCHENGINE is set to YES.

EXTERNAL_SEARCH_ID =

# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
# projects other than the one defined by this configuration file, but that are
# all added to the same external search index. Each project needs to have a
# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
# a project to a relative location where the documentation can be found. The
# format is:
# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
# This tag requires that the tag SEARCHENGINE is set to YES.

EXTRA_SEARCH_MAPPINGS =

#---------------------------------------------------------------------------
# Configuration options related to the LaTeX output
#---------------------------------------------------------------------------

# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
# The default value is: YES.

GENERATE_LATEX = NO

# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: latex.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_OUTPUT = latex

# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
# invoked.
# # Note that when enabling USE_PDFLATEX this option is only used for generating # bitmaps for formulas in the HTML output, but not in the Makefile that is # written to the output directory. # The default file is: latex. # This tag requires that the tag GENERATE_LATEX is set to YES. LATEX_CMD_NAME = latex # The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate # index for LaTeX. # The default file is: makeindex. # This tag requires that the tag GENERATE_LATEX is set to YES. MAKEINDEX_CMD_NAME = makeindex # If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX # documents. This may be useful for small projects and may help to save some # trees in general. # The default value is: NO. # This tag requires that the tag GENERATE_LATEX is set to YES. COMPACT_LATEX = NO # The PAPER_TYPE tag can be used to set the paper type that is used by the # printer. # Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x # 14 inches) and executive (7.25 x 10.5 inches). # The default value is: a4. # This tag requires that the tag GENERATE_LATEX is set to YES. PAPER_TYPE = a4 # The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names # that should be included in the LaTeX output. To get the times font for # instance you can specify # EXTRA_PACKAGES=times # If left blank no extra packages will be included. # This tag requires that the tag GENERATE_LATEX is set to YES. EXTRA_PACKAGES = # The LATEX_HEADER tag can be used to specify a personal LaTeX header for the # generated LaTeX document. The header should contain everything until the first # chapter. If it is left blank doxygen will generate a standard header. See # section "Doxygen usage" for information on how to let doxygen write the # default header to a separate file. # # Note: Only use a user-defined header if you know what you are doing! 
The # following commands have a special meaning inside the header: $title, # $datetime, $date, $doxygenversion, $projectname, $projectnumber, # $projectbrief, $projectlogo. Doxygen will replace $title with the empty # string, for the replacement values of the other commands the user is referred # to HTML_HEADER. # This tag requires that the tag GENERATE_LATEX is set to YES. LATEX_HEADER = # The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the # generated LaTeX document. The footer should contain everything after the last # chapter. If it is left blank doxygen will generate a standard footer. See # LATEX_HEADER for more information on how to generate a default footer and what # special commands can be used inside the footer. # # Note: Only use a user-defined footer if you know what you are doing! # This tag requires that the tag GENERATE_LATEX is set to YES. LATEX_FOOTER = # The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined # LaTeX style sheets that are included after the standard style sheets created # by doxygen. Using this option one can overrule certain style aspects. Doxygen # will copy the style sheet files to the output directory. # Note: The order of the extra style sheet files is of importance (e.g. the last # style sheet in the list overrules the setting of the previous ones in the # list). # This tag requires that the tag GENERATE_LATEX is set to YES. LATEX_EXTRA_STYLESHEET = # The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or # other source files which should be copied to the LATEX_OUTPUT output # directory. Note that the files will be copied as-is; there are no commands or # markers available. # This tag requires that the tag GENERATE_LATEX is set to YES. LATEX_EXTRA_FILES = # If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is # prepared for conversion to PDF (using ps2pdf or pdflatex). 
The PDF file will # contain links (just like the HTML output) instead of page references. This # makes the output suitable for online browsing using a PDF viewer. # The default value is: YES. # This tag requires that the tag GENERATE_LATEX is set to YES. PDF_HYPERLINKS = YES # If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate # the PDF file directly from the LaTeX files. Set this option to YES, to get a # higher quality PDF documentation. # The default value is: YES. # This tag requires that the tag GENERATE_LATEX is set to YES. USE_PDFLATEX = YES # If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode # command to the generated LaTeX files. This will instruct LaTeX to keep running # if errors occur, instead of asking the user for help. This option is also used # when generating formulas in HTML. # The default value is: NO. # This tag requires that the tag GENERATE_LATEX is set to YES. LATEX_BATCHMODE = NO # If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the # index chapters (such as File Index, Compound Index, etc.) in the output. # The default value is: NO. # This tag requires that the tag GENERATE_LATEX is set to YES. LATEX_HIDE_INDICES = NO # If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source # code with syntax highlighting in the LaTeX output. # # Note that which sources are shown also depends on other settings such as # SOURCE_BROWSER. # The default value is: NO. # This tag requires that the tag GENERATE_LATEX is set to YES. LATEX_SOURCE_CODE = NO # The LATEX_BIB_STYLE tag can be used to specify the style to use for the # bibliography, e.g. plainnat, or ieeetr. See # http://en.wikipedia.org/wiki/BibTeX and \cite for more info. # The default value is: plain. # This tag requires that the tag GENERATE_LATEX is set to YES. 
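#
# For example (illustrative choice), to typeset the bibliography with the
# natbib author-year style mentioned above, one might set:
#
#   LATEX_BIB_STYLE = plainnat
#
# Any BibTeX style installed in your TeX distribution can be named here.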
LATEX_BIB_STYLE = plain #--------------------------------------------------------------------------- # Configuration options related to the RTF output #--------------------------------------------------------------------------- # If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The # RTF output is optimized for Word 97 and may not look too pretty with other RTF # readers/editors. # The default value is: NO. GENERATE_RTF = NO # The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a # relative path is entered the value of OUTPUT_DIRECTORY will be put in front of # it. # The default directory is: rtf. # This tag requires that the tag GENERATE_RTF is set to YES. RTF_OUTPUT = rtf # If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF # documents. This may be useful for small projects and may help to save some # trees in general. # The default value is: NO. # This tag requires that the tag GENERATE_RTF is set to YES. COMPACT_RTF = NO # If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will # contain hyperlink fields. The RTF file will contain links (just like the HTML # output) instead of page references. This makes the output suitable for online # browsing using Word or some other Word compatible readers that support those # fields. # # Note: WordPad (write) and others do not support links. # The default value is: NO. # This tag requires that the tag GENERATE_RTF is set to YES. RTF_HYPERLINKS = NO # Load stylesheet definitions from file. Syntax is similar to doxygen's config # file, i.e. a series of assignments. You only have to provide replacements, # missing definitions are set to their default value. # # See also section "Doxygen usage" for information on how to generate the # default style sheet that doxygen normally uses. # This tag requires that the tag GENERATE_RTF is set to YES. RTF_STYLESHEET_FILE = # Set optional variables used in the generation of an RTF document. 
Syntax is
# similar to doxygen's config file. A template extensions file can be generated
# using doxygen -e rtf extensionFile.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_EXTENSIONS_FILE =

# If the RTF_SOURCE_CODE tag is set to YES then doxygen will include source code
# with syntax highlighting in the RTF output.
#
# Note that which sources are shown also depends on other settings such as
# SOURCE_BROWSER.
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_SOURCE_CODE = NO

#---------------------------------------------------------------------------
# Configuration options related to the man page output
#---------------------------------------------------------------------------

# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
# classes and files.
# The default value is: NO.

GENERATE_MAN = NO

# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it. A directory man3 will be created inside the directory specified by
# MAN_OUTPUT.
# The default directory is: man.
# This tag requires that the tag GENERATE_MAN is set to YES.

MAN_OUTPUT = man

# The MAN_EXTENSION tag determines the extension that is added to the generated
# man pages. In case the manual section does not start with a number, the number
# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
# optional.
# The default value is: .3.
# This tag requires that the tag GENERATE_MAN is set to YES.

MAN_EXTENSION = .3

# The MAN_SUBDIR tag determines the name of the directory created within
# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by
# MAN_EXTENSION with the initial . removed.
# This tag requires that the tag GENERATE_MAN is set to YES.
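#
# For example, with the default MAN_EXTENSION of .3 the pages land in a
# subdirectory named man3 inside MAN_OUTPUT. To override that directory name
# (illustrative value, not a recommendation):
#
#   MAN_SUBDIR = man3p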
MAN_SUBDIR = # If the MAN_LINKS tag is set to YES and doxygen generates man output, then it # will generate one additional man file for each entity documented in the real # man page(s). These additional files only source the real man page, but without # them the man command would be unable to find the correct page. # The default value is: NO. # This tag requires that the tag GENERATE_MAN is set to YES. MAN_LINKS = NO #--------------------------------------------------------------------------- # Configuration options related to the XML output #--------------------------------------------------------------------------- # If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that # captures the structure of the code including all documentation. # The default value is: NO. GENERATE_XML = NO # The XML_OUTPUT tag is used to specify where the XML pages will be put. If a # relative path is entered the value of OUTPUT_DIRECTORY will be put in front of # it. # The default directory is: xml. # This tag requires that the tag GENERATE_XML is set to YES. XML_OUTPUT = xml # If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program # listings (including syntax highlighting and cross-referencing information) to # the XML output. Note that enabling this will significantly increase the size # of the XML output. # The default value is: YES. # This tag requires that the tag GENERATE_XML is set to YES. XML_PROGRAMLISTING = NO #--------------------------------------------------------------------------- # Configuration options related to the DOCBOOK output #--------------------------------------------------------------------------- # If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files # that can be used to generate PDF. # The default value is: NO. GENERATE_DOCBOOK = NO # The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put. 
# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in # front of it. # The default directory is: docbook. # This tag requires that the tag GENERATE_DOCBOOK is set to YES. DOCBOOK_OUTPUT = docbook # If the DOCBOOK_PROGRAMLISTING tag is set to YES, doxygen will include the # program listings (including syntax highlighting and cross-referencing # information) to the DOCBOOK output. Note that enabling this will significantly # increase the size of the DOCBOOK output. # The default value is: NO. # This tag requires that the tag GENERATE_DOCBOOK is set to YES. DOCBOOK_PROGRAMLISTING = NO #--------------------------------------------------------------------------- # Configuration options for the AutoGen Definitions output #--------------------------------------------------------------------------- # If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an # AutoGen Definitions (see http://autogen.sf.net) file that captures the # structure of the code including all documentation. Note that this feature is # still experimental and incomplete at the moment. # The default value is: NO. GENERATE_AUTOGEN_DEF = NO #--------------------------------------------------------------------------- # Configuration options related to the Perl module output #--------------------------------------------------------------------------- # If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module # file that captures the structure of the code including all documentation. # # Note that this feature is still experimental and incomplete at the moment. # The default value is: NO. GENERATE_PERLMOD = NO # If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary # Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI # output from the Perl module output. # The default value is: NO. # This tag requires that the tag GENERATE_PERLMOD is set to YES. 
PERLMOD_LATEX = NO # If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely # formatted so it can be parsed by a human reader. This is useful if you want to # understand what is going on. On the other hand, if this tag is set to NO, the # size of the Perl module output will be much smaller and Perl will parse it # just the same. # The default value is: YES. # This tag requires that the tag GENERATE_PERLMOD is set to YES. PERLMOD_PRETTY = YES # The names of the make variables in the generated doxyrules.make file are # prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful # so different doxyrules.make files included by the same Makefile don't # overwrite each other's variables. # This tag requires that the tag GENERATE_PERLMOD is set to YES. PERLMOD_MAKEVAR_PREFIX = #--------------------------------------------------------------------------- # Configuration options related to the preprocessor #--------------------------------------------------------------------------- # If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all # C-preprocessor directives found in the sources and include files. # The default value is: YES. ENABLE_PREPROCESSING = YES # If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names # in the source code. If set to NO, only conditional compilation will be # performed. Macro expansion can be done in a controlled way by setting # EXPAND_ONLY_PREDEF to YES. # The default value is: NO. # This tag requires that the tag ENABLE_PREPROCESSING is set to YES. MACRO_EXPANSION = NO # If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then # the macro expansion is limited to the macros specified with the PREDEFINED and # EXPAND_AS_DEFINED tags. # The default value is: NO. # This tag requires that the tag ENABLE_PREPROCESSING is set to YES. 
EXPAND_ONLY_PREDEF = NO # If the SEARCH_INCLUDES tag is set to YES, the include files in the # INCLUDE_PATH will be searched if a #include is found. # The default value is: YES. # This tag requires that the tag ENABLE_PREPROCESSING is set to YES. SEARCH_INCLUDES = YES # The INCLUDE_PATH tag can be used to specify one or more directories that # contain include files that are not input files but should be processed by the # preprocessor. # This tag requires that the tag SEARCH_INCLUDES is set to YES. INCLUDE_PATH = # You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard # patterns (like *.h and *.hpp) to filter out the header-files in the # directories. If left blank, the patterns specified with FILE_PATTERNS will be # used. # This tag requires that the tag ENABLE_PREPROCESSING is set to YES. INCLUDE_FILE_PATTERNS = # The PREDEFINED tag can be used to specify one or more macro names that are # defined before the preprocessor is started (similar to the -D option of e.g. # gcc). The argument of the tag is a list of macros of the form: name or # name=definition (no spaces). If the definition and the "=" are omitted, "=1" # is assumed. To prevent a macro definition from being undefined via #undef or # recursively expanded use the := operator instead of the = operator. # This tag requires that the tag ENABLE_PREPROCESSING is set to YES. PREDEFINED = # If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this # tag can be used to specify a list of macro names that should be expanded. The # macro definition that is found in the sources will be used. Use the PREDEFINED # tag if you want to use a different macro definition that overrules the # definition found in the source code. # This tag requires that the tag ENABLE_PREPROCESSING is set to YES. 
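#
# For example (illustrative macro names), a typical controlled-expansion setup
# combines the preprocessor tags described above:
#
#   MACRO_EXPANSION    = YES
#   EXPAND_ONLY_PREDEF = YES
#   PREDEFINED         = "MY_EXPORT=" DEBUG=1
#   EXPAND_AS_DEFINED  = MY_PROPERTY_MACRO
#
# MY_EXPORT, DEBUG and MY_PROPERTY_MACRO are placeholder names; substitute the
# macros actually used in your own sources.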
EXPAND_AS_DEFINED =

# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
# remove all references to function-like macros that are alone on a line, have
# an all uppercase name, and do not end with a semicolon. Such function macros
# are typically used for boiler-plate code, and will confuse the parser if not
# removed.
# The default value is: YES.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

SKIP_FUNCTION_MACROS = YES

#---------------------------------------------------------------------------
# Configuration options related to external references
#---------------------------------------------------------------------------

# The TAGFILES tag can be used to specify one or more tag files. For each tag
# file the location of the external documentation should be added. The format of
# a tag file without this location is as follows:
# TAGFILES = file1 file2 ...
# Adding location for the tag files is done as follows:
# TAGFILES = file1=loc1 "file2 = loc2" ...
# where loc1 and loc2 can be relative or absolute paths or URLs. See the
# section "Linking to external documentation" for more information about the use
# of tag files.
# Note: Each tag file must have a unique name (where the name does NOT include
# the path). If a tag file is not located in the directory in which doxygen is
# run, you must also specify the path to the tagfile here.

TAGFILES =

# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
# tag file that is based on the input files it reads. See section "Linking to
# external documentation" for more information about the usage of tag files.

GENERATE_TAGFILE =

# If the ALLEXTERNALS tag is set to YES, all external classes will be listed in
# the class index. If set to NO, only the inherited external classes will be
# listed.
# The default value is: NO.

ALLEXTERNALS = NO

# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
# in the modules index.
If set to NO, only the current project's groups will be
# listed.
# The default value is: YES.

EXTERNAL_GROUPS = YES

# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
# the related pages index. If set to NO, only the current project's pages will
# be listed.
# The default value is: YES.

EXTERNAL_PAGES = YES

# The PERL_PATH should be the absolute path and name of the perl script
# interpreter (i.e. the result of 'which perl').
# The default file (with absolute path) is: /usr/bin/perl.

PERL_PATH = /usr/bin/perl

#---------------------------------------------------------------------------
# Configuration options related to the dot tool
#---------------------------------------------------------------------------

# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram
# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
# NO turns the diagrams off. Note that this option also works with HAVE_DOT
# disabled, but it is recommended to install and use dot, since it yields more
# powerful graphs.
# The default value is: YES.

CLASS_DIAGRAMS = YES

# You can define message sequence charts within doxygen comments using the \msc
# command. Doxygen will then run the mscgen tool (see:
# http://www.mcternan.me.uk/mscgen/) to produce the chart and insert it in the
# documentation. The MSCGEN_PATH tag allows you to specify the directory where
# the mscgen tool resides. If left empty the tool is assumed to be found in the
# default search path.

MSCGEN_PATH =

# You can include diagrams made with dia in doxygen documentation. Doxygen will
# then run dia to produce the diagram and insert it in the documentation. The
# DIA_PATH tag allows you to specify the directory where the dia binary resides.
# If left empty dia is assumed to be found in the default search path.
DIA_PATH = # If set to YES the inheritance and collaboration graphs will hide inheritance # and usage relations if the target is undocumented or is not a class. # The default value is: YES. HIDE_UNDOC_RELATIONS = YES # If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is # available from the path. This tool is part of Graphviz (see: # http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent # Bell Labs. The other options in this section have no effect if this option is # set to NO # The default value is: NO. HAVE_DOT = NO # The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed # to run in parallel. When set to 0 doxygen will base this on the number of # processors available in the system. You can set it explicitly to a value # larger than 0 to get control over the balance between CPU load and processing # speed. # Minimum value: 0, maximum value: 32, default value: 0. # This tag requires that the tag HAVE_DOT is set to YES. DOT_NUM_THREADS = 0 # When you want a differently looking font in the dot files that doxygen # generates you can specify the font name using DOT_FONTNAME. You need to make # sure dot is able to find the font, which can be done by putting it in a # standard location or by setting the DOTFONTPATH environment variable or by # setting DOT_FONTPATH to the directory containing the font. # The default value is: Helvetica. # This tag requires that the tag HAVE_DOT is set to YES. DOT_FONTNAME = Helvetica # The DOT_FONTSIZE tag can be used to set the size (in points) of the font of # dot graphs. # Minimum value: 4, maximum value: 24, default value: 10. # This tag requires that the tag HAVE_DOT is set to YES. DOT_FONTSIZE = 10 # By default doxygen will tell dot to use the default font as specified with # DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set # the path where dot can find it using this tag. # This tag requires that the tag HAVE_DOT is set to YES. 
DOT_FONTPATH = # If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for # each documented class showing the direct and indirect inheritance relations. # Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO. # The default value is: YES. # This tag requires that the tag HAVE_DOT is set to YES. CLASS_GRAPH = YES # If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a # graph for each documented class showing the direct and indirect implementation # dependencies (inheritance, containment, and class references variables) of the # class with other documented classes. # The default value is: YES. # This tag requires that the tag HAVE_DOT is set to YES. COLLABORATION_GRAPH = YES # If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for # groups, showing the direct groups dependencies. # The default value is: YES. # This tag requires that the tag HAVE_DOT is set to YES. GROUP_GRAPHS = YES # If the UML_LOOK tag is set to YES, doxygen will generate inheritance and # collaboration diagrams in a style similar to the OMG's Unified Modeling # Language. # The default value is: NO. # This tag requires that the tag HAVE_DOT is set to YES. UML_LOOK = NO # If the UML_LOOK tag is enabled, the fields and methods are shown inside the # class node. If there are many fields or methods and many nodes the graph may # become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the # number of items for each type to make the size more manageable. Set this to 0 # for no limit. Note that the threshold may be exceeded by 50% before the limit # is enforced. So when you set the threshold to 10, up to 15 fields may appear, # but if the number exceeds 15, the total amount of fields shown is limited to # 10. # Minimum value: 0, maximum value: 100, default value: 10. # This tag requires that the tag HAVE_DOT is set to YES. 
UML_LIMIT_NUM_FIELDS = 10 # If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and # collaboration graphs will show the relations between templates and their # instances. # The default value is: NO. # This tag requires that the tag HAVE_DOT is set to YES. TEMPLATE_RELATIONS = NO # If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to # YES then doxygen will generate a graph for each documented file showing the # direct and indirect include dependencies of the file with other documented # files. # The default value is: YES. # This tag requires that the tag HAVE_DOT is set to YES. INCLUDE_GRAPH = YES # If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are # set to YES then doxygen will generate a graph for each documented file showing # the direct and indirect include dependencies of the file with other documented # files. # The default value is: YES. # This tag requires that the tag HAVE_DOT is set to YES. INCLUDED_BY_GRAPH = YES # If the CALL_GRAPH tag is set to YES then doxygen will generate a call # dependency graph for every global function or class method. # # Note that enabling this option will significantly increase the time of a run. # So in most cases it will be better to enable call graphs for selected # functions only using the \callgraph command. # The default value is: NO. # This tag requires that the tag HAVE_DOT is set to YES. CALL_GRAPH = NO # If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller # dependency graph for every global function or class method. # # Note that enabling this option will significantly increase the time of a run. # So in most cases it will be better to enable caller graphs for selected # functions only using the \callergraph command. # The default value is: NO. # This tag requires that the tag HAVE_DOT is set to YES. 
CALLER_GRAPH = NO # If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will graphical # hierarchy of all classes instead of a textual one. # The default value is: YES. # This tag requires that the tag HAVE_DOT is set to YES. GRAPHICAL_HIERARCHY = YES # If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the # dependencies a directory has on other directories in a graphical way. The # dependency relations are determined by the #include relations between the # files in the directories. # The default value is: YES. # This tag requires that the tag HAVE_DOT is set to YES. DIRECTORY_GRAPH = YES # The DOT_IMAGE_FORMAT tag can be used to set the image format of the images # generated by dot. # Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order # to make the SVG files visible in IE 9+ (other browsers do not have this # requirement). # Possible values are: png, jpg, gif and svg. # The default value is: png. # This tag requires that the tag HAVE_DOT is set to YES. DOT_IMAGE_FORMAT = png # If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to # enable generation of interactive SVG images that allow zooming and panning. # # Note that this requires a modern browser other than Internet Explorer. Tested # and working are Firefox, Chrome, Safari, and Opera. # Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make # the SVG files visible. Older versions of IE do not have SVG support. # The default value is: NO. # This tag requires that the tag HAVE_DOT is set to YES. INTERACTIVE_SVG = NO # The DOT_PATH tag can be used to specify the path where the dot tool can be # found. If left blank, it is assumed the dot tool can be found in the path. # This tag requires that the tag HAVE_DOT is set to YES. DOT_PATH = # The DOTFILE_DIRS tag can be used to specify one or more directories that # contain dot files that are included in the documentation (see the \dotfile # command). 
# This tag requires that the tag HAVE_DOT is set to YES. DOTFILE_DIRS = # The MSCFILE_DIRS tag can be used to specify one or more directories that # contain msc files that are included in the documentation (see the \mscfile # command). MSCFILE_DIRS = # The DIAFILE_DIRS tag can be used to specify one or more directories that # contain dia files that are included in the documentation (see the \diafile # command). DIAFILE_DIRS = # When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the # path where java can find the plantuml.jar file. If left blank, it is assumed # PlantUML is not used or called during a preprocessing step. Doxygen will # generate a warning when it encounters a \startuml command in this case and # will not generate output for the diagram. PLANTUML_JAR_PATH = # When using plantuml, the specified paths are searched for files specified by # the !include statement in a plantuml block. PLANTUML_INCLUDE_PATH = # The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes # that will be shown in the graph. If the number of nodes in a graph becomes # larger than this value, doxygen will truncate the graph, which is visualized # by representing a node as a red box. Note that doxygen if the number of direct # children of the root node in a graph is already larger than # DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note that # the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH. # Minimum value: 0, maximum value: 10000, default value: 50. # This tag requires that the tag HAVE_DOT is set to YES. DOT_GRAPH_MAX_NODES = 50 # The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs # generated by dot. A depth value of 3 means that only nodes reachable from the # root by following a path via at most 3 edges will be shown. Nodes that lay # further from the root node will be omitted. 
Note that setting this option to 1 # or 2 may greatly reduce the computation time needed for large code bases. Also # note that the size of a graph can be further restricted by # DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction. # Minimum value: 0, maximum value: 1000, default value: 0. # This tag requires that the tag HAVE_DOT is set to YES. MAX_DOT_GRAPH_DEPTH = 0 # Set the DOT_TRANSPARENT tag to YES to generate images with a transparent # background. This is disabled by default, because dot on Windows does not seem # to support this out of the box. # # Warning: Depending on the platform used, enabling this option may lead to # badly anti-aliased labels on the edges of a graph (i.e. they become hard to # read). # The default value is: NO. # This tag requires that the tag HAVE_DOT is set to YES. DOT_TRANSPARENT = NO # Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output # files in one run (i.e. multiple -o and -T options on the command line). This # makes dot run faster, but since only newer versions of dot (>1.8.10) support # this, this feature is disabled by default. # The default value is: NO. # This tag requires that the tag HAVE_DOT is set to YES. DOT_MULTI_TARGETS = NO # If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page # explaining the meaning of the various boxes and arrows in the dot generated # graphs. # The default value is: YES. # This tag requires that the tag HAVE_DOT is set to YES. GENERATE_LEGEND = YES # If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot # files that are used to generate the various graphs. # The default value is: YES. # This tag requires that the tag HAVE_DOT is set to YES. DOT_CLEANUP = YES
{ "repo_name": "drycpp/lmdbxx", "stars": "261", "repo_language": "C++", "file_name": "lmdb.info", "mime_type": "text/plain" }
* Arto Bendiken <arto@bendiken.net>
To install lmdb++, just copy the "lmdb++.h" header file to a location of your choosing. To make it available system-wide, you'd normally copy it to /usr/local/include. Alternatively, you can embed it directly in your project. If you have `make', a Makefile is provided to facilitate installation: make install make uninstall To facilitate packaging, the Makefile rules honor the DESTDIR and PREFIX variables, in case values are provided for those.
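The staged-install behaviour described above (a DESTDIR-prefixed copy of the header into `$(PREFIX)/include`) can be sketched as a plain shell session. The staging directory and the empty stand-in header below are illustrative only; the real `make install` target operates on the actual `lmdb++.h`:

```shell
set -e
work=$(mktemp -d) && cd "$work"
: > lmdb++.h                  # empty stand-in for the real single-header file
DESTDIR=$work/stage           # staging root, as used when packaging
PREFIX=/usr/local             # the Makefile's default install prefix
mkdir -p "$DESTDIR$PREFIX/include"
# Mirrors the Makefile's INSTALL_HEADER rule (install -c -m 644):
install -c -m 644 lmdb++.h "$DESTDIR$PREFIX/include"
ls "$DESTDIR$PREFIX/include"  # prints: lmdb++.h
```

With DESTDIR left empty, the same command copies the header straight into /usr/local/include, which is the system-wide install described above.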
0.9.14.1
# Makefile for lmdb++ <http://lmdbxx.sourceforge.net/> PACKAGE_NAME := lmdb++ PACKAGE_TARNAME := lmdbxx PACKAGE_VERSION = $(shell cat VERSION) PACKAGE_STRING = $(PACKAGE_NAME) $(PACKAGE_TARNAME) PACKAGE_TARSTRING = $(PACKAGE_TARNAME)-$(PACKAGE_VERSION) PACKAGE_BUGREPORT := arto@bendiken.net PACKAGE_URL := http://lmdbxx.sourceforge.net/ DESTDIR := PREFIX := /usr/local CPPFLAGS := -I. CXXFLAGS := -g -O0 -std=c++11 -Wall -Werror LDFLAGS := LDADD := -llmdb includedir = $(PREFIX)/include MKDIR := mkdir -p RM := rm -f INSTALL := install -c INSTALL_DATA = $(INSTALL) -m 644 INSTALL_HEADER = $(INSTALL_DATA) DISTFILES := AUTHORS CREDITS INSTALL README TODO UNLICENSE VERSION \ Makefile check.cc example.cc lmdb++.h default: help help: @echo 'Install the <lmdb++.h> header file using `make install`.' check: check.o $(CXX) $(LDFLAGS) -o $@ $^ $(LDADD) && ./$@ example: example.o $(CXX) $(LDFLAGS) -o $@ $^ $(LDADD) && ./$@ %.o: %.cc lmdb++.h $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c -o $@ $< installdirs: $(MKDIR) $(DESTDIR)$(includedir) install: lmdb++.h installdirs $(INSTALL_HEADER) $< $(DESTDIR)$(includedir) uninstall: $(RM) $(DESTDIR)$(includedir)/lmdb++.h clean: $(RM) README.html README.md check example $(PACKAGE_TARSTRING).tar.* *.o *~ README: README.html README.md README.html: README.rst pandoc -s -f rst -t html5 -S -o $@ $< README.md: README.rst pandoc -s -f rst -t markdown_github -o - $< | tail -n +5 > $@ doxygen: README.md doxygen Doxyfile sed -e 's/Main Page/a C++11 wrapper for LMDB/' \ -e 's/lmdb++ Documentation/lmdb++: a C++11 wrapper for LMDB/' \ -i.orig .doxygen/html/index.html maintainer-clean: clean maintainer-doxygen: doxygen rsync -az .doxygen/html/ bendiken@web.sourceforge.net:/home/project-web/lmdbxx/htdocs/ dist: tar -chJf $(PACKAGE_TARSTRING).tar.xz \ --transform 's,^,$(PACKAGE_TARSTRING)/,' $(DISTFILES) tar -chjf $(PACKAGE_TARSTRING).tar.bz2 \ --transform 's,^,$(PACKAGE_TARSTRING)/,' $(DISTFILES) tar -chzf $(PACKAGE_TARSTRING).tar.gz \ --transform 
's,^,$(PACKAGE_TARSTRING)/,' $(DISTFILES) .PHONY: help check example installdirs install uninstall clean doxygen maintainer-doxygen dist
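The Makefile's `dist` target relies on GNU tar's `--transform` option to prefix every archive member with the versioned directory name ($(PACKAGE_TARSTRING)). A minimal standalone demonstration, using a made-up package name in place of the real variable:

```shell
set -e
work=$(mktemp -d) && cd "$work"
echo demo > README
# Prefix each member with the package directory, as the dist target does;
# pkg-1.0 stands in for $(PACKAGE_TARSTRING).
tar -czf pkg-1.0.tar.gz --transform 's,^,pkg-1.0/,' README
tar -tzf pkg-1.0.tar.gz       # prints: pkg-1.0/README
```

This is why unpacking the release tarball yields a single `lmdbxx-<version>/` directory even though the files live at the repository root.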
/* This is free and unencumbered software released into the public domain. */

#include <cstdio>
#include <cstdlib>
#include <string>
#include <lmdb++.h>

int main() {
  /* Create and open the LMDB environment: */
  auto env = lmdb::env::create();
  env.set_mapsize(1UL * 1024UL * 1024UL * 1024UL); /* 1 GiB */
  env.open("./example.mdb", 0, 0664);

  /* Insert some key/value pairs in a write transaction: */
  auto wtxn = lmdb::txn::begin(env);
  auto dbi = lmdb::dbi::open(wtxn, nullptr);
  dbi.put(wtxn, "username", "jhacker");
  dbi.put(wtxn, "email", "jhacker@example.org");
  dbi.put(wtxn, "fullname", "J. Random Hacker");
  wtxn.commit();

  /* Fetch key/value pairs in a read-only transaction: */
  auto rtxn = lmdb::txn::begin(env, nullptr, MDB_RDONLY);
  auto cursor = lmdb::cursor::open(rtxn, dbi);
  std::string key, value;
  while (cursor.get(key, value, MDB_NEXT)) {
    std::printf("key: '%s', value: '%s'\n", key.c_str(), value.c_str());
  }
  cursor.close();
  rtxn.abort();

  /* The environment is closed automatically. */
  return EXIT_SUCCESS;
}
/* This is free and unencumbered software released into the public domain. */ #include <cstdio> /* for std::*printf() */ #include <cstdlib> /* for EXIT_FAILURE, EXIT_SUCCESS */ #include <lmdb++.h> int main() { try { /* Create the LMDB environment: */ auto env = lmdb::env::create(); /* Open the LMDB environment: */ env.open("/tmp"); /* Begin the LMDB transaction: */ auto txn = lmdb::txn::begin(env); // TODO return EXIT_SUCCESS; } catch (const lmdb::error& error) { std::fprintf(stderr, "Failed with error: %s\n", error.what()); return EXIT_FAILURE; } }
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE repositories SYSTEM "/dtd/repositories.dtd"> <repositories xmlns="" version="1.0"> <repo quality="experimental" status="unofficial"> <name>lmdbxx</name> <description lang="en"> <![CDATA[C++11 wrapper for the LMDB database library]]> </description> <homepage>https://github.com/bendiken/lmdbxx</homepage> <owner type="person"> <email>arto@bendiken.net</email> <name><![CDATA[Arto Bendiken]]></name> </owner> <source type="git">https://github.com/bendiken/lmdbxx.git</source> <feed>https://github.com/bendiken/lmdbxx/commits/master.atom</feed> </repo> </repositories>
repo-name = lmdbxx masters = gentoo use-manifests = strict
lmdbxx
# Copyright 1999-2015 Gentoo Foundation # Distributed under the terms of the GNU General Public License v2 # $Header: $ EAPI=5 MY_P="${P/lmdb++/lmdbxx}" S="${WORKDIR}/${MY_P}" DESCRIPTION="C++11 wrapper for the LMDB database library" HOMEPAGE="http://lmdbxx.sourceforge.net/" SRC_URI="mirror://sourceforge/lmdbxx/${PV}/${MY_P}.tar.gz" LICENSE="public-domain" SLOT="0" KEYWORDS="~amd64 ~x86" IUSE="" RDEPEND="dev-db/lmdb" src_install() { emake PREFIX="${D}/usr" install dodoc AUTHORS CREDITS INSTALL README TODO UNLICENSE }
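The ebuild's `MY_P` line rewrites the Portage package name (`lmdb++`) into the upstream tarball name (`lmdbxx`) with bash pattern substitution. A standalone illustration, with the `${P}` value that Portage would normally provide hard-coded for demonstration (run under bash; `${var/pattern/replacement}` is a bashism):

```shell
#!/usr/bin/env bash
# P is supplied by Portage as ${PN}-${PV}; hard-coded here for illustration.
P="lmdb++-0.9.14.0"
MY_P="${P/lmdb++/lmdbxx}"   # replace the first match, as the ebuild does
echo "$MY_P"                # prints: lmdbxx-0.9.14.0
```

This is also why `S` must be overridden: the unpacked source directory is named after `MY_P`, not `P`.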
# -*- coding: utf-8; mode: tcl; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 4 -*- vim:fenc=utf-8:ft=tcl:et:sw=4:ts=4:sts=4 # $Id$ PortSystem 1.0 name lmdbxx version 0.9.14.0 categories databases platforms darwin supported_archs noarch license public-domain maintainers bendiken.net:arto description C++11 wrapper for the LMDB embedded B+ tree database library. long_description This is a comprehensive C++ wrapper for the LMDB embedded \ database library, offering both an error-checked procedural \ interface and an object-oriented resource interface with RAII \ semantics. homepage http://lmdbxx.sourceforge.net/ master_sites sourceforge:project/${name}/${version}/ use_xz yes checksums rmd160 c5cc24adacd88cd080159b3da4b6599025dd6911 \ sha256 29caf35dcd6962909be5a2d8147f85e65fe098030992f1ecdd902633dbcdcbde depends_lib port:lmdb use_configure no build {} destroot.args-append PREFIX=${prefix}
# Maintainer: Arto Bendiken <arto@bendiken.net> pkgname=liblmdb++ pkgver=0.9.14.0 pkgrel=1 pkgdesc="C++11 wrapper for the LMDB database library" arch=('any') url="http://lmdbxx.sourceforge.net/" license=('custom:Public Domain') depends=('liblmdb') source=("http://download.sourceforge.net/sourceforge/lmdbxx/lmdbxx-$pkgver.tar.gz") sha256sums=('f74c55184bff19de607948a5f00fe1073de5b59c4309ab2e8ebc488cf675f419') package() { cd "$srcdir/lmdbxx-$pkgver" make PREFIX=/usr DESTDIR="$pkgdir" install install -m644 -D UNLICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE" } # vim: ts=2 sw=2 et:
Package: lmdb++ Version: 0.9.14.0 Revision: 1 Description: C++11 wrapper for the LMDB database library DescDetail: << This is a comprehensive C++ wrapper for the LMDB embedded database library, offering both an error-checked procedural interface and an object-oriented resource interface with RAII semantics. << License: Public Domain Homepage: http://lmdbxx.sourceforge.net/ Maintainer: Arto Bendiken <arto@bendiken.net> # Dependencies: BuildDependsOnly: True Depends: lmdb1-shlibs Enhances: lmdb1-dev # Unpack: Source: mirror:sourceforge:lmdbxx/%v/lmdbxx-%v.tar.gz Source-MD5: e672c3d6140b78327a02c0b36ab4cba0 # Compile: CompileScript: make # Install: InstallScript: << #!/bin/sh -ev make install PREFIX=%p DESTDIR=%d << DocFiles: AUTHORS CREDITS INSTALL README TODO UNLICENSE VERSION DescPackaging: << See: https://sourceforge.net/p/fink/package-submissions/4487/ <<
--- Makefile.orig 2014-09-20 08:24:32.000000000 +0200 +++ Makefile 2015-04-24 01:12:59.000000000 +0200 @@ -17,6 +17,7 @@ # read mdb.c before changing any of them. # CC = gcc +LD = $(CC) W = -W -Wall -Wno-unused-parameter -Wbad-function-cast -Wuninitialized THREADS = -pthread OPT = -O2 -g @@ -28,7 +29,7 @@ ######################################################################## IHDRS = lmdb.h -ILIBS = liblmdb.a liblmdb.so +ILIBS = liblmdb.a liblmdb.1.dylib IPROGS = mdb_stat mdb_copy mdb_dump mdb_load IDOCS = mdb_stat.1 mdb_copy.1 mdb_dump.1 mdb_load.1 PROGS = $(IPROGS) mtest mtest2 mtest3 mtest4 mtest5 @@ -38,7 +39,7 @@ for f in $(IPROGS); do cp $$f $(DESTDIR)$(prefix)/bin; done for f in $(ILIBS); do cp $$f $(DESTDIR)$(prefix)/lib; done for f in $(IHDRS); do cp $$f $(DESTDIR)$(prefix)/include; done - for f in $(IDOCS); do cp $$f $(DESTDIR)$(prefix)/man/man1; done + for f in $(IDOCS); do cp $$f $(DESTDIR)$(prefix)/share/man/man1; done clean: rm -rf $(PROGS) *.[ao] *.so *~ testdb @@ -50,9 +51,8 @@ liblmdb.a: mdb.o midl.o ar rs $@ mdb.o midl.o -liblmdb.so: mdb.o midl.o -# $(CC) $(LDFLAGS) -pthread -shared -Wl,-Bsymbolic -o $@ mdb.o midl.o $(SOLIBS) - $(CC) $(LDFLAGS) -pthread -shared -o $@ mdb.o midl.o $(SOLIBS) +liblmdb.1.dylib: mdb.o midl.o + $(LD) $(LDFLAGS) -dynamiclib -install_name $(prefix)/lib/$(notdir $@) -compatibility_version $(PACKAGE_VERSION) -current_version $(PACKAGE_VERSION) -o $@ $^ $(SOLIBS) mdb_stat: mdb_stat.o liblmdb.a mdb_copy: mdb_copy.o liblmdb.a
Package: lmdb Version: 0.9.14 Revision: 1 Description: Lightning Memory-Mapped Database DescDetail: << LMDB is an ultra-fast, ultra-compact key-value embedded data store developed by Symas for the OpenLDAP Project. It uses memory-mapped files, so it has the read performance of a pure in-memory database while still offering the persistence of standard disk-based databases, and is only limited to the size of the virtual address space (it is not limited to the size of physical RAM). << License: DFSG-Approved Homepage: http://symas.com/mdb/ Maintainer: Arto Bendiken <arto@bendiken.net> # Dependencies: Depends: %N1-shlibs (= %v-%r) # Unpack: Source: https://github.com/LMDB/lmdb/archive/LMDB_%v.tar.gz SourceDirectory: lmdb-LMDB_%v/libraries/liblmdb SourceRename: %n-%v.tar.gz Source-MD5: 11be69064d4c6eb003f934cf09a00537 # Patch: PatchFile: %n.patch PatchFile-MD5: bf1ebbea9cd8627d997b0c833e4ca875 PatchScript: patch -p0 < %{PatchFile} # Compile: CompileScript: make prefix=%p PACKAGE_VERSION=%v # Install: InstallScript: << #!/bin/sh -ev mkdir -p -m0755 %i/bin %i/lib %i/include %i/share/man/man1 make install prefix=%p DESTDIR=%d ln -sf %p/lib/liblmdb.1.dylib %i/lib/liblmdb.dylib << DocFiles: CHANGES COPYRIGHT LICENSE # Build lmdb1-shlibs: SplitOff: << Package: %N1-shlibs Description: LMDB shared library Provides: lmdb-shlibs Files: << lib/liblmdb.1.dylib << Shlibs: << %p/lib/liblmdb.1.dylib 0.9.14 %n (>= 0.9.14-1) << DocFiles: CHANGES COPYRIGHT LICENSE << # Build lmdb1-dev: SplitOff2: << Package: %N1-dev Description: LMDB development files BuildDependsOnly: True Depends: %N1-shlibs (= %v-%r) Provides: lmdb-dev Files: << include lib/liblmdb.{a,dylib} << DocFiles: CHANGES COPYRIGHT LICENSE << DescPackaging: << See: https://sourceforge.net/p/fink/package-submissions/4487/ <<
let
  pkgs = import ./nixpkgs.nix;
  server = pkgs.callPackage ./server/default.nix {};
  client = pkgs.callPackage ./client/default.nix {};
in
  pkgs.runCommand "miso-isomorphic-example" { inherit client server; } ''
    mkdir -p $out/{bin,static}
    cp ${server}/bin/* $out/bin/
    cp ${client}/static/* $out/static/
  ''
{ "repo_name": "FPtje/miso-isomorphic-example", "stars": "92", "repo_language": "Haskell", "file_name": "Common.hs", "mime_type": "text/plain" }
import Distribution.Simple main = defaultMain
let bootstrap = import <nixpkgs> {}; nixpkgs-src = bootstrap.fetchFromGitHub { owner = "NixOS"; repo = "nixpkgs"; rev = "725b5499b89fe80d7cfbb00bd3c140a73cbdd97f"; sha256 = "0xdhv9k0nq8d91qdw66d6ln2jsqc9ij7r24l9jnv4c4bfpl4ayy7"; }; config = { packageOverrides = pkgs: rec { haskell = pkgs.haskell // { packages = pkgs.haskell.packages // { ghc = pkgs.haskell.packages.ghc864; } // { # Many packages don't build on ghcjs because of a dependency on doctest # (which doesn't build), or because of a runtime error during the test run. # See: https://github.com/ghcjs/ghcjs/issues/711 ghcjs = pkgs.haskell.packages.ghcjs86.override { overrides = self: super: with pkgs.haskell.lib; { tasty-quickcheck = dontCheck super.tasty-quickcheck; http-types = dontCheck super.http-types; http-media = dontCheck super.http-media; comonad = dontCheck super.comonad; semigroupoids = dontCheck super.semigroupoids; lens = dontCheck super.lens; QuickCheck = dontCheck super.QuickCheck; network = dontCheck (doJailbreak super.network_2_6_3_1); servant-client = dontCheck (doJailbreak super.servant-client); servant = dontCheck (doJailbreak super.servant); }; }; }; }; }; }; in import nixpkgs-src { inherit config; }