README
https://huakunshen.super.site/ contains older blogs not moved here.
To use SQLite with encryption enabled, rusqlite is a popular option. Just enable the bundled-sqlcipher feature.
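For reference, here is a minimal Cargo.toml sketch (the version number is an assumption; check crates.io for the latest):

```toml
# bundled-sqlcipher compiles SQLCipher from source and links it statically
[dependencies]
rusqlite = { version = "0.31", features = ["bundled-sqlcipher"] }
```

If I recall correctly, rusqlite also offers a bundled-sqlcipher-vendored-openssl feature that vendors the crypto library too, which may sidestep the Windows OpenSSL setup described below.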
It's super simple on Mac and Linux; on Windows, I had to configure the OpenSSL library, which is a bit harder.
First, download OpenSSL from https://slproweb.com/products/Win32OpenSSL.html.
OpenSSL is a very important library used everywhere. The website looks a bit old and I don't know if I can trust it. Why doesn't Microsoft include OpenSSL in Windows or provide an official way to install it?
Download the latest, larger file, not the light installer. The file I downloaded was Win64OpenSSL-3_4_0.msi; install it.
Here is the error I got before installing it:
note: To improve backtraces for build dependencies, set the CARGO_PROFILE_DEV_BUILD_OVERRIDE_DEBUG=true environment variable to enable debug information generation.
Caused by:
process didn't exit successfully: `C:\Users\shenh\Desktop\sqlcipher\target\debug\build\libsqlite3-sys-458bb895065b5297\build-script-build` (exit code: 101)
--- stdout
cargo:rerun-if-env-changed=LIBSQLITE3_SYS_USE_PKG_CONFIG
cargo:include=C:\Users\shenh\.cargo\registry\src\index.crates.io-6f17d22bba15001f\libsqlite3-sys-0.30.1/sqlcipher
cargo:rerun-if-changed=sqlcipher/sqlite3.c
cargo:rerun-if-changed=sqlite3/wasm32-wasi-vfs.c
--- stderr
thread 'main' panicked at C:\Users\shenh\.cargo\registry\src\index.crates.io-6f17d22bba15001f\libsqlite3-sys-0.30.1\build.rs:164:29:
Missing environment variable OPENSSL_DIR or OPENSSL_DIR is not set
OPENSSL_DIR is missing.
My install path is C:\Program Files\OpenSSL-Win64. Set OPENSSL_DIR to C:\Program Files\OpenSSL-Win64.
Then remove the target folder and build again. This time I get new errors:
error: linking with `link.exe` failed: exit code: 1181
...
= note: LINK : fatal error LNK1181: cannot open input file 'libcrypto.lib'
libcrypto.lib is not found. For my installation, I found the file under C:\Program Files\OpenSSL-Win64\lib\VC\x64\MDd, so I added the environment variable OPENSSL_LIB_DIR set to C:\Program Files\OpenSSL-Win64\lib\VC\x64\MDd.
Then it works. The DB is encrypted. VSCode has a SQLite viewer extension; try to open the encrypted db file with it and it won't open, as the file is encrypted.
From instructions online, I initially set OPENSSL_LIB_DIR to C:\Program Files\OpenSSL-Win64\lib, which doesn't work.
You may also want to set OPENSSL_INCLUDE_DIR to C:\Program Files\OpenSSL-Win64\include.
Also, you have to restart your code editor every time PATH is changed.
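To recap, the variables from the steps above can be set once from a terminal (paths are from my install; adjust for yours):

```powershell
# Persist OpenSSL locations for future shells (works in cmd and PowerShell).
# setx only affects newly started processes, hence the editor restart.
setx OPENSSL_DIR "C:\Program Files\OpenSSL-Win64"
setx OPENSSL_LIB_DIR "C:\Program Files\OpenSSL-Win64\lib\VC\x64\MDd"
setx OPENSSL_INCLUDE_DIR "C:\Program Files\OpenSSL-Win64\include"
```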
As a Mac user and Linux fan, I find Windows too hard for developers. Even deleting files can be a big problem when they are held by some process; I rarely had these problems on Mac and Linux. Also, configuring environment variables on Mac and Linux is simply a matter of editing files.
In this article, I will briefly discuss applications of asymmetric encryption (ed25519) and TLS in Rust.
ed25519 is often used for SSH authentication. You store the public key on the server and the private key on the client. Then you can SSH to the server with the client's private key, without needing to enter a password.
SSH (Secure Shell) uses ed25519 keys for authentication in the following way:
Key Generation: The user generates an ed25519 key pair. The private key is kept securely on the client machine, while the public key is placed on the server.
Connection Initiation: When a client attempts to connect to an SSH server, the server sends a challenge to the client.
Signing the Challenge: The client uses its private key to sign the challenge, creating a signature.
Signature Verification: The client sends this signature back to the server. The server then uses the stored public key to verify the signature.
Authentication: If the signature is valid, it proves that the client possesses the corresponding private key, and the server grants access.
This process ensures secure authentication without transmitting the private key, leveraging the strength and efficiency of the ed25519 algorithm.
Let's use the popular ed25519 algorithm, which is usually used for ssh keys.
use ring::rand::SystemRandom;
use ring::signature::{Ed25519KeyPair, KeyPair, Signature, UnparsedPublicKey, ED25519};
fn main() {
// Generate key pair
let rng = SystemRandom::new();
let private_key = Ed25519KeyPair::generate_pkcs8(&rng).unwrap();
let key_pair = Ed25519KeyPair::from_pkcs8(private_key.as_ref()).unwrap();
// Data to sign
let message = b"Hello, sign this data!";
// Sign the message
let signature: Signature = key_pair.sign(message);
// Verify the signature using the public key
let public_key = key_pair.public_key();
let peer_public_key_bytes = public_key.as_ref();
let public_key_for_verification = UnparsedPublicKey::new(&ED25519, peer_public_key_bytes);
match public_key_for_verification.verify(message, signature.as_ref()) {
Ok(_) => println!("Signature verified successfully!"),
Err(_) => println!("Signature verification failed!"),
}
}
Asymmetric encryption also covers public key encryption, where the public key is used to encrypt data and the private key is used to decrypt it.
ed25519 is a signature algorithm and is not used for public key encryption, but RSA is.
use rsa::{Pkcs1v15Encrypt, RsaPrivateKey, RsaPublicKey};
fn main() {
let mut rng = rand::thread_rng();
let bits = 2048;
let priv_key = RsaPrivateKey::new(&mut rng, bits).expect("failed to generate a key");
let pub_key = RsaPublicKey::from(&priv_key);
// Encrypt
let data = b"hello world";
let enc_data = pub_key
.encrypt(&mut rng, Pkcs1v15Encrypt, &data[..])
.expect("failed to encrypt");
assert_ne!(&data[..], &enc_data[..]);
// Decrypt
let dec_data = priv_key
.decrypt(Pkcs1v15Encrypt, &enc_data)
.expect("failed to decrypt");
assert_eq!(&data[..], &dec_data[..]);
}
TLS/SSL is more complicated: it involves asymmetric encryption, symmetric encryption, hash functions, and more.
Here we only show how to use a self-signed certificate for local use.
Let's generate a self-signed certificate and serve it with an Axum Rust server. Self-signed certificates are usually used in local networks, between devices that still need some level of security.
use rcgen::generate_simple_self_signed;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;
fn main() {
let subject_alt_names = vec!["localhost".to_string()];
let cert = generate_simple_self_signed(subject_alt_names).unwrap();
println!("Certificate generated successfully");
let pem = cert.serialize_pem().unwrap();
let key = cert.serialize_private_key_pem();
// Sign a piece of data with the private key and verify with the public key
let data_to_sign = b"Hello, World!";
// Create a signing key from the private key PEM
let signing_key = rcgen::KeyPair::from_pem(&key).unwrap();
println!("{}", cert.serialize_pem().unwrap());
println!("{}", cert.serialize_private_key_pem());
let pem_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"))
.join("self_signed_certs")
.join("cert.pem");
let key_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"))
.join("self_signed_certs")
.join("key.pem");
let mut pem_file = File::create(pem_path).unwrap();
let mut key_file = File::create(key_path).unwrap();
pem_file.write_all(pem.as_bytes()).unwrap();
key_file.write_all(key.as_bytes()).unwrap();
}
use axum::{routing::get, Router};
use axum_server::tls_rustls::RustlsConfig;
use std::{net::SocketAddr, path::PathBuf};
#[tokio::main]
async fn main() {
let https_port = 8080;
rustls::crypto::ring::default_provider()
.install_default()
.expect("Failed to install default CryptoProvider");
let handle = axum_server::Handle::new();
let config = RustlsConfig::from_pem_file(
PathBuf::from(env!("CARGO_MANIFEST_DIR"))
.join("self_signed_certs")
.join("cert.pem"),
PathBuf::from(env!("CARGO_MANIFEST_DIR"))
.join("self_signed_certs")
.join("key.pem"),
)
.await
.unwrap();
let app = Router::new().route("/", get(handler));
let addr = SocketAddr::from(([127, 0, 0, 1], https_port));
tracing::debug!("listening on {addr}");
axum_server::bind_rustls(addr, config)
.handle(handle)
.serve(app.into_make_service())
.await
.unwrap();
}
async fn handler() -> &'static str {
"Hello, World!"
}
For full examples with HTTP redirect and graceful shutdown, go to Axum's GitHub repo, which contains TLS examples.
When querying an HTTPS server with a self-signed cert, you would usually get errors. To do it in Rust with reqwest, set .danger_accept_invalid_certs(true).
use reqwest::Client;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a custom client
let client = Client::builder()
.danger_accept_invalid_certs(true)
.build()?;
// Make the request
let response = client
.get("https://localhost:3000")
.send()
.await?;
println!("Status: {}", response.status());
println!("Body: {}", response.text().await?);
Ok(())
}
In Node.js and Bun, you need to set the environment variable NODE_TLS_REJECT_UNAUTHORIZED=0, but this disables the security check for the entire program, not just a single request.
I get type errors with TypeScript when I try to use Deno + DOM in an npm package.
Normally, a Deno package is a standalone package with a deno.json file, without package.json or tsconfig.json. That's the point of using Deno.
However, I have a package with all kinds of code, divided into many subpackages; some run in the browser, some in Deno, some in Node. Then there can be many type errors.
When the Deno VSCode extension is disabled, the Deno global variable is missing.
Create deno.d.ts in the root:
deno types > deno.d.ts
If your code contains DOM operations, like using document or KeyboardEvent, adding deno.d.ts will cause DOM and other types to go missing. This is because the following lines are added to deno.d.ts:
/// <reference no-default-lib="true" />
/// <reference lib="esnext" />
/// <reference lib="deno.net" />
Simply remove the first line, the one containing no-default-lib. Then the other libs specified in tsconfig.json will be loaded.
This works for me, but may not be the best solution for other use cases.
Deno and DOM packages should be separated into multiple packages when feasible.
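For the browser-facing subpackages, the DOM types come from the lib setting in tsconfig.json; a minimal sketch (the exact lib entries are an assumption, adjust per subpackage):

```json
{
  "compilerOptions": {
    "lib": ["ESNext", "DOM", "DOM.Iterable"]
  }
}
```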
I was working on a Go project, writing PocketBase Go extension code, and wanted hot reload for the Go server. There are many ways to achieve this, such as using air, fresh, realize, etc.
But I wanted to try a different way, using JavaScript/TypeScript. This gives me more control over the reload process.
let child: Deno.ChildProcess | null = null;
async function startGoServer() {
if (child) {
console.log("Killing previous Go server...");
child.kill("SIGTERM"); // Send SIGTERM to the process
await child.status; // Wait for the process to terminate
}
const buildCmd = new Deno.Command("go", {
args: ["build", "-o", "pocketbase", "main.go"],
});
await buildCmd.output();
const cmd = new Deno.Command("./pocketbase", { args: ["serve"] });
child = cmd.spawn();
console.log("Go server started.");
}
startGoServer();
for await (const _event of Deno.watchFs("main.go")) {
console.log("File change detected, restarting server...");
await startGoServer();
}
import { $, type Subprocess } from "bun";
import { watch } from "fs";
let child: Subprocess | null = null;
async function startGoServer() {
console.log("Starting Go server...");
if (child) {
console.log(`Killing previous Go server with PID: ${child.pid}`);
await child.kill(9);
// sleep 3 seconds
// use setTimeout to iteratively wait until killed is true, check every 1 second
while (child.killed === false) {
await new Promise((resolve) => setTimeout(resolve, 1000));
}
console.log(`Killed ${child.pid}: ${child.killed}`);
}
await $`go build -o pocketbase main.go`;
child = Bun.spawn(["./pocketbase", "serve"], {
stdio: ["inherit", "inherit", "inherit"],
});
console.log(`Go server started with PID: ${child.pid}`);
}
startGoServer();
watch("./main.go", { recursive: true }, async (event, filename) => {
await startGoServer();
});
title: IIFE
authors: [huakun]
tags: [Web]
https://developer.mozilla.org/en-US/docs/Glossary/IIFE
An IIFE (Immediately Invoked Function Expression) is a JavaScript function that runs as soon as it is defined.
An Immediately Invoked Function Expression (IIFE) is a common design pattern in JavaScript: an anonymous function expression that executes as soon as it is defined. An IIFE is typically used to create an isolated scope and avoid polluting the global scope.
(function () {
// …
})();
(() => {
// …
})();
(async () => {
// …
})();
IIFE is usually used to create a separate scope to avoid polluting the global scope. In this blog I will instead talk about the use of IIFE in hosting web pages.
Traditionally, static websites are compiled into an index.html file as an entrypoint. CSS and JS files are linked in the head and body tags.
Here is an example of a simple index.html file:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Vite + Vue + TS</title>
<script type="module" crossorigin src="/assets/index-D7F47PqG.js"></script>
<link rel="stylesheet" crossorigin href="/assets/index-DRBiz0Jz.css" />
</head>
<body>
<div id="app"></div>
</body>
</html>
You can see that in such a client-side rendered web page, the HTML file doesn't contain any content. The content is rendered by the JavaScript file linked in the head tag.
The index.html is only used as a container for the JavaScript file, which is responsible for rendering the content of the web page.
Theoretically, we can ship a single JavaScript file that contains all the logic and content of the web page, including styles and images (png doesn't work, but svg does).
For example, this is the original vite.config.ts file for a Vue project:
import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";
// https://vitejs.dev/config/
export default defineConfig({
plugins: [vue()],
});
Output dist folder:
dist
├── assets
│ ├── Vue.js_Logo_2.svg-BtIZHRhy.png
│ ├── index-CjgLCVzZ.css
│ └── index-CwnRthTM.js
├── index.html
└── vite.svg
Next, set the output format to iife:
import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";
import path from "path";
// https://vitejs.dev/config/
export default defineConfig({
plugins: [vue()],
build: {
emptyOutDir: false,
rollupOptions: {
input: path.resolve(__dirname, "./src/main.ts"),
output: {
format: "iife",
dir: path.resolve(__dirname, "./dist"),
entryFileNames: "web.js",
},
},
},
});
dist
├── assets
│ └── Vue.js_Logo_2.svg-BtIZHRhy.png
├── vite.svg
└── web.js
Now, how do we use web.js without an index.html file?
Let's serve the dist folder with a simple HTTP server:
serve dist --cors
The web.js is now at http://localhost:3000/web.js.
In an HTML file,
<!DOCTYPE html>
<html lang="en">
<body>
<iframe></iframe>
<script>
fetch("http://localhost:3000/web.js", {
method: "GET",
})
.then((res) => res.text())
.then((data) => {
document
.querySelector("iframe")
.contentDocument.write(
"<div id='app'/><script>".concat(data, "<\/script>")
);
});
</script>
</body>
</html>
Here we are rendering the content of web.js in an iframe. It can also render the page directly, as long as there is a <div id="app" />.
<!DOCTYPE html>
<html lang="en">
<body>
<script>
fetch("http://localhost:3000/web.js", {
method: "GET",
})
.then((res) => res.text())
.then((data) => {
document.write(
"<div id='app'></div><script>".concat(data, "<\/script>")
);
});
</script>
</body>
</html>
This is because main.ts looks like this:
import { createApp } from "vue";
import "./style.css";
import App from "./App.vue";
createApp(App).mount("#app");
So what can be the use case of this?
In Raycast Analysis and uTools Analysis I discussed two successful app launchers and their plugin system designs. But both have big limitations. Raycast is Mac-only. uTools is cross-platform (almost perfect), but it is built with Electron, and thus has a large bundle size and high memory consumption.
Tauri is a new framework for building cross-platform desktop apps with Rust and web technologies, with a much smaller bundle size and memory footprint. It's a good choice for building a cross-platform app launcher.
Plugins will be developed as regular single page applications. They will be saved in a directory like the following.
plugins/
├── plugin-a/
│ └── dist/
│ ├── index.html
│ └── ...
└── plugin-b/
└── dist/
├── index.html
└── ...
Optionally, use symbolic links to build the following structure (link the dist of each plugin to the plugin name). You will see why this is helpful later.
plugins-link/
├── plugin-a/
│ ├── index.html
│ └── ...
└── plugin-b/
├── index.html
└── ...
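One way to produce the plugins-link layout is with symlinks; a sketch using a throwaway demo tree (real paths depend on where your plugins live):

```shell
# Build a demo plugins tree, then link each plugin's dist folder to the
# plugin name under plugins-link/
root=$(mktemp -d)
mkdir -p "$root/plugins/plugin-a/dist" "$root/plugins-link"
echo '<!DOCTYPE html>' > "$root/plugins/plugin-a/dist/index.html"
ln -s "$root/plugins/plugin-a/dist" "$root/plugins-link/plugin-a"
ls "$root/plugins-link/plugin-a"
```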
When a plugin is triggered, the main Tauri core process will start a new process running an HTTP server that serves the entire plugins or plugins-link folder as static assets. The HTTP server can be actix-web.
Then open a new WebView process:
const w = new WebviewWindow('plugin-a', {
url: 'http://localhost:8000/plugin-a'
});
If we didn’t do the dist folder symlink step, the URL would be http://localhost:8000/plugin-a/dist. Doing the linking avoids some problems.
One problem is the base URL. A single page application in React or Vue does routing with the URL, but the base URL is / by default, i.e. the index page is loaded at http://localhost:8000. If the plugin redirects to /login, it will go to http://localhost:8000/login instead of http://localhost:8000/plugin-a/login.
In this case, a public base path can be configured in the Vite config (https://vite-plugin-ssr.com/base-url, https://vitejs.dev/guide/build#public-base-path).
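A sketch of the Vite side of that fix, assuming the plugin is served under /plugin-a/ (the plugin name is hypothetical):

```typescript
// vite.config.ts — prefix asset URLs with the plugin's subpath so that
// /login resolves to /plugin-a/login when served behind the static server
import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";

export default defineConfig({
  plugins: [vue()],
  base: "/plugin-a/",
});
```

The SPA router usually needs the same prefix, e.g. createWebHistory("/plugin-a/") in vue-router.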
Another solution is to use a proxy in the HTTP server, like proxy_pass in an nginx config.
Now the plugin’s page can be loaded in a WebView window.
However, a plugin not only needs to display a UI, it also needs to interact with system APIs to implement more features, such as interacting with the file system. Now IPC is involved.
Tauri by default won’t allow a WebView loaded from other sources to run commands or call Tauri APIs. See the security config dangerousRemoteDomainIpcAccess: https://tauri.app/v1/api/config/#securityconfig.dangerousremotedomainipcaccess
"security": {
"csp": null,
"dangerousRemoteDomainIpcAccess": [
{
"domain": "localhost:8000",
"enableTauriAPI": true,
"windows": ["plugin-a"],
"plugins": []
}
]
},
enableTauriAPI determines whether the plugin has access to the Tauri APIs. If you don’t want the plugin to have the same level of permission as the main app, set it to false.
This works not only with localhost-hosted plugins; a plugin can also be hosted on the public web (though you won’t be able to access it without internet). That would be very dangerous, as a compromised plugin on the public web would affect all users. It’s also unstable. Local plugins are always safer.
There is another plugins attribute used to control which Tauri plugin’s commands the plugin can call (plugin here means a Rust plugin for the Tauri framework, not our app’s plugin).
https://tauri.app/v1/api/config/#remotedomainaccessscope.plugins
plugins is "the list of plugins that are allowed in this scope"; the names should be without the tauri-plugin- prefix, for example "store" for tauri-plugin-store.
For example, Raycast has a list of APIs exposed to extensions (https://developers.raycast.com/api-reference/clipboard)
Raycast uses a Node.js runtime to run plugins, so plugins can access the file system and more. This is dangerous. From their blog https://www.raycast.com/blog/how-raycast-api-extensions-work, their solution is to open source all plugins and let the community verify them.
This gives plugins more freedom and introduces more risks. In our approach with Tauri, we can provide a custom Tauri plugin that exposes a curated set of APIs to extensions: get the list of all applications, access storage, clipboard, shell, and more. File system access can also be checked and limited to certain folders (configurable by users with a whitelist/blacklist). Just don’t give plugins access to Tauri’s FS API, only our provided, limited, and censored API plugin.
Unlike Raycast, where the plugin runs directly in Node.js and the UI is rendered by turning React into native Swift AppKit components, the Tauri approach has its UI in the browser. There is no way to let the UI plugin access OS APIs (like FS) directly. The advantage of this approach is that the UI can be any shape, while Raycast’s UI is limited by its pre-defined UI components.
If a plugin needs to run some binary like ffmpeg to convert/compress files, the sandboxed API method with a custom Tauri plugin described above won’t work. That scenario is more complicated; here are some immature thoughts.
Raycast supports multiple user interfaces, such as list, detail, and form.
To implement this in Jarvis, there are 2 options.
This could be difficult in our case, as we need to call a JS function to get data; this requires importing JS from the Tauri WebView, or running the JS script with a JS runtime and getting a JSON response.
To do this, we need a common API contract in JSON format for how to render the response.
Write a command script cmd1.js. Jarvis will call bun cmd1.js with argv and get the response.
Example of a list view:
{
"view": "list",
"data": [
{
"title": "Title 1",
"description": "Description 1"
},
{
"title": "Title 2",
"description": "Description 2"
},
{
"title": "Title 3",
"description": "Description 3"
}
]
}
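A hypothetical cmd1.js honoring this contract simply prints the JSON payload on stdout for the launcher to parse and render:

```javascript
// cmd1.js — sketch of a "list view" command for the JSON contract above;
// the launcher would run `bun cmd1.js <argv>` and render whatever is printed.
const payload = {
  view: "list",
  data: [
    { title: "Title 1", description: "Description 1" },
    { title: "Title 2", description: "Description 2" },
  ],
};
console.log(JSON.stringify(payload));
```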
This method requires shipping the app with a bun runtime (or downloading the runtime when the app is first launched).
After some thinking, I believe this is similar to a script command. Any other language can support this. One difference is that a "script command" relies on the user's local dependencies; custom libraries must be installed for special tasks, e.g. pandas in Python. That's fine for script commands because those users are coders who know what they are doing. In a plugin, we don't expect users to know programming or install libraries. So shipping a built JS dist with all dependencies is a better idea, e.g. bun build index.ts --target=node > index.js, then bun index.js to run it without installing node_modules.
In the plugin's package.json, list all available commands and their entrypoints (e.g. dist/cmd1.js, dist/cmd2.js).
{
"commands": [
{
"name": "list-translators",
"title": "List all translators",
"description": "List all available translators",
"mode": "cmd"
}
]
}
If we let the extension handle everything, it's more difficult to develop, but there is less UI to worry about.
e.g. translate input1, press enter, open the extension window, and pass input1 to the WebView.
By default, load dist/index.html as the plugin's UI. There is only one entrypoint to the plugin UI, but a single plugin can have multiple sub-commands via URL paths, e.g. http://localhost:8080/plugin-a/command1, i.e. routes in a single page app.
All available sub-commands can be specified in package.json
{
"commands": [
{
"name": "list-translators",
"title": "List all translators",
"description": "List all available translators",
"mode": "cmd"
},
{
"name": "google-translate",
"title": "Google Translate",
"description": "Translate a text to another language with Google",
"mode": "view"
},
{
"name": "bing-translate",
"title": "Bing Translate",
"description": "Translate a text to another language with Bing",
"mode": "view"
}
]
}
If mode is view, render it. For example, bing-translate will try to load http://localhost:8080/translate-plugin/bing-translate.
If mode is cmd, it will try to run bun dist/list-translators and render the response.
Mode cmd can have an optional language field, to allow using Python or other languages.
Script Command from Raycast is a simple way to implement a plugin. A script file is created and run when triggered. The stdout is sent back to the main app process.
Raycast script commands support a fixed set of languages, but in fact, if users can specify an interpreter, any script can be run, even executable binaries.
Alfred has a similar feature in workflows. The difference is that Raycast saves the code in a separate file, while Alfred saves the code within the workflow/plugin (in fact, also in a file in some hidden folder).
I had a problem building a universal Tauri app for Mac (M1 Pro).
rustup target add x86_64-apple-darwin
rustup target add aarch64-apple-darwin
npm run tauri build -- --target universal-apple-darwin
The problem was with OpenSSL. The error message was:
Finished `release` profile [optimized] target(s) in 4.50s
Compiling openssl-sys v0.9.102
Compiling cssparser v0.27.2
Compiling walkdir v2.5.0
Compiling alloc-stdlib v0.2.2
Compiling markup5ever v0.11.0
Compiling uuid v1.8.0
Compiling fxhash v0.2.1
Compiling crossbeam-epoch v0.9.18
Compiling selectors v0.22.0
Compiling html5ever v0.26.0
Compiling indexmap v1.9.3
Compiling tracing-core v0.1.32
error: failed to run custom build command for `openssl-sys v0.9.102`
Caused by:
process didn't exit successfully: `/Users/hacker/Dev/projects/devclean/devclean-ui/src-tauri/target/release/build/openssl-sys-2efafcc1e9e30675/build-script-main` (exit status: 101)
--- stdout
cargo:rerun-if-env-changed=X86_64_APPLE_DARWIN_OPENSSL_LIB_DIR
X86_64_APPLE_DARWIN_OPENSSL_LIB_DIR unset
cargo:rerun-if-env-changed=OPENSSL_LIB_DIR
OPENSSL_LIB_DIR unset
cargo:rerun-if-env-changed=X86_64_APPLE_DARWIN_OPENSSL_INCLUDE_DIR
X86_64_APPLE_DARWIN_OPENSSL_INCLUDE_DIR unset
cargo:rerun-if-env-changed=OPENSSL_INCLUDE_DIR
OPENSSL_INCLUDE_DIR unset
cargo:rerun-if-env-changed=X86_64_APPLE_DARWIN_OPENSSL_DIR
X86_64_APPLE_DARWIN_OPENSSL_DIR unset
cargo:rerun-if-env-changed=OPENSSL_DIR
OPENSSL_DIR unset
cargo:rerun-if-env-changed=OPENSSL_NO_PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG_ALLOW_CROSS_x86_64-apple-darwin
cargo:rerun-if-env-changed=PKG_CONFIG_ALLOW_CROSS_x86_64_apple_darwin
cargo:rerun-if-env-changed=TARGET_PKG_CONFIG_ALLOW_CROSS
cargo:rerun-if-env-changed=PKG_CONFIG_ALLOW_CROSS
cargo:rerun-if-env-changed=PKG_CONFIG_x86_64-apple-darwin
cargo:rerun-if-env-changed=PKG_CONFIG_x86_64_apple_darwin
cargo:rerun-if-env-changed=TARGET_PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64-apple-darwin
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64_apple_darwin
cargo:rerun-if-env-changed=TARGET_PKG_CONFIG_SYSROOT_DIR
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR
run pkg_config fail: pkg-config has not been configured to support cross-compilation.
Install a sysroot for the target platform and configure it via
PKG_CONFIG_SYSROOT_DIR and PKG_CONFIG_PATH, or install a
cross-compiling wrapper for pkg-config and set it via
PKG_CONFIG environment variable.
--- stderr
thread 'main' panicked at /Users/hacker/.cargo/registry/src/index.crates.io-6f17d22bba15001f/openssl-sys-0.9.102/build/find_normal.rs:190:5:
Could not find directory of OpenSSL installation, and this `-sys` crate cannot
proceed without this knowledge. If OpenSSL is installed and this crate had
trouble finding it, you can set the `OPENSSL_DIR` environment variable for the
compilation process.
Make sure you also have the development packages of openssl installed.
For example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.
If you're in a situation where you think the directory *should* be found
automatically, please open a bug at https://github.com/sfackler/rust-openssl
and include information about your system as well as this message.
$HOST = aarch64-apple-darwin
$TARGET = x86_64-apple-darwin
openssl-sys = 0.9.102
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
Error failed to build x86_64-apple-darwin binary: failed to build app
Install OpenSSL with brew:
brew install openssl
export OPENSSL_DIR=$(brew --prefix openssl)
This didn't fully solve the problem. I had to run GitHub Actions on macos-13 runners (Intel CPU). The build passed, but the resulting app wouldn't run on x86_64 Macs; it kept saying the OpenSSL lib could not be loaded. I will update this post when I find a solution.
Read more here: https://github.com/tauri-apps/tauri/issues/9684#event-12728702751
The real source of the problem was actually git2's dependency on openssl-sys (https://crates.io/crates/git2/0.18.3/dependencies). Removing git2 from my app fixed all problems. Running on the macos-14 runner (M1 Pro) worked fine.
openssl-sys provides OpenSSL bindings for Rust, so it doesn't include the actual OpenSSL library; you need to install OpenSSL on your system.
My guess is that during the build process on GitHub Actions, the OpenSSL library location is different from the one on my local machine, and the path is baked into the binary, so the binary won't run on other machines. This is just a guess. There must be some solution; I will update this post when I find it.
GitHub Repo: https://github.com/HuakunShen/nestjs-neo4j-graphql-demo
I haven't found a good, up-to-date example of using Neo4j with NestJS and GraphQL, so I decided to write one myself.
Neo4j's graphql library has updated its API, and some examples I found online were outdated (https://neo4j.com/developer-blog/creating-api-in-nestjs-with-graphql-neo4j-and-aws-cognito/). This demo uses v5.x.x.
type Mutation {
signUp(username: String!, password: String!): String
signIn(username: String!, password: String!): String
}
# Only authenticated users can access this type
type Movie @authentication {
title: String
actors: [Actor!]! @relationship(type: "ACTED_IN", direction: IN)
}
# Anyone can access this type
type Actor {
name: String
movies: [Movie!]! @relationship(type: "ACTED_IN", direction: OUT)
}
# Only authenticated users can access this type
type User @authentication {
id: ID! @id
username: String!
# this is just an example of how to use @authorization to restrict access to a field
# If you list all users without the plaintextPassword field, you will see all users
# If you list all users with the plaintextPassword field, you will only see the user whose id matches the jwt.sub (which is the id of the authenticated user)
# in reality, never store plaintext passwords in the database
plaintextPassword: String!
@authorization(filter: [{ where: { node: { id: "$jwt.sub" } } }])
password: String! @private
}
A GraphQL module can be generated with bunx nest g mo graphql.
Here is the configuration. In new Neo4jGraphQL(), an authorization key is provided for JWT auth. Queries can be restricted by adding @authentication or @authorization to a type.
One important thing to note is the custom auth resolvers. Neo4jGraphQL auto-generates types, queries, and mutation implementations for the types in the schema to provide basic CRUD operations, but custom functions like sign-in and sign-up must be implemented separately, either as regular REST endpoints in other modules or as custom resolvers provided to the Neo4jGraphQL instance.
Usually in NestJS you would add resolvers to the providers list of the module, but in this case the resolvers must be added to the Neo4jGraphQL instance. Otherwise you will see the custom queries defined in the schema in the playground, but they will always return null.
@Module({
imports: [
GraphQLModule.forRootAsync<ApolloDriverConfig>({
driver: ApolloDriver,
useFactory: async () => {
// parse env vars and create the Neo4j driver
// (no `export` here: this code runs inside useFactory)
const { NEO4J_URI, NEO4J_USERNAME, NEO4J_PASSWORD } =
envSchema.parse(process.env);
const neo4jDriver = neo4j.driver(
NEO4J_URI,
neo4j.auth.basic(NEO4J_USERNAME, NEO4J_PASSWORD)
);
const typedefPath = path.join(RootDir, "src/graphql/schema.gql");
const typeDefs = fs.readFileSync(typedefPath).toString();
const neoSchema = new Neo4jGraphQL({
typeDefs: typeDefs,
driver: neo4jDriver,
resolvers: authResolvers, // custom resolvers must be added to Neo4jGraphQL instead of providers list of NestJS module
features: {
authorization: {
key: "huakun",
},
},
});
const schema = await neoSchema.getSchema();
return {
schema,
plugins: [ApolloServerPluginLandingPageLocalDefault()],
playground: false,
context: ({ req }) => ({
token: req.headers.authorization,
}),
};
},
}),
],
providers: [],
})
export class GraphqlModule {}
The resolver must be provided to the Neo4jGraphQL constructor. It must be a plain object, so NestJS's class-based resolvers won't work.
You must provide regular Apollo-style resolvers. See https://neo4j.com/docs/graphql/current/ogm/installation/ for a similar example.
export const authResolvers = {
Mutation: {
signUp: async (_source, { username, password }) => {
...
return createJWT({ sub: users[0].id });
},
signIn: async (_source, { username, password }) => {
...
return createJWT({ sub: user.id });
},
},
};
Read the README.md of this repo for more details. Run the code and read the source code to understand how it works. It's a minimal example.
https://the-guild.dev/graphql/codegen is used to generate TypeScript types and more from the GraphQL schema.
Usually you provide the GraphQL schema file, but in this demo the schema is designed for Neo4j and is not recognized by the codegen tool. You need to let Neo4jGraphQL generate the schema and deploy it to a server first, then provide the server's endpoint to the codegen tool. The tool will then introspect the schema from the server and generate the types.
Make sure the server is running before running codegen:
cd packages/codegen
pnpm codegen
The generated files are in the packages/codegen/src/gql folder.
Sample operations can be added to packages/codegen/operations. Types and callers for the operations will also be generated.
Read the codegen documentation for more details. Examples are provided in the packages/codegen folder.
This is roughly how the generated code works:
You get full type safety when calling the operations. The operation documents are predefined in a central place rather than inline in the code. This is useful when you have a large project with many operations: modifying one operation updates all of its callers, and if a type is no longer valid, the compiler will tell you.
The input variables are also protected by TypeScript. You won't need to guess what the input variables are; the compiler will tell you.
import { ApolloClient, InMemoryCache } from "@apollo/client";
import { CreateMoviesDocument } from "./src/gql/graphql";
async function main() {
const client = new ApolloClient({
uri: "http://localhost:3000/graphql",
cache: new InMemoryCache(),
});
client
.mutate({
mutation: CreateMoviesDocument,
variables: {
input: [
{
actors: {
create: [
{
node: {
name: "jacky",
},
},
],
},
title: "fallout",
},
],
},
})
.then((res) => {
console.log(res);
});
}

main();
As a developer, I have hundreds of repos on my computer. I also need to reinstall the system every year or every few months, depending on how much garbage I have added to the OS, or whenever I break the OS.
Data loss is a serious problem. My design goal is to prevent any possibility of losing important files even if the OS crashes or the computer is stolen, so that I can reset it at any time without worrying about backing up my data. (Reinstalling apps is fine for me.)
The solution is simple, cloud.
My files fall into the following categories.
My code repos are always tracked with git, and all changes are backed up to GitHub immediately. The code is stored in ~/Dev/.
All other files are saved in OneDrive or iCloud Drive.
There is nothing on my ~/Desktop. I like to keep it clean.
This way I can reset my computer at any time.
Although I have all my files in the cloud, sometimes you may forget to commit all changes to git. You may want to back up the entire projects folder to a drive or to the cloud.
However, projects can take up a huge amount of space. A single Rust project of mine can easily take up 10-60GB due to large caches, which would take forever to upload to the cloud.
I wrote a small app called devclean (https://github.com/HuakunShen/devclean) with 2 simple features: scan a projects folder for heavy dependency and cache folders (node_modules, target, etc.), and clean them up. These files can be deleted before backup; then the project code is probably only a few MB.
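The scan can be approximated with plain find; a sketch using a throwaway demo tree so it runs anywhere (point it at your real projects root instead):

```shell
# Find heavy dependency/cache folders (node_modules, target) under a
# projects root; these are safe to delete before a backup.
demo=$(mktemp -d)
mkdir -p "$demo/rust-app/target" "$demo/web-app/node_modules"
hits=$(find "$demo" -type d \( -name node_modules -o -name target \) -prune)
echo "$hits"
```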