
Why Use Fauxpilot?

Fauxpilot is an open-source alternative to GitHub Copilot, leveraging the power of large-scale models to deliver a range of AI-driven functionalities, including dialogue and code suggestions. In this article, we’ll focus exclusively on its code suggestion capabilities, with other features to be detailed in future posts.

Local Setup

To run Fauxpilot, the following prerequisites must be met:

  • A Windows or Linux system with an NVIDIA graphics card.
  • At least 50GB of local storage space (required for the large model).
  • Docker and nvidia-container-toolkit.
  • The curl and zstd command-line tools, necessary for model downloads.

I am using Ubuntu.

The Docker installation is simple and won’t be elaborated here. The NVIDIA Container Toolkit is a project that enables containers to utilize NVIDIA GPU resources. The following commands can be used for installation:

# Configure the production repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Configure the repository to use experimental packages:
sudo sed -i -e '/experimental/ s/^#//g' /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Update and install:
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

# Configure the container runtime to use NVIDIA GPUs:
sudo nvidia-ctk runtime configure --runtime=docker

Then, simply restart Docker:

sudo systemctl restart docker

Installation methods for other systems can be found in the complete installation guide.

Next, enter the project root directory and run the startup script directly:

cd fauxpilot
./setup.sh
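Once the containers are up, Fauxpilot serves code suggestions over an OpenAI-compatible completions API. As a quick smoke test, a sketch like the following can request a completion; note the port (5000) and engine name ("codegen") are assumptions from a default setup and may differ depending on your answers to setup.sh.

```python
import json
from urllib import request

# Assumed default endpoint of a stock Fauxpilot install; the port and
# engine name may differ depending on your setup.sh configuration.
API_URL = "http://localhost:5000/v1/engines/codegen/completions"


def completion_payload(prompt, max_tokens=32):
    """Build the JSON body for a code-completion request."""
    return json.dumps({
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.1,
    }).encode("utf-8")


def complete(prompt):
    """POST the prompt to the local Fauxpilot server, return the suggestion text."""
    req = request.Request(
        API_URL,
        data=completion_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

Calling `complete("def fib(n):")` against a running server should return a suggested continuation of the function.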

1. Introduction to Hugging Face

Hugging Face is a platform that provides powerful tools and resources for the natural language processing (NLP) community. It offers various pre-trained models, APIs, and tools to simplify the development and deployment of NLP tasks for developers and researchers. The core mission of Hugging Face is to make NLP technology more accessible and shareable, promoting the advancement and democratization of artificial intelligence.

On Hugging Face, users can access a wide range of pre-trained models for tasks like text classification, question answering, language generation, and more. Additionally, Hugging Face allows users to create their own “Spaces” to store and share their models, code, and datasets. In this guide, we will focus on how to deploy a fast one-click face swapping software on Hugging Face.

2. Creating a Space and Uploading Code

To deploy a one-click face swapping software on Hugging Face, we first need to create a Hugging Face account and then create a new Space.

2.1 Create a Hugging Face Account

If you don’t have a Hugging Face account yet, head to the Hugging Face registration page and create a new account.

2.2 Login and Create a New Space

  1. Login to Hugging Face, click the “Login” button in the top right corner, and enter your account credentials to log in.

  2. After logging in, you will see your username in the top right corner. Click on your username and select “Create a new space.” Choose Gradio as the Space SDK; for Space hardware, we will use CPU.

  3. On the “Create a new space” page, provide a name and description for your Space. Choose a relevant name related to your one-click face swapping software and provide an appropriate description to let others know about your project.

  4. Click “Create new space” to finish creating the Space.
    (Screenshot: creating a Hugging Face Space)

2.3 Upload Code and Applications

To deploy our one-click face swapping software, we will be using the roop repository available on GitHub. This software enables one-click face swapping for both images and videos by simply requiring users to upload a portrait image. For the purpose of this guide, we will focus on face swapping as an example. As we are using the free version of Hugging Face’s space, it currently supports CPU inference only.

To get started, clone the roop repository and create an app.py file.

We will call roop’s core module from app.py using the following code:

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import gradio as gr
import roop.globals
from roop.core import (
    start,
    decode_execution_providers,
    suggest_max_memory,
    suggest_execution_threads,
)
from roop.processors.frame.core import get_frame_processors_modules
from roop.utilities import normalize_output_path
from PIL import Image


def swap_face(source_file, target_file):
    source_path = "input.jpg"
    target_path = "target.jpg"

    # Gradio hands us numpy arrays; save them to disk for roop to read.
    source_image = Image.fromarray(source_file)
    source_image.save(source_path)
    target_image = Image.fromarray(target_file)
    target_image.save(target_path)

    print("source_path: ", source_path)
    print("target_path: ", target_path)

    roop.globals.source_path = source_path
    roop.globals.target_path = target_path
    output_path = "output.jpg"
    roop.globals.output_path = normalize_output_path(
        roop.globals.source_path, roop.globals.target_path, output_path
    )
    roop.globals.frame_processors = ["face_swapper"]
    roop.globals.headless = True
    roop.globals.keep_fps = True
    roop.globals.keep_audio = True
    roop.globals.keep_frames = False
    roop.globals.many_faces = False
    roop.globals.video_encoder = "libx264"
    roop.globals.video_quality = 18
    roop.globals.max_memory = suggest_max_memory()
    roop.globals.execution_providers = decode_execution_providers(["cpu"])
    roop.globals.execution_threads = suggest_execution_threads()

    print(
        "start process",
        roop.globals.source_path,
        roop.globals.target_path,
        roop.globals.output_path,
    )

    for frame_processor in get_frame_processors_modules(
        roop.globals.frame_processors
    ):
        if not frame_processor.pre_check():
            return

    start()
    return output_path


app = gr.Interface(
    fn=swap_face, inputs=[gr.Image(), gr.Image()], outputs="image"
)
app.launch()

We will use Gradio to design the user interface for our program. The good news is that Hugging Face natively supports Gradio, so we don’t need to import any additional libraries.

Once you have written the program, you can push it to your Hugging Face space. After the push, the program will be automatically deployed. All you need to do is wait for the deployment to complete, and then you can start using the one-click face swapping software.

Remember, the deployment process is hassle-free and user-friendly, allowing you to focus on the exciting applications of your software.

(Screenshot: the deployed Space on Hugging Face)

This article will introduce how to use Oracle Cloud to build a highly available, cross-regional system. High availability describes a system that is available most of the time and can keep providing service, even during a hardware failure or a system upgrade. We eliminate single points of failure by deploying instances across multiple availability domains.
We’ll create load balancers in three different regions, each with at least two servers behind it running the same application and exposing the same ports.

Let’s take a look at our architecture design first.
(Diagram: cross-region architecture)

In this architecture, we create three load balancers distributed across the us-west, us-east, and Europe regions,
so we have three regions with identical resources. Traffic is routed to a specific load balancer by rules, and falls back to the other load balancers if that one becomes unavailable.
We’ll show how to create this architecture step by step.
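The route-with-fallback rule can be sketched in a few lines of Python. The region names match the diagram, while the endpoint URLs and the health-check hook are illustrative placeholders:

```python
# Regional load balancer endpoints; the URLs are placeholders.
REGIONS = {
    "us-west": "https://lb.us-west.example.com",
    "us-east": "https://lb.us-east.example.com",
    "europe": "https://lb.europe.example.com",
}


def pick_endpoint(preferred, healthy):
    """Return the preferred region's endpoint if healthy, else fall back.

    `healthy` is a callable taking a region name, so a real implementation
    could plug in an HTTP health check; here it keeps the sketch testable.
    """
    order = [preferred] + [r for r in REGIONS if r != preferred]
    for region in order:
        if healthy(region):
            return REGIONS[region]
    raise RuntimeError("no healthy region available")
```

In practice this selection is done by the DNS traffic-management rules rather than application code, but the fallback logic is the same.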


As an example, we will acquire an SSL certificate for ezioruan.com.

Install certbot

sudo apt update && sudo apt install certbot

Get an SSL certificate via a DNS challenge

sudo certbot -d ezioruan.com --manual --preferred-challenges dns certonly

It will ask you to create a TXT record with your DNS provider.
(Screenshot: the DNS challenge prompt)

After adding the record, press Enter and you will get two files: fullchain.pem and privkey.pem.

(Screenshot: certbot output with the certificate paths)

Import into the load balancer

Paste fullchain.pem into SSL Certificate and privkey.pem into Private Key,

and you’ll be able to use them in the load balancer’s Certificates manager.

What is Data Migration?
Data migration is the process of moving data from one system to another. While this might seem straightforward, it involves a change of storage, database, or application.
Any data migration will involve at least the transform and load steps in the context of the extract/transform/load (ETL) process. This means that extracted data needs to go through a series of functions in preparation, after which it can be loaded into a target location.

A strategic data migration plan should include consideration of these critical factors:

  • Knowing the data — Before migration, source data must undergo a complete audit. Unexpected issues can surface if this step is ignored.
  • Cleanup — Once you identify any issues with your source data, they must be resolved. This may require additional software tools and third-party resources because of the scale of the work.
  • Maintenance and protection — Data undergoes degradation after some time, making it unreliable. This means there must be controls in place to maintain data quality.
  • Governance — Tracking and reporting on data quality is crucial because it enables a better understanding of data integrity. The processes and tools used to produce this information should be highly usable and automate functions where possible.
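To make the transform-and-load idea concrete, here is a toy Python sketch; the record fields and cleanup rules are invented for illustration:

```python
def transform(rows):
    """Clean extracted rows: drop records missing an id, normalize email case."""
    cleaned = []
    for row in rows:
        if not row.get("id"):
            continue  # cleanup: discard unusable records found in the audit
        row["email"] = row.get("email", "").strip().lower()
        cleaned.append(row)
    return cleaned


def load(rows, target):
    """Load cleaned rows into the target store (a dict standing in for a DB)."""
    for row in rows:
        target[row["id"]] = row
    return target


extracted = [
    {"id": 1, "email": "  Alice@Example.COM "},
    {"id": None, "email": "broken@example.com"},  # fails the audit, gets dropped
]
target_db = load(transform(extracted), {})
```

Real migrations replace the dict with an actual target database and the inline rules with the cleanup tooling identified during the audit, but the shape of the pipeline is the same.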

Configuring VLANs and DHCP for User Groups

In a network environment, it is sometimes necessary to segment different user groups to improve security and performance. VLANs (Virtual Local Area Networks) provide a way to do this by dividing a physical network into multiple logical networks. This allows different user groups to be isolated from each other and to have their own set of configurations, such as DHCP (Dynamic Host Configuration Protocol) settings.

In this tutorial, we will go through the steps of configuring VLANs and DHCP for different user groups.

Prerequisites

Before we begin, we assume that you have the following:

  • A Linux machine with a network interface named tap_home that is connected to a physical network
  • Root access to the Linux machine
  • Basic knowledge of networking concepts, such as IP addresses and subnets

network_topology-overview

Creating VLANs

The first step is to create VLANs for each user group. We will create two VLANs, one for each user group, with IDs 100 and 200.

ip link add link tap_home name tap_home.100 type vlan id 100
ip link add link tap_home name tap_home.200 type vlan id 200
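The two commands above follow a simple pattern, so a small helper can generate the full set of commands per user group, including bringing each VLAN interface up with a gateway address for its DHCP scope. The subnet assignments here are illustrative assumptions, not part of the original setup:

```python
# Map each user group's VLAN ID to an assumed gateway address for its subnet.
PARENT = "tap_home"
GROUPS = {100: "192.168.100.1/24", 200: "192.168.200.1/24"}


def vlan_commands(parent, groups):
    """Generate the ip(8) commands that create and bring up each VLAN interface."""
    cmds = []
    for vlan_id, gateway in groups.items():
        iface = f"{parent}.{vlan_id}"
        cmds.append(f"ip link add link {parent} name {iface} type vlan id {vlan_id}")
        cmds.append(f"ip addr add {gateway} dev {iface}")  # gateway for the DHCP scope
        cmds.append(f"ip link set {iface} up")
    return cmds
```

Each generated interface then gets its own DHCP scope handing out addresses from the matching subnet.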

Develop an APISIX plugin in Golang

APISIX allows users to extend its functionality through plugins. Although the Apache APISIX kernel is written in Lua, Apache APISIX supports developing plugins in multiple languages, such as Go and Java. This article will explain in detail how to develop a plugin for Apache APISIX.
We will write an HTTP basic auth plugin in Golang.

Write the plugin

Prerequisites

  • Go (>= 1.15)
  • apisix-go-plugin-runner

Let’s look at the Plugin interface. To write a custom plugin, we need to implement this interface:

type Plugin interface {
    // Name returns the plugin name
    Name() string

    // ParseConf is the method to parse the given plugin configuration. When the
    // configuration can't be parsed, it will be skipped.
    ParseConf(in []byte) (conf interface{}, err error)

    // Filter is the method to handle requests.
    // It is like `http.ServeHTTP`, plus the ctx and the configuration created by
    // ParseConf.
    //
    // When the `w` is written, the execution of the plugin chain will be stopped.
    // We don't use an onion model like Gin/Caddy because we don't serve the whole
    // request lifecycle inside the runner. The plugin is only a filter running at
    // one stage.
    Filter(conf interface{}, w http.ResponseWriter, r pkgHTTP.Request)
}
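Before writing the Go version, it helps to pin down the check our Filter method must perform. Here is the core of HTTP basic auth sketched in Python for clarity (the credentials are placeholders; the real plugin would run this check inside Filter and write a 401 response on failure):

```python
import base64

# Placeholder credentials; a real plugin would take these from ParseConf.
EXPECTED_USER, EXPECTED_PASS = "admin", "secret"


def check_basic_auth(authorization_header):
    """Return True if the Authorization header carries the expected credentials."""
    if not authorization_header or not authorization_header.startswith("Basic "):
        return False
    try:
        # The header value is base64("user:password").
        decoded = base64.b64decode(authorization_header[len("Basic "):]).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return False
    user, _, password = decoded.partition(":")
    return user == EXPECTED_USER and password == EXPECTED_PASS
```

The Go Filter method performs the same steps on `r.Header().Get("Authorization")`, stopping the plugin chain by writing the 401 response when the check fails.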

Oracle provides tools to increase the disk size for Oracle images, but not for Ubuntu, so we need to do it manually.

The script internally calls growpart (from cloud-guest-utils) and resize2fs. So if you are not using LVM etc., to grow the main ext4 partition of a boot volume in Ubuntu, simply run:

sudo growpart /dev/sda 1
sudo resize2fs -z ./sda1.e2undo /dev/sda1

Since Oracle puts the boot partition after the main partition, to be safe you can also install efibootmgr and check the

efibootmgr -v 

output. If yours also looks like

Boot0002* UEFI ORACLE BlockVolume       PciRoot(0x0)/Pci(0x12,0x7)/Pci(0x0,0x0)/SCSI(0,1)N.....YM....R,Y.

then it means it uses the SCSI disk/partition number to locate the boot partition, and it should be safe to reboot now.

Fix an unusable GPG key

When we import GPG keys from a backup and try to use them for decryption/encryption, GPG will report that the key is not usable;
we need to trust the keys first.

Use gpg --list-keys to list all keys.

The output would look like this:

pub   2048R/B660885G 2013-05-16 [expires: 2030-12-31]
uid [] SAP Admin <xxx@xx.com>

We can get the key ID here: B660885G.

Then run gpg --edit-key B660885G and enter trust at the gpg> prompt.

If you’re sure about the authenticity of your key, select trust level 5.

SFTP is a way to transfer files to a Linux server over SSH. It uses an SSH account to connect to the server, and we can set up an SFTP account with different authentication methods.

Create an SFTP account without shell permission

sudo adduser --shell /bin/false sftpuser

Then set a password for the user:

sudo passwd sftpuser

Create an SFTP directory for the user and set the permissions; we’ll use /var/sftp/files here:

sudo mkdir -p /var/sftp/files
sudo chown sftpuser:sftpuser /var/sftp/files

sudo chown root:root /var/sftp
sudo chmod 755 /var/sftp

Create a public key

We need to create the .ssh directory in the user’s home directory and generate the authentication keys:

mkdir /home/sftpuser/.ssh

Generate keys with ssh-keygen -t rsa and set the file location to /home/sftpuser/.ssh/id_rsa.

Set the authorized keys and permissions:

cd .ssh
touch authorized_keys
cat id_rsa.pub >> authorized_keys
cd ..
chmod 700 .ssh
chmod 600 .ssh/authorized_keys

Save the generated private key to your computer and give it 600 permissions:

cat /home/sftpuser/.ssh/id_rsa > sftp.pem
chmod 600 sftp.pem

Update the SFTP user’s config to enable password and public key authentication

Edit /etc/ssh/sshd_config and append the lines below:

Match User sftpuser
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /var/sftp
PermitTunnel no
AllowAgentForwarding no
AllowTcpForwarding no
X11Forwarding no
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile /home/sftpuser/.ssh/authorized_keys

Set PasswordAuthentication to yes to enable password authentication,
and PubkeyAuthentication to yes to enable public key authentication.

Then restart the sshd service:
sudo service sshd restart

Test the SFTP connection from the command line

Password authentication:

sftp sftpuser@<server_host>

Public key authentication:

sftp -i sftp.pem sftpuser@<server_host>