Joseph Cheng (indiejoseph)
Analyze the content of the following articles from an analyst's perspective and synthesize them into an academic-style essay.
- Give the theme, key subjects, and concepts
- Must be written in Cantonese
- Provide a brief introduction for each key figure or concept
- Conclude with a summary of the theme
Article content:
# Hong Kong Film Industry News and Updates
import queue
import time
import threading

import torch
from transformers.generation.logits_process import (
    TopPLogitsWarper,
    RepetitionPenaltyLogitsProcessor,
)

@torch.inference_mode()
def inference_v2(
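Only the function header survives in the preview above, so what follows is a hypothetical sketch of the pattern these imports suggest: chaining TopPLogitsWarper and RepetitionPenaltyLogitsProcessor in a manual sampling loop. The model choice (gpt2), the hyperparameter values, and the `sample` helper are illustrative assumptions, not taken from the gist.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessorList
from transformers.generation.logits_process import (
    TopPLogitsWarper,
    RepetitionPenaltyLogitsProcessor,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed model, for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Chain the two processors: penalize repeats first, then apply nucleus (top-p) filtering.
processors = LogitsProcessorList([
    RepetitionPenaltyLogitsProcessor(penalty=1.2),
    TopPLogitsWarper(top_p=0.9),
])

@torch.inference_mode()
def sample(prompt: str, max_new_tokens: int = 50) -> str:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        logits = model(input_ids).logits[:, -1, :]   # next-token logits
        logits = processors(input_ids, logits)       # repetition penalty + top-p
        probs = torch.softmax(logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
        if next_token.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)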
@indiejoseph
indiejoseph / docker-compose.yml
Created October 25, 2024 09:53
Label Studio
version: "3.9"
services:
  app:
    image: heartexlabs/label-studio:latest
    restart: unless-stopped
    depends_on:
      - db
    expose:
      - "8080"
    environment:
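    # note: `expose` opens 8080 only to other containers on the compose
    # network; a `ports:` mapping would be needed to reach Label Studio
    # from the host (an observation about this config, not part of the gist)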
@indiejoseph
indiejoseph / gist:b1f04b4f71b77ad7bce8b9379fc72e29
Created September 5, 2024 11:03
Register Conda environment as IPython kernel
$ conda activate ml
(ml) $ conda install ipykernel
(ml) $ ipython kernel install --user --name=<any_name_for_kernel>
(ml) $ conda deactivate
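Once registered, the environment appears under the chosen name in Jupyter's kernel picker; running `jupyter kernelspec list` shows every registered kernel if you want to verify or later remove it.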
import torch
import torchaudio

def eval(audio, text):
    # convert audio to 16000 sample rate
    audio = torchaudio.transforms.Resample(orig_freq=44100, new_freq=16000)(torch.tensor(audio).unsqueeze(0)).squeeze()
    # process text
    tokenized_seq = torch.tensor([processor.tokenizer(text, add_special_tokens=True).input_ids]).to(device)
    decoder_input_ids = tokenized_seq[:, 1:]
    decoder_input_ids_right_shifted = tokenized_seq[:, :-1]
    # process audio
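The snippet stops at the audio step. As a hedged sketch only, here is how an evaluation like this might continue for a Whisper-style model, assuming `processor`, `model`, and `device` are defined elsewhere in the gist; everything below is an assumption, not the gist's actual code.

    # hypothetical continuation, assuming a Whisper-style processor and model
    input_features = processor(
        audio.numpy(), sampling_rate=16000, return_tensors="pt"
    ).input_features.to(device)

    # teacher-forced forward pass: feed the right-shifted ids, score the targets
    logits = model(
        input_features, decoder_input_ids=decoder_input_ids_right_shifted
    ).logits

    loss = torch.nn.functional.cross_entropy(
        logits.transpose(1, 2), decoder_input_ids
    )
    return loss.item()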
word jyutping
一剎那 no4
七竅 hiu3
七鰓鰻 soi1
丈母娘 noeng4
bam1
九層塔 taap3
乞兒 haat1
乾坤 kin4
乾闥婆 taat3
const resampling = (audioBuffer: AudioBuffer, targetSampleRate: number): Promise<AudioBuffer> => {
  // the OfflineAudioContext length is in frames at the *target* rate,
  // so scale the source length by the rate ratio
  const length = Math.ceil((audioBuffer.length * targetSampleRate) / audioBuffer.sampleRate);
  const offlineAudioContext = new OfflineAudioContext(1, length, targetSampleRate);
  const source = offlineAudioContext.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(offlineAudioContext.destination);
  source.start();
  return offlineAudioContext.startRendering();
};
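// usage sketch (names are illustrative, not from the gist): downsample a decoded buffer to 16 kHz
// resampling(decodedBuffer, 16000).then((buf) => console.log(buf.sampleRate)); // 16000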
You are asked to come up with a set of 20 diverse and detailed roleplay scenarios. These scenarios will be given to a large language model, and we will evaluate how effectively it completes each roleplay.
Here are the requirements:
1. Ensure the roles and scenarios are diverse, covering different contexts, professions, and situations.
2. The language used for the scenarios should also be diverse. For example, include both formal and informal dialogues.
3. The type of scenarios should be diverse. Include diverse types of interactions like customer service, medical consultations, casual conversations, educational settings, etc.
4. A large language model should be able to complete the roleplay. For example, do not include scenarios that require real-time actions or physical interactions.
5. The scenarios and dialogues should be in **Cantonese**.
6. Each scenario should be detailed, providing a clear context and background. Include relevant information such as the setting and the characters involved.
adb -e shell date $(date +%m%d%H%M%Y.%S)
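This one-liner syncs an Android emulator's clock to the host: `-e` directs adb to the running emulator, and the host-side `$(date +%m%d%H%M%Y.%S)` expands to the current time in the `MMDDhhmm[[CC]YY][.ss]` form that POSIX `date` accepts for setting the clock.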

Exploring the Vision of AI Smart Watch

Here are some of my thoughts on the vision of the AI Smart Watch. I hope you enjoy them.

Introduction

Running a large language model (LLM) on an embedded device is a trending topic in AI. The AI Smart Watch is a typical example of such a device, and a challenging one because of its limited resources. In this article, I will explore the vision behind the AI Smart Watch and a possible solution.

Vision