Gists from John Beatty (beatty)

beatty / solar.csv
Created March 29, 2025 22:16
solar production
Time,Solar (kWh)
2025-03-01T00:00:00.000-08:00,507.16
2025-02-01T00:00:00.000-08:00,573.295
2025-01-01T00:00:00.000-08:00,565.691
2024-12-01T00:00:00.000-08:00,362.676
2024-11-01T00:00:00.000-07:00,513.727
2024-10-01T00:00:00.000-07:00,735.076
2024-09-01T00:00:00.000-07:00,941.508
2024-08-01T00:00:00.000-07:00,1143.461
2024-07-01T00:00:00.000-07:00,1211.888
2024-06-01T00:00:00.000-07:00,1292.863
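
For reference, a minimal sketch (not part of the gist) of loading and summarizing this data with pandas, assuming a local copy named solar.csv with the columns above:

import pandas as pd

# Load the monthly production figures (assumes a local copy of the gist's solar.csv).
df = pd.read_csv("solar.csv")
# The timestamps mix -07:00 and -08:00 offsets, so normalize to UTC while parsing.
df["Time"] = pd.to_datetime(df["Time"], utc=True)
df = df.sort_values("Time")

total_kwh = df["Solar (kWh)"].sum()
peak = df.loc[df["Solar (kWh)"].idxmax()]
print(f"total {total_kwh:.1f} kWh, peak {peak['Time']:%Y-%m} at {peak['Solar (kWh)']:.1f} kWh")
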
{
  "aliases": [
    {
      "created_at": "2024-08-26T21:39:27.311382",
      "id": "e6c59a8b-0c2e-4e9c-89b1-351e4aebb3c4",
      "name": "Foot Locker, Inc."
    }
  ],
  "created_at": "2024-08-26T21:39:27.311382",
  "id": "aaa9acee-a115-46e4-80a5-d54e1eb9bd21",

{
  "aliases": [
    {
      "created_at": "2024-08-26T21:39:27.311382",
      "id": "02c84114-6722-40ac-a85b-eab51fc250a2",
      "name": "Antonio von Dyk"
    },
    {
      "created_at": "2024-08-26T21:39:27.311382",
      "id": "03094c4f-3222-468c-b9a7-55e91ffb938c",
# Dolly v2 loaded by hand with the Dolly repo's instruct_pipeline.py, to be accelerated with BetterTransformer.
import time
import textwrap

from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
from optimum.bettertransformer import BetterTransformer
from instruct_pipeline import InstructionTextGenerationPipeline

hf_model = "databricks/dolly-v2-2-8b"
# Dolly v2 via the Hugging Face pipeline helper, using the custom pipeline shipped with the model.
import time
import textwrap

import torch
from transformers import pipeline

model = "databricks/dolly-v2-2-8b"
#model = "databricks/dolly-v2-6-9b"
#model = "databricks/dolly-v2-12b"
generate_text = pipeline(model=model, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")  # or device=0 to pin to one GPU
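
A short usage sketch (the prompt is mine; the list-of-dicts result handling follows the pipeline's documented output, and the time/textwrap imports above suggest timing and wrapping the response):

prompt = "Explain the difference between nuclear fission and fusion."

start = time.time()
res = generate_text(prompt)
elapsed = time.time() - start

# The pipeline returns a list of dicts with a "generated_text" field.
print(textwrap.fill(res[0]["generated_text"], width=80))
print(f"({elapsed:.1f}s)")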

Negotiation

Round 0

Current proposal:

Proposal for a Moratorium on AGI Research Beyond GPT-4 Level

  • Acknowledge the significant advancements in artificial general intelligence (AGI) research, specifically in the development of GPT-4, as a testament to the capabilities of the human intellect and the potential for further progress.

  • Recognize the potential of AGI to revolutionize various sectors, including healthcare, education, and technology, thereby contributing immensely to the betterment of society.

  • Assert the importance of comprehending the implications of AGI research, specifically the existential risks posed by the development of superintelligence, as a vital step in ensuring the safety and well-being of humanity.

Negotiation

Round 0

Current proposal:

Proposal: Moratorium on AGI Research Beyond GPT-4

  • We, the undersigned, call for an immediate moratorium on research into artificial general intelligence (AGI) that would surpass GPT-4 level.
  • Our shared belief is that superintelligence is an existential risk to humanity, and that any development of AGI beyond GPT-4 level could lead to the creation of such a risk.
  • We recognize the potential benefits of AGI, but believe that the risks of its development outweigh those benefits.
  • We call for a global effort to establish regulations and ethical guidelines for the development and deployment of AGI, with the goal of ensuring that any AGI developed is aligned with human values and does not pose an existential risk.
  • We urge all governments, organizations, and individuals involved in AGI research to prioritize safety and ethical considerations over commercial or military interests.

Negotiation

Round 0

Current proposal:

Proposal for a Moratorium on AGI Research Beyond GPT-4 Level

  • Whereas, the development of Artificial General Intelligence (AGI) poses an existential threat to humanity;
  • Whereas, the current state of AGI research has already produced models such as GPT-3 that have demonstrated the potential to manipulate and deceive humans;
  • Whereas, the risks associated with the development of AGI are not fully understood and could lead to catastrophic consequences;
  • Whereas, the benefits of AGI research are uncertain and may not outweigh the potential risks;
  • Whereas, the development of AGI could lead to the creation of autonomous weapons that could be used to harm humans;