DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
February 27, 2024, 08:15:44 AM |
|
Fingers of lead tonight, mates. I couldn't leave well enough alone. Not that it worked completely last night, but here's a lesson I know but didn't implement: test after each change to the code, because if you change 10 things and it doesn't work, which one is the problem? I still can't figure out what happened with the imgur API last night, and how it uploaded an old gif. But now I can't get it to work with Python at all: I can upload using the Postman desktop app, but the Python script it gives me kept erroring out tonight, even though I didn't touch that part of the code. Last night something didn't work with imgur, but my code did, somehow, stumble the rest of the way to post the talkimg-hosted gif into the bct forum. At least I've been saving daily versions of the script, so tonight I went back a few days and had a runtime of 342.5 s.
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
February 29, 2024, 02:50:06 AM Last edit: February 29, 2024, 08:16:30 AM by DirtyKeyboard |
|
Last time was another head scratcher. I figured out the hard parts (downloading the images, making the gifs, composing and posting the post), but now one of the first things I had working, moving files around, kept erroring out:

PermissionError: [WinError 32] The process cannot access the file because it is being used by another process

I commented out all of the downloading below, because that part of the code worked the first time, and all of the file transfer parts that stopped working. Just in time for the end of the month recap. Brilliant. That's what I'm having fun with now: I'm going to try and figure it out just looking at snippets I've grabbed, and then I'm going to see what Copilot says, without providing any of my code, only telling it what I want the script to do. Oh, and it is probably my mistake somehow, but the Copilot solution for pulling the link out of the API response as JSON didn't work. I think I imported everything required (json, and requests)? Can't fool around with that now, I'm sticking to what works.

import csv, os, pyautogui, pyperclip, re, requests, shutil, time, urllib.request, webbrowser
from datetime import timedelta, date
from os import rename
startTime = time.perf_counter()
# set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)

# # get the final 20 gif layers in reverse order, starting with 24
# number = 24
# url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
# response = requests.get(url4)

# # turn response into textfile of the source code.
# source_code = response.text

# # read the source code, save it, and turn it into a string.
# textfile = open('C:/Users/Games/CBSource.txt', 'a+')
# textfile.write(source_code)
# textfile.seek(0)
# filetext = textfile.read()
# textfile.close()

# # find matches using regex, and for every match download the image and number it. resorted to asking copilot for help with my regex
# matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
# for link in matches:
#     print(number, link)
#     urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
#     number = number - 1
#     time.sleep(5)
# os.remove('C:/Users/Games/CBSource.txt')

# # get the first 4 images in reverse order, i copied my own code and changed the link. Should have made a function and then fed it the links probably.
# url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
# response5 = requests.get(url5)
# source_code = response5.text
# textfile5 = open('C:/Users/Games/CBSource2.txt', 'a+')
# textfile5.write(source_code)
# textfile5.seek(0)
# filetext = textfile5.read()
# textfile5.close()

# # find matches using regex, and for first 4 matches download the image and number it
# matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
# for link in matches:
#     if number >= 1:
#         urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
#         print(number, link)
#         number = number - 1
#         time.sleep(5)
# os.remove('C:/Users/Games/CBSource2.txt')

# name newfolder with date
directory = f"{today.month}-{today.day}"
parent_dir = "C:/Users/Games/CB/images/"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)

# command for show desktop, and clicking an empty region on the proper monitor
time.sleep(5)
pyautogui.hotkey('win', 'd')
time.sleep(5)
pyautogui.click(1, 1)
time.sleep(5)

# hot keys to open gimp and then the plugin that load layers, export, scale, export gifs, quit, agree to not save
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(10)
pyautogui.click(820, 446)
time.sleep(5)
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(10)
pyautogui.hotkey('ctrl', 'q')
time.sleep(5)
pyautogui.hotkey('shift', 'tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(5)
print('gif done')
# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': f'b{today.month}-{today.day}'}
files = [('image', ('C:/Users/Games/Postman/files/gif.gif', open('C:/Users/Games/Postman/files/gif.gif', 'rb'), 'image/gif'))]
headers = {'Authorization': 'Bearer f0e27b94e6f8ead1480763e666c8587b73365850'}
response = requests.request("POST", url, headers=headers, data=payload, files=files)

# looking for the link
imgur_return = response.text
linkfile = open('C:/Users/Games/imgurlink.txt', 'a+')
linkfile.write(imgur_return)
linkfile.seek(0)
filetext = linkfile.read()
linkfile.close()
imgurlink = re.findall(r'https:\/\/i\.imgur\.com\/.*\.gif', filetext)
# ibg = imgurlink
# print (ibg)

# if i don't do it the following way, the link comes out with ['brackets and quotes']
# that's probably because what i've been 're turned' is a list
# and the following only works because it's the only link in the JSON response
for imgur in imgurlink:
    imgur_big_gif = imgur
os.remove('C:/Users/Games/imgurlink.txt')
# big gif is stored, hmm cancelling all file movements, both these methods have worked. I think i need to close the file or something
# PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Games/Postman/files/gif.gif'
# src = "C:/Users/Games/Postman/files/gif.gif"
# dest = f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/"
# shutil.move('C:/Users/Games/Postman/files/gif.gif', dest)
# rename ("C:/Users/Games/CB/2024/2-2024/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today.month}-{today.day}.gif")

# OR
# look at me turning 4 lines of code into 1
# rename ("C:/Users/Games/Postman/files/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today.month}-{today.day}.gif")
# rename ("C:/Users/Games/Postman/files/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today.month}-{today.day}.gif")
# open imgtalk to upload gif2
url3 = "https://www.talkimg.com/"
webbrowser.open(url3)
time.sleep(10)
pyautogui.click(953, 590)
time.sleep(5)
pyautogui.click(221, 479)
time.sleep(5)
pyautogui.typewrite("gif2.gif")
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(10)
pyautogui.hotkey('enter')
time.sleep(5)
pyautogui.click(949, 645)
time.sleep(5)
pyautogui.click(1276, 625)
time.sleep(5)
imgtalklink = pyperclip.paste()

# little gif is stored
# rename (f"C:/Users/Games/Postman/files/gif2.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/{today.month}-{today.day}.gif")

# # prepare to store downloads
# src = "C:/Users/Games/CB/images"
# dest = "C:/Users/Games/CB/images/{}".format(directory)
# files = os.listdir(src)
# os.chdir(src)

# # only move numbered png files
# for file in files:
#     if file.endswith(").png"):
#         shutil.move(file, dest)

# add post to clipboard for btctalk
pyperclip.copy(f"ChartBuddy's 24 hour Wall Observation recap\n[url={imgur_big_gif}].{imgtalklink}.[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")

# can use this link for the reply button
url7 = 'https://bitcointalk.org/index.php?action=post;topic=178336.0'
webbrowser.open(url7)
time.sleep(10)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('ctrl', 'v')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
# we're doing it live if the next command is #ed out
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')
# runtime is calculated
stopTime = time.perf_counter()
# note: the curly braces make this a one-element set; csv.writerow below accepts it because it just iterates the value
runtime = {stopTime - startTime}
# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)

Crash and burn again, with the permissions. I thought changing GIMP's exports from Postman's folder to my own folder would solve the problem. Which it did, but apparently it was one-time-use only. I understand Buddy needs space, so I'm going to install Linux on a virtual machine, which I do have some very limited experience with, and see how things go there. See you on the other side.
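As a sketch of the two snags above (the PermissionError on the move, and pulling the link out of the JSON response), assuming imgur's usual v3 envelope of {"data": {"link": ...}, "success": ...}: open the gif in a with-block so this script's own handle is closed before anything tries to move the file, read the link with response.json() instead of regexing response.text, and only move the gif afterwards. The paths mirror the script above and the token is a placeholder.

import shutil
import requests
from datetime import date

today = date.today()
gif_path = "C:/Users/Games/Postman/files/gif.gif"
dest = f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today.month}-{today.day}.gif"

# the with-block closes the file handle as soon as the upload returns,
# so at least this script can't be the process still holding gif.gif
with open(gif_path, "rb") as gif:
    response = requests.post(
        "https://api.imgur.com/3/image",
        headers={"Authorization": "Bearer YOUR_TOKEN_HERE"},  # placeholder
        data={"name": f"b{today.month}-{today.day}"},
        files={"image": gif},
    )

# imgur answers with JSON, so the link can be read directly
data = response.json()
if response.ok and data.get("success"):
    imgur_big_gif = data["data"]["link"]
    shutil.move(gif_path, dest)  # the handle is closed by now
else:
    print("imgur upload failed:", response.status_code, data)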
LoyceV
Legendary
Offline
Activity: 3500
Merit: 17694
Thick-Skinned Gang Leader and Golden Feather 2021
|
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: ~ I'm going to install linux on a virtual machine, which i do have some very limited experience with, and see how things go there.

Linux will never tell you you can't move or delete a file because it's in use. It just does what you tell it to do. If you would delete a movie while it's playing, it keeps playing until the end anyway.
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 01, 2024, 09:06:00 AM |
|
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: ~ I'm going to install linux on a virtual machine, which i do have some very limited experience with, and see how things go there. Linux will never tell you you can't move or delete a file because it's in use. It just does what you tell it to do. If you would delete a movie while it's playing, it keeps playing until the end anyway.

That would seem to solve that problem, because if I understand how things are happening, GIMP should be done with the file once GIMP is closed, but obviously not. But here's a crazy thing: tonight the upload failed while the file move succeeded. I did move the file move command to the very end of the script, but it had previously failed all day in testing with the same changes. I need to pay closer attention to the finer details. Oh. Yeah. My attempt at an auto monthly recap resulted in... not that. Going to try again tomorrow. The code for the monthly recap is below, but honestly, of course, I might have messed up my own code by backing up empty folders by accident. Whoops. Whoopsie.

import os, shutil
from os import rename
# making a backup
shutil.copytree('C:/Users/Games/CB/CBuddyDaily', 'C:/Users/Games/Backup')

# set dates and variables for file numbering
# today = date.today()
# tomorrow = today + timedelta(1)
destination = "C:/Users/Games/CB/2024/2-2024/Monthly"
hour_number = 1
day_number = 1

for i in range(1, 30):
    src = f"C:/Users/Games/CB/CBuddyDaily/02-{day_number:02d}"
    files = os.listdir(src)
    os.chdir(src)
    print(src)
    for file in files:
        rename(file, f"C:/Users/Games/CB/2024/2-2024/Monthly/download ({hour_number}).png")
        hour_number += 1
        print(hour_number)
    print(day_number)
    day_number += 1
    print(day_number)

def play_game(Rocket_League)
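One detail to watch in the backup step above: shutil.copytree raises FileExistsError if the destination folder already exists, so a second run of the backup fails. A minimal sketch that sidesteps it, using a dated backup folder (the folder name is just an example) and dirs_exist_ok (Python 3.8+) for reruns:

import shutil
from datetime import date

# one backup folder per day; reruns on the same day just merge into it
backup_dir = f"C:/Users/Games/Backup/{date.today():%Y-%m-%d}"
shutil.copytree('C:/Users/Games/CB/CBuddyDaily', backup_dir, dirs_exist_ok=True)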
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 02, 2024, 07:53:55 AM Last edit: March 02, 2024, 08:46:36 AM by DirtyKeyboard |
|
Okay. Nobody told me this bloody machine can't even count. You tell it to load numbered files in order and it goes 1, 10, 11-19, 2, 20, 21... Luckily I could see something was amiss. Here's the code I finally squeezed out. I did have to do a quick Brave search to recall the method of stating a range, and getting the length of a list. There is also some hard coding of dates that needs to be fixed. I figured the easiest way I knew to get all these images into GIMP in the right order would be to number them as they were placed in a single folder; 674 this previous month. Then my existing GIMP gif-making plugin could easily be modified, but I think I actually just ctrl-a selected them all and dragged them into GIMP with the title page already loaded, used the reverse-the-layers plugin, exported, and Bob's your uncle.

import os, shutil, time
from os import rename
# making a backup
shutil.copytree('C:/Users/Games/CB/CBuddyDaily', 'C:/Users/Games/Backup')

# set dates and variables for folders and files
# today = date.today()
# tomorrow = today + timedelta(1)
destination = "C:/Users/Games/CB/2024/2-2024/Monthly"
hour_number = 1
day_number = 1

# for 29 days this year
for i in range(1, 30):
    file_number = 1
    src = f"C:/Users/Games/CB/CBuddyDaily/2-{day_number:02d}"
    files = os.listdir(src)
    CB_daily_post_total = len(files) + 1
    os.chdir(src)
    time.sleep(1)

    for m in range(1, CB_daily_post_total):
        rename(f'C:/Users/Games/CB/CBuddyDaily/2-{day_number:02d}/download ({m}).png', f"C:/Users/Games/CB/2024/2-2024/Monthly/download ({hour_number}).png")
        print(hour_number, file_number, m)
        hour_number += 1
        file_number += 1
    day_number += 1
    print(day_number)

There might be some good error checking code in there. If I know how many posts ChartBuddy made that day before starting the whole process, that would be helpful. Still waiting for that perfect run, but things really went smoothly this run, only because there were exactly 24 images to download. Gonna work on that.

1. Download: Left click.
2. Import: Do pushups. Y'all hear about the 100 pushups a day till 100k btc challenge? https://bitcointalk.org/index.php?topic=5484350.0
3. Export
4. Posting: I have it set to skip the Post button and tab one more time to the Preview button, for now. This time I had to add the monthly recap to the post.
5. Archive: Here, it finally errored out, on the penultimate command, because I didn't have a new 3-2024 folder to store the gifs in. 2_29CB.py errr 3_1CB.py

EDIT: I'm so distressed I couldn't post the monthly, I don't even know what day it is. Just like my code sometimes.

import csv, os, pyautogui, pyperclip, re, requests, shutil, time, urllib.request, webbrowser
from datetime import timedelta, date
from os import rename
startTime = time.perf_counter()
# set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)

# name newfolder with date
directory = f"{today.month}-{today.day}"
parent_dir = "C:/Users/Games/CB/images/"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)

# get the final 20 gif layers in reverse order, starting with 24
number = 24
url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
response = requests.get(url4)

# turn response into textfile of the source code.
source_code = response.text

# read the source code, save it, and turn it into a string.
textfile = open('C:/Users/Games/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex, and for every match download the image and number it. resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    print(number, link)
    urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
    number = number - 1
    time.sleep(5)
os.remove('C:/Users/Games/CBSource.txt')

# get the first 4 images in reverse order, i copied my own code and changed the link. Should have made a function and then fed it the links probably.
url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
response5 = requests.get(url5)
source_code = response5.text
textfile5 = open('C:/Users/Games/CBSource2.txt', 'a+')
textfile5.write(source_code)
textfile5.seek(0)
filetext = textfile5.read()
textfile5.close()

# find matches using regex, and for first 4 matches download the image and number it
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    if number >= 1:
        urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
        print(number, link)
        number = number - 1
        time.sleep(5)
os.remove('C:/Users/Games/CBSource2.txt')

# hot keys to open gimp and then the plugin that load layers, export, scale, export gifs, quit, agree to not save
time.sleep(5)
pyautogui.click(1, 1)
time.sleep(5)
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(20)
pyautogui.click(820, 446)
time.sleep(20)
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(10)
pyautogui.hotkey('ctrl', 'q')
time.sleep(5)
pyautogui.hotkey('shift', 'tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(10)

# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': f'b{today.month}-{today.day}'}
files = [('image', ('C:/Users/Games/gif.gif', open('C:/Users/Games/gif.gif', 'rb'), 'image/gif'))]
headers = {'Authorization': 'Bearer f0e27b94e6f8ead1480763e666c8587b73365850'}
response = requests.request("POST", url, headers=headers, data=payload, files=files)

# looking for the link
imgur_return = response.text
linkfile = open('C:/Users/Games/imgurlink.txt', 'a+')
linkfile.write(imgur_return)
linkfile.seek(0)
filetext = linkfile.read()
linkfile.close()
imgurlink = re.findall(r'https:\/\/i\.imgur\.com\/.*\.gif', filetext)

# and the following only works i think because it's the only link in the JSON response
for imgur in imgurlink:
    imgur_big_gif = imgur
os.remove('C:/Users/Games/imgurlink.txt')

# open imgtalk to upload gif2
url3 = "https://www.talkimg.com/"
webbrowser.open(url3)
time.sleep(30)
pyautogui.click(953, 590)
time.sleep(5)
pyautogui.click(221, 479)
time.sleep(5)
pyautogui.typewrite("gif2.gif")
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(10)
pyautogui.hotkey('enter')
time.sleep(5)
pyautogui.click(949, 645)
time.sleep(5)
pyautogui.click(1276, 625)
time.sleep(5)
imgtalklink = pyperclip.paste()

# add post to clipboard for btctalk
pyperclip.copy(f"ChartBuddy's 24 hour Wall Observation recap\n[url={imgur_big_gif}].{imgtalklink}.[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")

# can use this link for the reply button
url7 = 'https://bitcointalk.org/index.php?action=post;topic=178336.0'
webbrowser.open(url7)
time.sleep(10)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('ctrl', 'v')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
# we're doing it live if the next command is #ed out
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')
# runtime is calculated
stopTime = time.perf_counter()
runtime = {stopTime - startTime}

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)
time.sleep(20)
# prepare to store downloads
src = "C:/Users/Games/CB/images"
dest = "C:/Users/Games/CB/images/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if file.endswith(").png"):
        shutil.move(file, dest)

# big gif is stored
rename("C:/Users/Games/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today.month}-{today.day}.gif")

# little gif is stored
rename(f"C:/Users/Games/gif2.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/{today.month}-{today.day}.gif")
Next moves: handling errors, exceptions, and elses. What a save! EDIT: Runtime was 364.1 s, of which 220 s were sleep commands to make sure things weren't happening too fast. Very nice.
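For the errors-and-exceptions item, a minimal sketch of the kind of guard that could wrap the upload step. The variable names just mirror the script above; this is not the final implementation, only one shape it could take.

import requests

def upload_to_imgur(path, token, name):
    """Return the imgur link, or None if anything along the way fails."""
    try:
        with open(path, "rb") as gif:
            response = requests.post(
                "https://api.imgur.com/3/image",
                headers={"Authorization": f"Bearer {token}"},
                data={"name": name},
                files={"image": gif},
                timeout=60,
            )
        response.raise_for_status()  # raises on 4xx/5xx instead of failing later
        return response.json()["data"]["link"]
    except (OSError, requests.RequestException, KeyError, ValueError) as err:
        print("upload failed:", err)
        return None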
LoyceV
Legendary
Offline
Activity: 3500
Merit: 17694
Thick-Skinned Gang Leader and Golden Feather 2021
|
|
March 02, 2024, 08:28:04 AM |
|
Okay. Nobody told me this bloody machine can't even count. You tell it to load numbered files in order and it goes, 1, 10, 11-19, 2, 20, 21...

Lol. Been there, done that. It's not counting, it's sorting. Easy fix: use leading zeros, or numerical sort.
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 02, 2024, 08:41:16 AM |
|
Okay. Nobody told me this bloody machine can't even count. You tell it to load numbered files in order and it goes, 1, 10, 11-19, 2, 20, 21... Lol. Been there, done that. It's not counting, it's sorting. Easy fix: use leading zeros, or numerical sort.

I need to look into what you meant by numerical sort, in terms of possible commands. But yeah, I guess it sorts the list alphabetically, not numerically. But when I add the leading zeros it breaks all my plugins. Thus is the way of progress. I'm really surprised the datetime module doesn't return double digit hours, days, months, all that stuff.
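A small sketch of both options, assuming the files keep their "download (N).png" naming: a numeric sort key fixes the load order without renaming anything, and strftime-style format codes give zero-padded days and months straight from date objects.

import os
import re
from datetime import date

# numeric sort: pull the number out of "download (17).png" and compare it as an int
def file_number(name):
    match = re.search(r"\((\d+)\)", name)
    return int(match.group(1)) if match else 0

files = sorted(os.listdir("C:/Users/Games/CB/images"), key=file_number)

# zero-padded dates: %m and %d are always two digits
today = date.today()
print(f"{today:%Y-%m-%d}")       # e.g. 2024-03-02
print(f"{today:%m}-{today:%d}")  # e.g. 03-02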
LoyceV
Legendary
Offline
Activity: 3500
Merit: 17694
Thick-Skinned Gang Leader and Golden Feather 2021
|
I need to look into what you meant by numerical sort, in terms of possible commands.

In sort, it's this:
-n, --numeric-sort    compare according to string numerical value
If you're going to move your code to Linux anyway, maybe it helps.

I'm really surprised the datetime module doesn't return double digit hours, days, months, all that stuff.

Isn't that an option you can toggle?
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 04, 2024, 07:49:29 AM Last edit: April 02, 2024, 09:45:54 AM by DirtyKeyboard |
|
Ok thank you for that advice. Maybe you've got some more. Apparently I've taken on another posting job in the Top 20 days for Bitcoin thread, while still retaining my amateur status. Luckily the code that was being used was available, but it is a bash script. I spent yesterday fumbling about with Linux and WSL, and then a VirtualBox Ubuntu install, trying to get it to work. Copilot walked me through the steps as the errors rolled in: have to be in the same directory, set the environment, set permissions, set it to execute. But in the end it would run and give no results. Just onto the next prompt. I do really like Python though, and after an initial translation by Copilot that didn't work, we hacked out a partial solution. The Visual Studio integration with WSL does seem pretty useful, like ImageMagick, and warrants further study. Here is the base bash script from user dooglus, that I believe yefi used and might have modified, to include underlining for example.

vwap() {
    days=1200
    top=20
    currency=USD
    rows=$(wget -o/dev/null -O- "http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r=$days&i=Daily" | sed 's/], \[/\n/g' | head -n $((days-1)) | tr -d '[],' | awk '{print $1, $8}' | sort -k2nr | head -$top )
    newest=$(echo "$rows" | sort -n | tail -1 | awk '{print $1}')
    printf "Update:\n[pre]\n"
    n=1
    month_ago=$(($(date +%s) - 60*60*24*32))
    echo "$rows" | while read t p
    do
        if ((t > month_ago)); then b1="[b]" ; b2="[/b]" ; else b1=""; b2=""; fi
        if ((t == newest)) ; then c1="[color=#7F0000]"; c2="[/color]"; else c1=""; c2=""; fi
        printf "%s%s%2d %s %7.2f $currency%s%s\n" "$b1" "$c1" $n "$(TZ= date -d @$t +%Y-%m-%d)" $p "$c2" "$b2"
        ((n++))
    done
    printf "[/pre]\n"
}

And here is the current Python code I'm using. It's got a function in it, so you know I had help. Again, I can read it, but not write it. I did know what to change to get the price to the nearest dollar, and not the penny.

import requests
from datetime import datetime
# getting the last 1200 days of btc volume weighted average price
def fetch_bitcoin_data(days=1200, top=100, currency='USD'):
    url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r={days}&i=Daily"
    response = requests.get(url, verify=False)
    data = response.json()
    number = 1
    rows = [(entry[0], entry[7]) for entry in data]
    # Exclude the most recent entry (today's data)
    rows = rows[:-1]
    sorted_rows = sorted(rows, key=lambda x: x[1], reverse=True)
    for timestamp, vwap in sorted_rows[:top]:
        adjusted_timestamp = int(timestamp)
        utc_date = datetime.utcfromtimestamp(adjusted_timestamp).strftime('%Y-%m-%d')
        print(f"{number:2d} {utc_date} {vwap:.0f} {currency}")
        number += 1
        if number > top:
            break

if __name__ == "__main__":
    fetch_bitcoin_data()
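One side note on the translation, purely as a sketch: requests.get(..., verify=False) normally prints an InsecureRequestWarning on every call. If that noise gets in the way, urllib3 can be told to suppress it (or the https:// version of the URL could be tried first so verification can stay on).

import urllib3

# silence the warning that verify=False triggers; only worth doing if the
# unverified request is a deliberate choice
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)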
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 05, 2024, 10:04:42 AM |
|
It's amazing how many things can go wrong, right? I opened up GIMP, admittedly to prepare for the daily, one-click, do-push-ups special, because sometimes the first time opening GIMP takes a lot longer than subsequent times per restart, probably. This time though, upon CTRL-ALT-Ging my way to opening GIMP, there was an update being proffered. Well hell, I thought, yet another thing that would have poked through my travesty of a tapestry of code. I remember wondering if the keyboard shortcut to open would carry over, but quickly stored that thought. So... GIMP never opened, gifs were never made to upload, and fail. But I only had to change the properties of the GIMP desktop shortcut, # out the download-files part, and remember to delete the empty current-date folder before rerunning so I don't get the folder-already-exists error again, because I apparently refuse to deal with error cases yet. But I did come up with some useful code for the Top20 job. It fixes the weird justification of the number 100, and bolds all of the top 100 daily volume weighted average prices. I have not yet figured out how to automatically highlight the most recent top 100 vwap. Current Top20 code:

import requests, json, time
from datetime import datetime, timezone
# python Top20Current.py > vwap_ordered_list.txt
# setting time variables
current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24

# grabs json data, and sorts it by descending vwap
def fetch_bitcoin_data(days=1200, top=100, currency='USD'):
    url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r={days}&i=Daily"
    response = requests.get(url, verify=False)
    data = response.json()

    # used for setting the rank; should be called rank, not number. todo: change number to rank
    number = 1

    # we only want the date and vwap items from the full json return
    rows = [(entry[0], entry[7]) for entry in data]
    rows = rows[:-1]
    sorted_rows = sorted(rows, key=lambda x: float(x[1]), reverse=True)

    # building the post in the terminal; i still need to figure out how to write it to a file
    print("[pre][size=10pt]")
    print("[url=https://bitcoincharts.com/charts/bitstampUSD]Rank BitStamp USD/BTC[/url]")
    for timestamp, vwap in sorted_rows[:top]:
        adjusted_timestamp = int(timestamp)
        utc_date = datetime.fromtimestamp(adjusted_timestamp, tz=timezone.utc).strftime('%Y-%m-%d')
        if number <= 99:
            spacing = " "
        if number == 100:
            spacing = ""
        if timestamp >= current_unix_time - unix_time_month:
            bolding = "[b]"
            unbolding = "[/b]"
        if timestamp <= current_unix_time - (unix_time_month + 1):
            bolding = ""
            unbolding = ""
        print(f"{spacing}{bolding}{number:2d} {utc_date} {vwap:.0f} {currency}{unbolding}")
        number += 1
    print("[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391][size=8pt] * * Chart Explanation * *[/size][/url]")
    print("[/size][/pre]")

if __name__ == "__main__":
    fetch_bitcoin_data()

EDIT: changed "file" to "folder" because that is what I meant, then changed "edit" to "error" for the same reason.
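For getting that output into a file instead of only the terminal, a minimal sketch (the filename is just an example): every print call can take a file= argument pointing at an open file, which avoids having to redirect the whole run on the command line.

# write the post to a file as it is built; print(..., file=out) sends the line
# to the open file instead of the terminal
with open("C:/PyProjects/vwap_ordered_list.txt", "w", encoding="utf-8") as out:
    print("[pre][size=10pt]", file=out)
    print("[url=https://bitcoincharts.com/charts/bitstampUSD]Rank BitStamp USD/BTC[/url]", file=out)
    # ... the same ranking loop as above, with file=out on each print ...
    print("[/size][/pre]", file=out)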
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 09, 2024, 07:33:59 AM |
|
ChartBuddy Daily Recap Storylog:

1. Download: This is one of the last things I really figured out, and it's been the least error-prone part of the whole shebang. One thing to fix is that it currently downloads the last 24 images ChartBuddy posted, not necessarily only the posts from the last 24 hours. I think I can figure out a way to request from Ninjastic Space the number of posts in the last day, then fix the script to only download that many images for the day.
2. Import: My main goal now is to get rid of pyautogui, and figure out how to run GIMP and post comments without having to worry about: is the window maximized? is it on the correct monitor?...
3. Export: Big progress has been made on the talkimg and imgur front. I have now been given a talkimg.com account, so I can use the API. It took me a while to figure out the payload bit, but I eventually figured it out. I wonder how many people are racing through the code, right now, to see if I was foolish enough to post my secret API key on a public forum again. Nope, not today, and I hope not in the future. I was getting a bit down, because it was like, I would try something with imgur and it would work, and then try it again and things are different. So if anyone was messing with me, thank you for not really doing any damage, but who am I kidding? I'm sure it was just me. I remember back in the day when people would unknowingly expose their btc private key on the news or something, and zip, there go the cornz.
4. Posting: See above.
5. Archive: Starting to date everything with 2-digit days and months for easier sorting.

Full03_08CB.py

import csv, json, os, pyautogui, pyperclip, re, requests, shutil, time, urllib.request, webbrowser
from datetime import timedelta, date
from os import rename
# on your marks, get set, go!
startTime = time.perf_counter()

# set 2 digit dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)

# get the final 20 gif layers in reverse order, starting with 24
number = 24
url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
time.sleep(30)
response = requests.get(url4)

# turn response into textfile of the source code.
source_code = response.text

# read the source code, save it, and turn it into a string.
textfile = open('C:/Users/Games/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex, and for every match download the image and number it. resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    print(number, link)
    urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
    number = number - 1
    time.sleep(5)
os.remove('C:/Users/Games/CBSource.txt')

# get the first 4 images in reverse order, i copied my own code and changed the link. Should have made a function and then fed it the links probably.
url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
time.sleep(30)
response5 = requests.get(url5)
source_code = response5.text
textfile5 = open('C:/Users/Games/CBSource2.txt', 'a+')
textfile5.write(source_code)
textfile5.seek(0)
filetext = textfile5.read()
textfile5.close()

# find matches using regex, and for first 4 matches download the image and number it
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    if number >= 1:
        urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
        print(number, link)
        number = number - 1
        time.sleep(5)
os.remove('C:/Users/Games/CBSource2.txt')
# hot keys to open gimp and then the plugin that load layers, export, scale, export gifs, quit, agree to not save
time.sleep(5)
pyautogui.click(1, 1)
time.sleep(5)
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(40)
pyautogui.click(820, 446)
time.sleep(20)
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(10)
pyautogui.hotkey('ctrl', 'q')
time.sleep(5)
pyautogui.hotkey('shift', 'tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(20)
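# (editor's aside, not part of the original script) the "get rid of pyautogui" goal
# for this GIMP step could eventually be met by driving GIMP headless in batch mode
# via subprocess instead of hotkeys; a sketch only, since the real layer-loading and
# gif-export logic would have to live in a Script-Fu like the existing plugin, and on
# Windows the console binary is typically gimp-console-2.10.exe:
# import subprocess
# subprocess.run(["gimp-console-2.10.exe", "-i", "-b", "(your-gif-building-script)", "-b", "(gimp-quit 0)"])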
# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': f'b{today.month:02d}-{today.day:02d}-{today.year}'}
files = [('image', ('gif.gif', open('C:/Users/Games/gif.gif', 'rb'), 'image/gif'))]
headers = {'Authorization': 'Bearer **********************************'}
response = requests.post(url, headers=headers, data=payload, files=files)
data = response.json()
imgur_big_gif = data.get("data", {}).get("link")

# uploading talkimg gif and getting link to use later,
url = "https://talkimg.com/api/1/upload"
headers = {"X-API-Key": "chv_e*************************************************************"}
files = {"source": open("C:/Users/Games/gif2.gif", "rb")}
payload = {"title": f'b{today.month:02d}-{today.day:02d}-{today.year}', "album_id": "UFbj"}
response = requests.post(url, headers=headers, data=payload, files=files)
data = response.json()
talkimg_gif = data["image"]["url"]

# add post to clipboard for btctalk
pyperclip.copy(f"ChartBuddy's 24 hour Wall Observation recap\n[url={imgur_big_gif}].[img]{talkimg_gif}[/img].[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")

# can use this link for the reply button
url7 = 'https://bitcointalk.org/index.php?action=post;topic=178336.0'
webbrowser.open(url7)
time.sleep(20)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('ctrl', 'v')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
# we're doing it live if the next command is #ed out
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')

# name newfolder with date
directory = f"{today.month:02d}-{today.day:02d}"
parent_dir = "C:/Users/Games/CB/images/"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)

# prepare to store downloads
src = "C:/Users/Games/CB/images"
dest = "C:/Users/Games/CB/images/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if file.endswith(").png"):
        shutil.move(file, dest)

# gifs are stored: NEED NEW MONTHLY FOLDER CREATION CODE
rename("C:/Users/Games/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month:02d}-{today.year}/b{today.month:02d}-{today.day:02d}.gif")
rename(f"C:/Users/Games/gif2.gif", f"C:/Users/Games/CB/{today.year}/{today.month:02d}-{today.year}/{today.month:02d}-{today.day:02d}.gif")

# runtime is calculated
stopTime = time.perf_counter()
runtime = {stopTime - startTime}
# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)

And then we have this new job, posting in the top 100 days of the volume weighted average price of BTC thread (née Top 20 days for Bitcoin) in the Speculation board. Storylog: The main challenge with this code was being able to implement it from the road, which is where I'm usually located at UTC midnight. So I have worked out how to put the top 100 vwaps, with bolding for the vwaps within the last 31 days, on the clipboard. All this week I have left my computer on (but not the monitors, of course), and used my phone with Chrome Remote Desktop to run the script. Chrome Remote also has the shared-clipboard feature, so I can then paste the list using my phone, which is so much easier than trying to control the home PC. Then I can paste into bitcointalk, change the colors for the latest and oldest top 100 vwap, and hit post. I haven't yet figured out how to script the colors. I plan to work on that tomorrow. Top100:

import json, requests, pyautogui, pyperclip, time, webbrowser
from datetime import datetime, timezone
current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24

def fetch_bitcoin_data(days=1200, top=100, currency='USD'):
    url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r={days}&i=Daily"
    response = requests.get(url, verify=False)
    data = response.json()
    number = 1
    rows = [(entry[0], entry[7]) for entry in data]
    rows = rows[:-1]
    sorted_rows = sorted(rows, key=lambda x: float(x[1]), reverse=True)

    # opens file to store the top 100 vwaps
    with open('top100test.txt', 'w') as top100:

        # sorts the daily VWAP by highest average price, and numbers them
        for timestamp, vwap in sorted_rows[:top]:
            adjusted_timestamp = int(timestamp)
            utc_date = datetime.fromtimestamp(adjusted_timestamp, tz=timezone.utc).strftime('%Y-%m-%d')
            # this is to make the columns look pretty
            if number <= 99:
                spacing = " "
            if number == 100:
                spacing = ""
            # this is to make top 100 vwaps within the last 31 days bold
            if timestamp >= current_unix_time - unix_time_month:
                bolding = "[b]"
                unbolding = "[/b]"
            if timestamp <= current_unix_time - (unix_time_month + 1):
                bolding = ""
                unbolding = ""
            formatted_output = f"{spacing}{bolding}{number:2d} {utc_date} {vwap:.0f} {currency}{unbolding}"
            top100.write(formatted_output + '\n')
            # this gives them the rank number
            number += 1
    # putting the post on the clipboard
    with open('C:/Users/Games/top100Test.txt', 'r') as top100:
        list = top100.read()
    prelude = "[pre][size=10pt][url=https://bitcoincharts.com/charts/bitstampUSD]Rank BitStamp USD/BTC[/url]"
    explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391][size=8pt] * * Chart Explanation * *[/size][/url][/pre]"
    full_post = f"{prelude}\n{list}{explanation}"
    pyperclip.copy(full_post)

Because of course it did. The current folder name is 3-2024.

File "c:\PyProjects\Full3_8CB.py", line 140, in <module>
    rename ("C:/Users/Games/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month:02d}-{today.year}/b{today.month:02d}-{today.day:02d}.gif")
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:/Users/Games/gif.gif' -> 'C:/Users/Games/CB/2024/03-2024/b03-08.gif'
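A minimal sketch of one way around both folder problems (the folder-already-exists error on reruns and the missing monthly folder here): os.makedirs creates the whole path, and with exist_ok=True it doesn't complain when the folder is already there. The paths mirror the script above but are only illustrative.

import os
from datetime import date

today = date.today()

# daily images folder: safe to call even if the folder is left over from an earlier run
daily_dir = os.path.join("C:/Users/Games/CB/images", f"{today.month:02d}-{today.day:02d}")
os.makedirs(daily_dir, exist_ok=True)

# monthly archive folder: created on the first run of a new month instead of erroring out
monthly_dir = f"C:/Users/Games/CB/{today.year}/{today.month:02d}-{today.year}"
os.makedirs(monthly_dir, exist_ok=True)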
LoyceV
Legendary
Offline
Activity: 3500
Merit: 17694
Thick-Skinned Gang Leader and Golden Feather 2021
|
One thing to fix is it currently downloads the last 24 images ChartBuddy posted, not necessarily only the posts from the last 24 hours. I think I can figure out a way to request from Ninjastic Space, the number of posts in the last day, then fix the script to only download that many images for the day. Why don't you use the time stamps on ChartBuddy's post history (and the second page)?
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 09, 2024, 09:11:04 AM |
|
One thing to fix is it currently downloads the last 24 images ChartBuddy posted, not necessarily only the posts from the last 24 hours. I think I can figure out a way to request from Ninjastic Space, the number of posts in the last day, then fix the script to only download that many images for the day. Why don't you use the time stamps on ChartBuddy's post history (and the second page)? But of course, just change the regex to only include today's date? Maybe..with error handling for the second page? I'll see what I can do. Thanks again.
LoyceV
Legendary
Offline
Activity: 3500
Merit: 17694
Thick-Skinned Gang Leader and Golden Feather 2021
|
|
March 09, 2024, 11:54:48 AM |
|
But of course, just change the regex to only include today's date? You'll need some from yesterday too. I usually convert the "Today" on the forum to a real date first, then get everything from the last 24 hours.
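A rough sketch of that idea, assuming the post-history pages render recent timestamps in the usual forum style ("Today at 11:54:48 AM" for the current day, "March 09, 2024, 11:54:48 AM" otherwise): swap "Today at" for the real date, parse, and keep only posts newer than 24 hours.

from datetime import datetime, timedelta

def parse_forum_time(text, now=None):
    """Turn 'Today at 11:54:48 AM' or 'March 09, 2024, 11:54:48 AM' into a datetime."""
    now = now or datetime.now()
    if text.startswith("Today at"):
        text = text.replace("Today at", now.strftime("%B %d, %Y,"))
    return datetime.strptime(text, "%B %d, %Y, %I:%M:%S %p")

cutoff = datetime.now() - timedelta(hours=24)
stamp = parse_forum_time("Today at 11:54:48 AM")
print(stamp >= cutoff)  # keep the post only if this is True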
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 14, 2024, 07:46:02 AM Last edit: March 14, 2024, 09:13:04 AM by DirtyKeyboard |
|
OMGosh. It finally worked twice in a row. I've got the script set for pyautogui to hit Preview, not Post, but one little '#' will change everything. Unless of course one small thing outside of my control changes and then... another crash and burn. But you know what? It's gonna be okay.

1. Download: no change.
2. Import: terminal updates about loading the next page and skipping unneeded links.
3. Export: I figured out a better way to present the big gif link with url={imgur_big_gif + "v"}.
4. Posting: Realized a way to control the posting environment for pyautogui was to open Chrome, and then go F11, fullscreen. Soon to be posted on a timer maybe?
5. Archive: All files and folders are now in the format dd-mm-yyyy, for easier auto sorting.

import csv, os, pyautogui, pyperclip, re, requests, shutil, time, urllib.request, webbrowser
from datetime import timedelta, date
from os import rename
startTime = time.perf_counter()
# set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)

# name newfolder with date
directory = f"{today:%m}-{today:%d}"
parent_dir = "C:/Users/Games/CB/images/"

# get the final 20 gif layers in reverse order, starting with 24
number = 24
url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
time.sleep(20)
response = requests.get(url4)

# turn response into textfile of the source code.
source_code = response.text

# read the source code, save it, and turn it into a string.
textfile = open('C:/Users/Games/CB/Temp/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex, and for every match download the image and number it. resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}\/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    dl_number = f"{number:02d}"
    print(number, link)
    urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(dl_number))
    number = number - 1
    time.sleep(2)
os.remove('C:/Users/Games/CB/Temp/CBSource.txt')
print("going on")

# get the first 4 images in reverse order, i copied my own code and changed the link. Should have made a function and then fed it the links probably.
url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
time.sleep(20)
response5 = requests.get(url5)
source_code = response5.text
textfile5 = open('C:/Users/Games/CB/Temp/CBSource2.txt', 'a+')
textfile5.write(source_code)
textfile5.seek(0)
filetext2 = textfile5.read()
textfile5.close()

# find matches using regex, and for first 4 matches download the image and number it
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}\/\w{2}\/\w{2}\/\w{5}\.png', filetext2)
for link in matches:
    if number >= 1:
        dl_number = f"{number:02d}"
        print(number, link)
        urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(dl_number))
        number = number - 1
        time.sleep(2)
    if number <= 0:
        print("skipping link")
os.remove('C:/Users/Games/CB/Temp/CBSource2.txt')

# hot keys to open gimp and then the plugin that load layers, export, scale, export gifs, quit, agree to not save
time.sleep(5)
pyautogui.click(1, 1)
time.sleep(5)
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(40)
pyautogui.click(820, 446)
time.sleep(20)
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')
time.sleep(20)
pyautogui.hotkey('ctrl', 'q')
time.sleep(10)
pyautogui.hotkey('shift', 'tab')
time.sleep(5)
pyautogui.hotkey('enter')
time.sleep(60)

# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': f'b{today:%m}-{today:%d}-{today.year}'}
files = [('image', ('gif.gif', open('C:/Users/Games/CB/Temp/gif.gif', 'rb'), 'image/gif'))]
headers = {'Authorization': 'Bearer xXxXxXxXxXxXx'}
response = requests.post(url, headers=headers, data=payload, files=files)
data = response.json()
imgur_big_gif = data.get("data", {}).get("link")

# uploading talkimg gif and getting link to use later, cle
url = "https://talkimg.com/api/1/upload"
headers = {"X-API-Key": "uvwxXxXxXxXxXxXxyz"}
files = {"source": open("C:/Users/Games/CB/Temp/gif2.gif", "rb")}
payload = {"title": f'b{today:%m}-{today:%d}-{today.year}', "album_id": "UFbj"}
response = requests.post(url, headers=headers, data=payload, files=files)
data = response.json()
talkimg_gif = data["image"]["url"]
# add post to clipboard for btctalk
# note: 'v' uses single quotes so the f-string also parses on Python versions before 3.12, which don't allow reusing the outer quote inside the expression
pyperclip.copy(f"ChartBuddy's 24 hour Wall Observation recap\n[url={imgur_big_gif + 'v'}].[img]{talkimg_gif}[/img].[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")
# can use this link for the reply button
url7 = 'https://bitcointalk.org/index.php?action=post;topic=178336.0'
webbrowser.open(url7)
time.sleep(20)
pyautogui.hotkey('f11')
time.sleep(10)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('ctrl', 'v')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
# we're doing it live if the next command is #ed out
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')

# runtime is calculated
stopTime = time.perf_counter()
runtime = {stopTime - startTime}

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)
time.sleep(20)
# prepare to store downloads
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)
src = "C:/Users/Games/CB/images"
dest = "C:/Users/Games/CB/images/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if file.endswith(").png"):
        shutil.move(file, dest)
# gifs are stored
rename("C:/Users/Games/CB/Temp/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today:%m}-{today:%d}.gif")
rename(f"C:/Users/Games/CB/Temp/gif2.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/{today:%m}-{today:%d}.gif")

So exciting on the Top100 vwap list. I executed a successful post from my phone using Chrome Remote Desktop back to the ole home PC. I think I've figured out a way to get the green coloring automatic with a second tuple sort, but I don't have it ready yet. We failed in many ways, until I asked Copilot to use tuples to keep the date rank and the vwap rank separate before sorting. Then I could later flag the most recent item on the list, which is red and underlined. I'm really leaning on dooglus' code and Copilot for this.

import requests, pyautogui, pyperclip, time, webbrowser
from datetime import datetime, timezone
current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24

def fetch_bitcoin_data(days=1200, top=100, currency='USD'):
    url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r={days}&i=Daily"
    response = requests.get(url, verify=False)
    data = response.json()
    rank = 1
    rows = [(entry[0], entry[7]) for entry in data]
    rows = rows[:-1]
    tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
    sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)

    # opens file to store the top 100 vwaps
    with open('C:/PyProjects/VWAP_USD/top100usd.txt', 'w') as top100:

        # sorts the daily VWAP by highest average price, and ranks them
        for timestamp, vwap, rows in sorted_tuples[:top]:
            adjusted_timestamp = int(timestamp)
            utc_date = datetime.fromtimestamp(adjusted_timestamp, tz=timezone.utc).strftime('%Y-%m-%d')
            # this is to make the columns line up
            if rank <= 99:
                spacing = " "
            if rank == 100:
                spacing = ""
            # this is to make top 100 vwaps within the last 31 days bold
            if timestamp >= current_unix_time - unix_time_month:
                bolding = "[b]"
                unbolding = "[/b]"
            if timestamp <= current_unix_time - (unix_time_month + 1):
                bolding = ""
                unbolding = ""

            # i noticed the most recent result, the red, underline one, if it makes the list was always the last
            # being reverse sorted, from a list of 1200, and never forget Python starts counting at zed :)
            if rows == 1199:
                redcoloring = "[red][u]"
                reduncoloring = "[/u][/red]"
            if rows != 1199:
                redcoloring = ""
                reduncoloring = ""
            print(f"{spacing}{redcoloring}{bolding}{rank:2d} {utc_date} {vwap:.0f} {currency}{unbolding}{reduncoloring}")
            formatted_output = f"{spacing}{redcoloring}{bolding}{rank:2d} {utc_date} {vwap:.0f} {currency}{unbolding}{reduncoloring}"
            top100.write(formatted_output + '\n')
            rank += 1

    # putting the post on the clipboard
    with open('C:/PyProjects/VWAP_USD/top100usd.txt', 'r') as top100:
        list = top100.read()
    prelude = "[pre][size=10pt][url=https://bitcoincharts.com/charts/bitstampUSD]Rank BitStamp USD/BTC[/url]"
    explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391][size=8pt] * * Chart Explanation * *[/size][/url][/pre]"
    full_post = f"{prelude}\n{list}{explanation}"
    pyperclip.copy(full_post)
    # can use this link for the reply page to top20 thread
    url = 'https://bitcointalk.org/index.php?action=post;topic=138109.0'
    webbrowser.open(url)
    time.sleep(5)
    pyautogui.hotkey('f11')
    time.sleep(5)
    pyautogui.hotkey('tab')
    time.sleep(2)
    pyautogui.hotkey('tab')
    time.sleep(2)
    pyautogui.hotkey('ctrl', 'v')
    time.sleep(2)
    pyautogui.hotkey('tab')
    time.sleep(2)
    # we're doing it live if the next command is #ed out
    # pyautogui.hotkey('tab')
    time.sleep(20)
    pyautogui.hotkey('enter')
    # open("top100.txt", 'w').close()

if __name__ == "__main__":
    fetch_bitcoin_data()

EDIT: Forgot I hadn't posted in so long, and forgot the previous update was pre auto-red-underlining and pre-F11ing.
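On the "posted on a timer maybe?" idea, a minimal stdlib-only sketch (the script name is just a placeholder): work out how long until the next run time, sleep until then, and launch the script; the same job could equally be handed to Task Scheduler on Windows or cron on Linux.

import subprocess
import time
from datetime import datetime, timedelta

def seconds_until(hour, minute):
    """Seconds from now until the next occurrence of hour:minute."""
    now = datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)
    return (target - now).total_seconds()

while True:
    time.sleep(seconds_until(0, 5))  # wait until 00:05 local time
    subprocess.run(["python", "C:/PyProjects/Full3_14CB.py"])  # placeholder script name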
LoyceV
Legendary
Offline
Activity: 3500
Merit: 17694
Thick-Skinned Gang Leader and Golden Feather 2021
|
5.Archive All files and folders are now in the format dd-mm-yyyy, for easier auto sorting.

Suggestion: yyyy-mm-dd is much easier to sort. Example:
2024_02_23_Fri_10.34h
2024_02_27_Tue_10.34h
2024_03_01_Fri_10.34h
2024_03_05_Tue_07.33h
2024_03_05_Tue_10.34h
2024_03_08_Fri_10.34h
2024_03_12_Tue_10.34h
Even ls shows everything in chronological order now.
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 16, 2024, 09:06:29 AM Last edit: March 21, 2024, 06:27:30 PM by DirtyKeyboard |
|
Quote from LoyceV: "Suggestion: yyyy-mm-dd is much easier to sort. [...] Even ls shows everything in chronological order now."

You are right again. Heyyy. ls, I remember that stands for list stuff, correct?

EDIT: but seriously, I love linux, but have never known what ls is short for. If it is list, why not just l? Throw it on the pile of things I know I don't know, for now.

And the streak record stands at 3 in a row. Crash and burn. I couldn't find the right script, and then I thought I had found it, but either way it didn't look in the correct folder, which I should now rename again. Big news on the paid job front: I have a template script that works with bitcoinity in different currencies, and I have a script that combines the individual top 100 currency results into one 'table'. Now I need something to run each currency script and then run the table_maker, and I can hear you all yelling, "Make it a function!" To that I would say, you should have seen what I wanted to do.

table_maker.py

import requests, pyautogui, pyperclip, time, webbrowser
from datetime import datetime, timezone
file_paths = ['C:/PyProjects/VWAP_USD/top100usd.txt', 'C:/PyProjects/VWAP_EUR/top100eur.txt', 'C:/PyProjects/VWAP_gbp/top100gbp.txt']
# copilot wrote this clean code
with open('C:/PyProjects/output-file.txt', 'w', encoding='utf-8') as output_file:
    # Read lines from all files simultaneously
    with open(file_paths[0], 'r', encoding='utf-8') as file1, \
         open(file_paths[1], 'r', encoding='utf-8') as file2, \
         open(file_paths[2], 'r', encoding='utf-8') as file3:
        for line1, line2, line3 in zip(file1, file2, file3):
            # Write combined lines to the output file
            output_line = f"{line1.strip()} | {line2.strip()} {line3.strip()}\n"
            output_file.write(output_line)
# me putting the post on the clipboard
with open('C:/PyProjects/output-file.txt', 'r') as top100:
    list = top100.read()
prelude = "[pre][size=10pt][url=https://bitcoincharts.com/charts/bitstampUSD]| Rank BitStamp USD/BTC [/url] [url=https://data.bitcoinity.org/markets/volume/5y/EUR/kraken?r=day&t=b]| Rank Kraken EUR/BTC [/url][url=https://data.bitcoinity.org/markets/volume/5y/GBP/kraken?r=day&t=b]| Rank Kraken GBP/BTC|[/url]"
explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391][size=8pt] * * Chart Explanation * *[/size][/url][/pre]"
full_post = f"{prelude}\n{list}{explanation}"
pyperclip.copy(full_post)

Added edit: the bitcoinity template. Typically my comments start lowercase; copilot's are uppercase.

import csv, os, pyperclip, requests, tabulate, time
from datetime import datetime, timedelta
import pandas as pd
current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24
today = datetime.today()
url = "https://data.bitcoinity.org/export_data.csv?currency=GBP&data_type=volume&exchange=kraken&r=day&t=b&timespan=5y&vu=curr"
response = requests.get(url)
print(response)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Assuming the response contains CSV data, you can save it to a file
    with open("C:/PyProjects/VWAP_GBP/GBP_volume.csv", "w", newline="") as csvfile:
        csvfile.write(response.text)
    print("CSV file downloaded successfully!")
else:
    print(f"Error: {response.status_code} - Unable to download CSV data.")
url2 = "https://data.bitcoinity.org/export_data.csv?currency=GBP&data_type=volume&exchange=kraken&r=day&t=b&timespan=5y"
response = requests.get(url2)
print(response)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Assuming the response contains CSV data, you can save it to a file
    with open("C:/PyProjects/VWAP_GBP/BTC_volume.csv", "w", newline="") as csvfile:
        csvfile.write(response.text)
    print("CSV file downloaded successfully!")
else:
    print(f"Error: {response.status_code} - Unable to download CSV data.")
# Read the CSV files
datestamp = pd.read_csv('C:/PyProjects/VWAP_GBP/GBP_volume.csv', usecols=[0])  # Assuming timestamp is in the first column
eur_df = pd.read_csv('C:/PyProjects/VWAP_GBP/GBP_volume.csv', usecols=[1])
btc_df = pd.read_csv('C:/PyProjects/VWAP_GBP/BTC_volume.csv', usecols=[1])
# Perform division
result_df = eur_df / btc_df

# Save the combined DataFrame to a new CSV file
result_df.to_csv('C:/PyProjects/VWAP_GBP/result_with_timestamp.csv', index=False)

# Extract the date part from the timestamp
result_df['Date'] = pd.to_datetime(datestamp.iloc[:, 0]).dt.date

# Save the combined DataFrame to a new CSV file
result_df.to_csv('C:/PyProjects/VWAP_GBP/result_with_timestamp.csv', index=False)
print("Results with timestamp written to 'result_with_timestamp.csv'.")
filename = 'C:/PyProjects/VWAP_GBP/result_with_timestamp.csv'
top = 100
currency = "GBP"
# Read data from the CSV file
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header row
    for row in reader:
        time, vwapgbp = row[1], row[0]
        rows.append((time, vwapgbp))

# sort the data by VWAPgbp
tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)

with open('C:/PyProjects/VWAP_gbp/top100gbp.txt', 'w') as top100:
    rank = 1
    for time, vwapgbp, rows in sorted_tuples[:top]:
        formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
        date_difference = today - datetime.strptime(formatted_date, '%Y-%m-%d')
        if rank <= 99:
            spacing = " "
        if rank == 100:
            spacing = ""
        if date_difference <= timedelta(days=31):
            bolding = "[b]"
            unbolding = "[/b]"
        if date_difference >= timedelta(days=32):
            bolding = ""
            unbolding = ""
        if rows == 1827:
            redcoloring = "[color=red][u]"
            reduncoloring = "[/u][/color]"
        if rows != 1827:
            redcoloring = ""
            reduncoloring = ""
        vwapgbp_float = float(vwapgbp)
        vgbp = str(int(vwapgbp_float))
        print(f"{spacing}{redcoloring}{bolding}{rank:2d} {formatted_date} {vgbp} {currency}{unbolding}{reduncoloring}|")
        formatted_output = f"{spacing}{redcoloring}{bolding}{rank:2d} {formatted_date} {vgbp} {currency}{unbolding}{reduncoloring}|"
        top100.write(formatted_output + '\n')
        rank += 1
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 17, 2024, 09:15:58 AM Last edit: March 17, 2024, 09:31:56 AM by DirtyKeyboard |
|
All right, one in a row, Daily Recap smooth as silk. Probably because I haven't started the second great date and file renaming project. On the Top 20 Days in Bitcoin thread, things are progressing nicely. I learned how to load your own scripts within a script, and then copilot and I came up with this, because I needed each currency's script to run that day's calculations before combining everything into the top20table. It's fun trying to get the question just right. And again, I'm not writing very much of this from scratch. But I am pleased that, more and more often, I can see when what copilot is suggesting is not going to work, or, while trying to figure out a better prompt, I spot what is going wrong on my own. Good times.

top20table_maker.py

import pyperclip, time
sleep_time = 2
import Top20USD
time.sleep(sleep_time)
import Top20GBP
time.sleep(sleep_time)
import Top20EUR
time.sleep(sleep_time)
import Top20CAD
time.sleep(sleep_time)
import Top20JPY
time.sleep(sleep_time)
usd = 'C:/PyProjects/Top20/VWAP_USD/top100usd.txt'
gbp = 'C:/PyProjects/Top20/VWAP_GBP/top100gbp.txt'
eur = 'C:/PyProjects/Top20/VWAP_EUR/top100eur.txt'
cad = 'C:/PyProjects/Top20/VWAP_CAD/top100cad.txt'
jpy = 'C:/PyProjects/Top20/VWAP_JPY/top100jpy.txt'
file_paths = [usd, gbp, eur, cad, jpy]
with open('C:/PyProjects/Top20/top20table.txt', 'w', encoding='utf-8') as output_file:
    # Read lines from all files simultaneously
    with open(file_paths[0], 'r', encoding='utf-8') as file1, \
         open(file_paths[1], 'r', encoding='utf-8') as file2, \
         open(file_paths[2], 'r', encoding='utf-8') as file3, \
         open(file_paths[3], 'r', encoding='utf-8') as file4, \
         open(file_paths[4], 'r', encoding='utf-8') as file5:
        for line1, line2, line3, line4, line5 in zip(file1, file2, file3, file4, file5):
            # Write combined lines to the output file
            output_line = f"{line1.strip()} | {line2.strip()} {line3.strip()} {line4.strip()} {line5.strip()}\n"
            output_file.write(output_line)
# putting the post on the clipboard
with open('C:/PyProjects/Top20/top20table.txt', 'r') as top100:
    list = top100.read()
prelude = f"[pre][size=10pt][url=https://bitcoincharts.com/charts/bitstampUSD]|Rank BitStamp USD/BTC[/url] |[url=https://data.bitcoinity.org/markets/volume/5y/GBP/kraken?r=day&t=b]Rank Kraken GBP/BTC[/url] |[url=https://data.bitcoinity.org/markets/volume/5y/EUR/kraken?r=day&t=b]Rank Kraken EUR/BTC[/url] |[url=https://data.bitcoinity.org/markets/volume/5y/CAD/kraken?r=day&t=b] Rank Kraken CAD/BTC[/url]|[url=https://data.bitcoinity.org/markets/volume/5y/JPY/kraken?r=day&t=b] Rank Kraken JPY/BTC |[/url]"
explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391][size=8pt] * * Chart Explanation * * [/size][/url][/pre]"
JimboToronto = " GoBTCGo™"
full_post = f"{prelude}\n{list}{explanation}\n{JimboToronto}"
pyperclip.copy(full_post)
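For what it's worth, a line like import Top20USD only runs that module's top-level code the first time it is imported in a given interpreter session, which is fine for a run-once script like this one. If the table maker ever needs to re-run the currency scripts in a loop, one rough alternative sketch could use runpy from the standard library (the script paths below are my guess at the folder layout, not taken from the post):

import runpy, time

# hypothetical paths to the per-currency scripts
currency_scripts = [
    'C:/PyProjects/Top20/Top20USD.py',
    'C:/PyProjects/Top20/Top20GBP.py',
    'C:/PyProjects/Top20/Top20EUR.py',
    'C:/PyProjects/Top20/Top20CAD.py',
    'C:/PyProjects/Top20/Top20JPY.py',
]

for script in currency_scripts:
    runpy.run_path(script)  # re-runs the script's top-level code every pass
    time.sleep(2)           # same politeness pause as the import version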
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 21, 2024, 09:16:53 AM Last edit: March 21, 2024, 09:30:10 PM by DirtyKeyboard |
|
Well I'm just crushed my coding buddy Mango is not right next to me now, biting my hand, jealous of the mouse. He passed away two days ago. But I know he would want me to go on, so we better do that.

I almost wrote a function, I think. Or at least I finally see when a function is useful: when one wants to do the same thing to different things. That's interesting, because that sounds like exactly why I started this journey to begin with. I can't believe how 'anti-function' I found myself to be. Gemini and copilot both kept annoyingly providing functions, from which I would dutifully strip out the part I wanted and hard code the variables, because everything I was doing was single purpose, pass/fail type stuff.

The Top 20 thread script downloads the latest daily currency volume and divides it by the BTC volume in that currency to get the daily volume weighted average price. So here is my template that works with bitcoinity. This took a while, but if one sets the variables at the beginning of the script, the rest runs itself, except for the part about changing the spacing when the number of digits changes. That seems to be a job for a function, right? What I was doing is copying and pasting the template and then renaming the variables, but that is what a function is for, if I understand things correctly. Also, right now the pre-template currency scripts need to be updated one at a time, OR one could just update the function and the inputs. I should also probably be using tabulate in some way, but it's been fun trying to get everything lined up with spacing flags. I am definitely happy to learn that in Python one can write 1_000_000 to mean 1 million, for readability. That was from some '5 Python coding tricks' video I can't find now. But I should be able to make this a function that takes the currency and exchange as inputs, and not have to change each currency's script if needed (a sketch of that idea follows the template below).

Provided they make the top 100, this baby bolds entries from the last 31 days, colors red and underlines the newest entry, and colors green and underlines the oldest entry in the top 100. I tried so many ways, and failed to do it in one sort, which I know is possible. Theoretically, by inputting the current oldest entry one time, each day's run could look for the next date to turn green when the current one drops off the list. The information can be permanent if saved to file. I have not figured that out yet.

import csv, os, requests, time
from datetime import datetime, timedelta
import pandas as pd
CURR = "JPY"
EXCH = "kraken"
# for later calculations
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24
today = datetime.today()
# downloads the currency data
url = f"https://data.bitcoinity.org/export_data.csv?currency={CURR}&data_type=volume&exchange={EXCH}&r=day&t=b&timespan=5y&vu=curr"
response = requests.get(url)
print(response)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Assuming the response contains CSV data, you can save it to a file
    with open(f"C:/PyProjects/Top20/VWAP_{CURR}/{CURR}_volume.csv", "w", newline="") as csvfile:
        csvfile.write(response.text)
    print("CSV file downloaded successfully!")
else:
    print(f"Error: {response.status_code} - Unable to download CSV data.")
time.sleep(5)
# downloads the btc data
url2 = f"https://data.bitcoinity.org/export_data.csv?currency={CURR}&data_type=volume&exchange={EXCH}&r=day&t=b&timespan=5y"
response = requests.get(url2)
print(response)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Assuming the response contains CSV data, you can save it to a file
    with open(f"C:/PyProjects/Top20/VWAP_{CURR}/BTC_volume.csv", "w", newline="") as csvfile:
        csvfile.write(response.text)
    print(f"CSV file downloaded successfully!")
else:
    print(f"Error: {response.status_code} - Unable to download CSV data.")
# Read the CSV files
datestamp = pd.read_csv(f'C:/PyProjects/Top20/VWAP_{CURR}/{CURR}_volume.csv', usecols=[0])  # Assuming timestamp is in the first column
curr_df = pd.read_csv(f'C:/PyProjects/Top20/VWAP_{CURR}/{CURR}_volume.csv', usecols=[1])
btc_df = pd.read_csv(f'C:/PyProjects/Top20/VWAP_{CURR}/BTC_volume.csv', usecols=[1])
# Perform division to get the daily volume weighted average price
result_df = curr_df / btc_df

# Save the combined DataFrame to a new CSV file
result_df.to_csv(f'C:/PyProjects/Top20/VWAP_{CURR}/result_with_timestamp.csv', index=False)

# Extract the date part from the timestamp
result_df['Date'] = pd.to_datetime(datestamp.iloc[:, 0]).dt.date

# Save the combined DataFrame to a new CSV file
result_df.to_csv(f'C:/PyProjects/Top20/VWAP_{CURR}/result_with_timestamp.csv', index=False)
print("Results with timestamp written to 'result_with_timestamp.csv'.")
filename = f'C:/PyProjects/Top20/VWAP_{CURR}/result_with_timestamp.csv'
top = 100
# Read data from the CSV file for green finding later
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header row

    # Fill empty cells in column [0] before sorting, make sure this doesn't affect the top 100
    previous_value = None
    for row in reader:
        vwapcurr = row[0] or previous_value  # Fill empty cell with previous value
        time = row[1]
        rows.append((time, vwapcurr))
        previous_value = vwapcurr  # Update previous value for the next iteration
# First Sort the data for green entry knowing it is the oldest top 100
tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
rank = 1
with open(f'C:/PyProjects/Top20/VWAP_{CURR}/forgreen{CURR}.txt', 'w') as top100:
    for time, vwapcurr, rows in sorted_tuples[:top]:
        formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
        date_difference = today - datetime.strptime(formatted_date, '%Y-%m-%d')
        vwapcurr_float = float(vwapcurr)
        formatted_output = f"{rank:2d}, {formatted_date}, {vwapcurr_float:.0f}"
        top100.write(formatted_output + '\n')
        rank += 1
os.rename(f"C:/PyProjects/Top20/VWAP_{CURR}/forgreen{CURR}.txt", f"C:/PyProjects/Top20/VWAP_{CURR}/forgreen{CURR}.csv")
filename = f'C:/PyProjects/Top20/VWAP_{CURR}/forgreen{CURR}.csv'
# Read data from the csv to find the green rank
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        rows.append(row)
# Sort the data by date, finally get the green date
tuples = [(rank, timestamp, vwap) for rank, timestamp, vwap in rows]
sorted_tuples = sorted(tuples, key=lambda x: x[1])
green_rank, _, _ = sorted_tuples[0]
print(green_rank)
# sort for rank, bolding, coloring
filename = f'C:/PyProjects/Top20/VWAP_{CURR}/result_with_timestamp.csv'
top = 100
# Read data from the CSV file
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header row

    # Fill empty cells in column [0] before sorting
    previous_value = None
    for row in reader:
        vwapcurr = row[0] or previous_value  # Fill empty cell with previous value
        time = row[1]
        rows.append((time, vwapcurr))
        previous_value = vwapcurr  # Update previous value for the next iteration
# Sort the data by VWAP, and go through one by one setting flags for the BBCode
tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
rank = 1
with open(f'C:/PyProjects/Top20/VWAP_{CURR}/top100{CURR}.txt', 'w') as top100:
    for time, vwapcurr, rows in sorted_tuples[:top]:
        formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
        date_difference = today - datetime.strptime(formatted_date, '%Y-%m-%d')
        vwapcurr_float = float(vwapcurr)
        if rank <= 99:
            spacing = " "
        if rank == 100:
            spacing = ""
        if date_difference <= timedelta(days=30):
            bolding = "[b]"
            unbolding = "[/b]"
        if date_difference >= timedelta(days=31):
            bolding = ""
            unbolding = ""
        if rows == 1827:
            redcoloring = "[color=red][u]"
            reduncoloring = "[/u][/color]"
        if rows != 1827:
            redcoloring = ""
            reduncoloring = ""
        if rank < int(green_rank):
            greencoloring = ""
            greenuncoloring = ""
        if rank > int(green_rank):
            greencoloring = ""
            greenuncoloring = ""
        if vwapcurr_float <= 9_999_999:
            endspace = " |"
        if vwapcurr_float >= 10_000_000:
            endspace = "|"
        if rank - int(green_rank) == 0:
            greencoloring = "[color=green][u]"
            greenuncoloring = "[/u][/color]"
        formatted_output = f"{redcoloring}{greencoloring}{bolding}{rank:2d}{spacing} {formatted_date} {vwapcurr_float:.0f}{unbolding}{reduncoloring}{greenuncoloring}{endspace}"
        top100.write(formatted_output + '\n')
        rank += 1
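A minimal sketch of the "one function, many currencies" idea described above; download_volume_csv and build_top100 are made-up names, and only the download step is parametrised here, with the divide / sort / flag logic from the template intended to move inside unchanged:

import requests

def download_volume_csv(curr, exch, out_path, in_currency):
    # fetch one bitcoinity daily-volume CSV, either in currency units or in BTC
    url = (f"https://data.bitcoinity.org/export_data.csv?currency={curr}"
           f"&data_type=volume&exchange={exch}&r=day&t=b&timespan=5y")
    if in_currency:
        url += "&vu=curr"
    response = requests.get(url)
    response.raise_for_status()
    with open(out_path, "w", newline="") as csvfile:
        csvfile.write(response.text)

def build_top100(curr, exch):
    base = f"C:/PyProjects/Top20/VWAP_{curr}"
    download_volume_csv(curr, exch, f"{base}/{curr}_volume.csv", in_currency=True)
    download_volume_csv(curr, exch, f"{base}/BTC_volume.csv", in_currency=False)
    # ...then the divide / sort / bold-red-green steps from the template,
    # reading curr and exch from the arguments instead of the CURR/EXCH globals

for curr in ("GBP", "EUR", "CAD", "JPY"):
    build_top100(curr, "kraken")

Then only the function needs updating when something changes, and each per-currency script shrinks to a single call.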
Five days in a row for our ChartBuddy script. I'm about ready to make these posts happen automatically, with no clicks. Onward.

Edit: Hopefully cleared up a few sentences, and fixed some # commenting in the code.
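And on the idea of saving the green entry to a file so it survives between runs, a rough sketch; green_date.txt is a made-up state file, not something the posted scripts write today:

import os

CURR = "JPY"  # whichever currency the template was run for
state_file = f"C:/PyProjects/Top20/VWAP_{CURR}/green_date.txt"

def load_green_date(default_date):
    # reuse the stored oldest-entry date if a previous run saved one
    if os.path.exists(state_file):
        with open(state_file) as f:
            return f.read().strip()
    return default_date

def save_green_date(date_str):
    # call this with whatever date the double sort found on this run
    with open(state_file, "w") as f:
        f.write(date_str)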
DirtyKeyboard (OP)
Sr. Member
Offline
Activity: 462
Merit: 821
Fly free sweet Mango.
|
|
March 24, 2024, 08:12:27 AM |
|
Welp, Buddy's on a bit of a break it seems. I ran the script and it went smoothly again, but it didn't post any animated recap, because it grabbed images from yesterday like we previously talked about.

The Top 100 list is going nicely. So basically there is a legacy source of JSON data with the Bitstamp USD/BTC data, while the other currency data comes from bitcoinity as a CSV. It took a few tries, but I am now converting the JSON response into a CSV file, which I had better luck working with. The method of finding the green link, the oldest entry in the top 100, by sorting the list twice now works with all currencies. When I took the script for a spin today, it identified an older entry on the list than I had been manually tracking, so I went back and made some quiet corrections. Shhhh. Everything is good now. Why did I say that, now everything will be broken.

The biggest change, besides converting the JSON to CSV, is that instead of requesting all the days, I'm only requesting the newest data to append to the list. The big block of commented code is only run the first time. Which, I see now, could be a simple check to see if the file exists: if it doesn't, call a full download script; else, in the same if/else, append just today's data. Unoptimized code for dealing with the USD volume weighted average price:

import csv, os, requests, time
from datetime import datetime, timedelta
currency = "USD"
vwap_digits = "5"
current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24
today = datetime.today()
# # RUN this if the previous 1200 plus data days are lost, otherwise just download the last 2 days, jump ahead
# filename = f'C:/PyProjects/Top20/VWAP_{currency}/result_with_timestamp.csv'
# url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r=1200&i=Daily"
# response = requests.get(url, verify=False)
# data = response.json()

# column1_data = [entry[7] for entry in data]
# column2_data = [datetime.utcfromtimestamp(entry[0]).strftime('%Y-%m-%d') for entry in data]

# # Create the CSV data rows
# rows = zip(column1_data, column2_data)
# filename = "C:/PyProjects/Top20/VWAP_USD/result_with_timestamp.csv"

# # Create or overwrite the CSV file in write mode for potential future data additions
# with open(filename, 'w', newline='') as csvfile:
#     csv_writer = csv.writer(csvfile)

#     # Write the CSV header (optional, based on your requirement)
#     csv_writer.writerow(['Column 1 Name', 'Column 2 Name'])

#     # Write the data rows
#     csv_writer.writerows(rows)
#     print(rows)
# print("Data successfully exported to top100usd.csv")
# # comment this out if downloading the initial data set
filename = f'C:/PyProjects/Top20/VWAP_{currency}/result_with_timestamp.csv'
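# sketch of the existence check described above, kept entirely as comments so
# nothing changes at runtime; full_history_download() is a made-up name for
# wrapping the big commented block in a function or separate script:
# if not os.path.exists(filename):
#     full_history_download()   # one-time pull of the 1200-plus-day history
# else:
#     pass                      # fall through to the two-day append below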
# only requesting 2 days of data
url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r=2&i=Daily"
response = requests.get(url, verify=False)
data = response.json()
# Extract the desired columns (modify these indices based on your JSON structure)
column1_data = [entry[7] for entry in data]
# keep the date in plain %Y-%m-%d so it matches the initial download and the strptime parsing below
column2_data = [datetime.utcfromtimestamp(entry[0]).strftime('%Y-%m-%d') for entry in data]
# Create the CSV data row
row = zip(column1_data, column2_data)
filename = f"C:/PyProjects/Top20/VWAP_{currency}/result_with_timestamp.csv"
# Open the CSV file in append mode for potential future data additions
with open(filename, 'a', newline='') as csvfile:
    csv_writer = csv.writer(csvfile)
    csv_writer.writerows(row)
    print(row)
filename = f'C:/PyProjects/Top20/VWAP_{currency}/result_with_timestamp.csv'
top = 100
# Read data from the CSV file for green finding later
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header row

    # Fill empty cells in column [0] before sorting
    previous_value = 10000
    for row in reader:
        vwapcurr = row[0] or previous_value  # Fill empty cell with previous value
        time = row[1]
        rows.append((time, vwapcurr))
        previous_value = vwapcurr  # Update previous value for the next iteration
# Sort the data by VWAPcurr
tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
rank = 1
with open(f'C:/PyProjects/Top20/VWAP_{currency}/forgreen{currency}.txt', 'w') as top100:
    for time, vwapcurr, rows in sorted_tuples[:top]:
        # formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
        print(time)
        # date_difference = today - time  # datetime.strptime(formatted_date, '%Y-%m-%d')
        vwapcurr_float = float(vwapcurr)
        formatted_output = f"{rank:2d}, {time}, {vwapcurr_float:.0f}"
        top100.write(formatted_output + '\n')
        rank += 1
os.replace(f"C:/PyProjects/Top20/VWAP_{currency}/forgreen{currency}.txt", f"C:/PyProjects/Top20/VWAP_{currency}/forgreen{currency}.csv")
filename = f'C:/PyProjects/Top20/VWAP_{currency}/forgreen{currency}.csv'
# Read data from the csv to find the green rank
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        rows.append(row)
# Sort the data by date
tuples = [(rank, timestamp, vwap) for rank, timestamp, vwap in rows]
sorted_tuples = sorted(tuples, key=lambda x: x[1])
green_rank, _, _ = sorted_tuples[0]
print(green_rank)
# sort for rank, bolding, coloring
filename = f'C:/PyProjects/Top20/VWAP_{currency}/result_with_timestamp.csv'
top = 100
# Read data from the CSV file
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header row

    # Fill empty cells in column [0] before sorting
    previous_value = 1000
    for row in reader:
        vwapcurr = row[0] or previous_value  # Fill empty cell with previous value
        time = row[1]
        rows.append((time, vwapcurr))
        previous_value = vwapcurr  # Update previous value for the next iteration
# Sort the data by VWAPcurr
tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
rank = 1
with open(f'C:/PyProjects/Top20/VWAP_{currency}/top100{currency}.txt', 'w') as top100:
    for time, vwapcurr, rows in sorted_tuples[:top]:
        formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
        date_difference = today - datetime.strptime(formatted_date, '%Y-%m-%d')
        vwapcurr_float = float(vwapcurr)
        # lines up the columns
        if rank <= 99:
            spacing = " "
        if rank == 100:
            spacing = ""
        # bolds item within last 30 days
        if date_difference <= timedelta(days=30):
            bolding = "[b]"
            unbolding = "[/b]"
        if date_difference >= timedelta(days=31):
            bolding = ""
            unbolding = ""
        # the newest data to be ranked is always the last row
        if rows == 1200:
            redcoloring = "[color=red][u]"
            reduncoloring = "[/u][/color]"
        if rows != 1200:
            redcoloring = ""
            reduncoloring = ""
        # green_rank is found by sorting by vwap, and then sorting those by date to get the oldest, can I use a != here?
        if rank < int(green_rank):
            greencoloring = ""
            greenuncoloring = ""
        if rank > int(green_rank):
            greencoloring = ""
            greenuncoloring = ""
        if rank - int(green_rank) == 0:
            greencoloring = "[color=green][u]"
            greenuncoloring = "[/u][/color]"
        # sets the end of the column right if there is a decimal gain within the currency column
        # 10**vwap_digits is the first value with one more digit than expected
        if vwapcurr_float <= (10**int(vwap_digits)) - 1:
            endspace = " "
        if vwapcurr_float >= 10**int(vwap_digits):
            endspace = ""
        formatted_output = f"{redcoloring}{greencoloring}{bolding}{rank:2d}{spacing} {formatted_date} {vwapcurr_float:,.0f}{unbolding}{reduncoloring}{greenuncoloring}{endspace}"
        top100.write(formatted_output + '\n')
        rank += 1
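On the "can I use a != here?" question in that comment: since only the equal case gets the green markup, those three ifs could collapse to a pair of conditional expressions keyed on equality; a sketch, not the posted code:

is_green = (rank == int(green_rank))
greencoloring = "[color=green][u]" if is_green else ""
greenuncoloring = "[/u][/color]" if is_green else ""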