Author Topic: C and Posix expert opinion needed: popen, fork, makefifo, signals  (Read 1903 times)
Hyena (OP)
March 25, 2017, 10:31:58 AM
Merited by ABCbits (3)
 #1

I am building a totally single-threaded C++ wallet manager that uses C signals. However, I also need to call Bitcoin RPCs in a non-blocking way, so I devised a clever way of using popen with an ampersand (&) at the end of the shell command, in which I call curl to do the RPC for me. The spawned pipeline eventually sends the RPC response back to my wallet manager over netcat / TCP. The problem is that signrawtransaction can sometimes take an argument that is longer than ARG_MAX on Linux (the maximum command line length), so popen will fail if I pass the raw transaction hex as a command line argument. What are my options for overcoming this limitation while keeping my wallet manager single-threaded and signal safe?

I investigated using anonymous pipes, so that before popen (in write mode) I'd call fork and send the raw transaction hex to the child over its stdin instead of the command line. However, conceptually fork would make my program multi-process, which I don't like. Also, the child process would inherit the signal handlers and would probably receive SIGALRM really often, because my main process gets it 8 times per second. So, what are my options here? I could use connect and craft my own HTTP request for the Bitcoin wallet's RPC, but I don't want to block my program until connect finishes, and I don't want to implement HTTP-protocol-related stuff in my code.

The option I find most suitable in my situation is to use named pipes (mkfifo). I create a named pipe with a random name in the tmp folder, then call popen with a command that takes its stdin from that named pipe, then write the raw transaction hex into the pipe and continue with my main program immediately. The popen command has an ampersand (&) at the end, so it does not make my main program hang. Is there anything badly wrong with such architectural choices? Is there anything I might not have thought of? Perhaps named pipes are somehow discouraged or slow? What chmod should I use on the named pipe for maximum privacy? I still have to open the named pipe in my main program, and the open system call itself could block. Is it still faster than using connect? My main concern here is slowing down the main process with blocking system calls. The curl requests can be slow, I don't care about those, but the main process should use as few potentially blocking system calls as possible.
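To make the plan concrete, below is a minimal sketch of what I have in mind (untested; the FIFO name and URL are placeholders and the curl auth options are omitted):
Code:
// Sketch of the plan: create a private FIFO, background a curl pipeline
// that reads the request body from it, then feed the body in from the
// main program.
#include <sys/stat.h>   // mkfifo
#include <fcntl.h>      // open
#include <unistd.h>     // write, close, unlink
#include <cstdio>       // popen, pclose
#include <string>

bool rpc_via_fifo(const std::string &body) {
    const char *path = "/tmp/wm_fifo_12345";     // use a random name in practice
    if (mkfifo(path, 0600) == -1) return false;  // 0600: owner-only access

    // Trailing '&' detaches the pipeline, so pclose() returns immediately.
    std::string cmd = std::string("curl -s --data-binary @- ")
                    + "http://127.0.0.1:8332/ < " + path + " &";
    FILE *fp = popen(cmd.c_str(), "r");
    if (!fp) { unlink(path); return false; }
    pclose(fp);

    // Caveat: open(O_WRONLY) blocks until the reader opens its end, and
    // write() can block once the pipe buffer fills up.
    int fd = open(path, O_WRONLY);
    if (fd == -1) { unlink(path); return false; }
    write(fd, body.data(), body.size());
    close(fd);
    unlink(path);
    return true;
}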

Thank you in advance, whoever you are who can give constructive feedback to my problem/solution.

2112
March 25, 2017, 08:50:02 PM
Merited by ABCbits (1)
 #2

curl (the program) has a way of passing arguments through a file instead of the command line. Something like "curl -d @filename ...". I recall being able to easily pass megabyte-long JSON-RPC queries without a problem by writing them into a temporary file first. This wasn't anything Bitcoin-related; it was for a closed-source load-balancer from F5 Networks.
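From memory, the shape of it was roughly this (ignoring your non-blocking requirement for the moment; the URL and file handling are illustrative):
Code:
// Rough sketch: dump the JSON-RPC body into a unique temporary file and let
// curl read it with -d @file, so the body never touches the command line.
#include <cstdio>       // fdopen, fwrite, fclose, popen, pclose
#include <cstdlib>      // mkstemp (POSIX)
#include <unistd.h>     // unlink
#include <string>

bool rpc_via_tempfile(const std::string &json_body) {
    char path[] = "/tmp/rpcbodyXXXXXX";
    int fd = mkstemp(path);              // creates the file with mode 0600
    if (fd == -1) return false;
    FILE *f = fdopen(fd, "w");
    fwrite(json_body.data(), 1, json_body.size(), f);
    fclose(f);

    std::string cmd = std::string("curl -s -d @") + path
                    + " http://127.0.0.1:8332/";
    FILE *fp = popen(cmd.c_str(), "r"); // read the response from fp as usual
    if (fp) pclose(fp);                  // pclose() waits for curl to finish
    unlink(path);                        // safe: curl is done by this point
    return fp != nullptr;
}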

Your architecture certainly looks original, but I probably just don't understand your constraints well enough. popen() is just a wrapper around the pipe(), fork() and exec() calls; it seems like using them directly would make the whole thing easier to understand.
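Something like this sketch (untested, error handling mostly elided; it also shows where you would reset the inherited SIGALRM handler you were worried about):
Code:
// Bare-bones pipe()/fork()/exec(): the child resets the inherited SIGALRM
// handler, wires the pipe's read end to stdin and execs curl; the parent
// writes the request body and moves on.
#include <unistd.h>
#include <signal.h>
#include <string>

pid_t spawn_curl(const std::string &body) {
    int fds[2];
    if (pipe(fds) == -1) return -1;
    pid_t pid = fork();
    if (pid == 0) {                              // child
        signal(SIGALRM, SIG_DFL);                // drop the inherited handler
        dup2(fds[0], STDIN_FILENO);              // read end becomes stdin
        close(fds[0]);
        close(fds[1]);                           // don't hold the write end open
        execlp("curl", "curl", "-s", "--data-binary", "@-",
               "http://127.0.0.1:8332/", (char *)NULL);
        _exit(127);                              // only reached if exec failed
    }
    close(fds[0]);                               // parent keeps the write end
    write(fds[1], body.data(), body.size());     // may block past the pipe buffer
    close(fds[1]);
    return pid;                                  // reap with waitpid() later
}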

I'm writing this on a decidedly non-POSIX tablet, so I can't even look up my old notes.

Hyena (OP)
March 25, 2017, 09:22:32 PM
Merited by ABCbits (1)
 #3

Quote from: 2112 on March 25, 2017, 08:50:02 PM
curl (the program) has a way of passing arguments through a file instead of the command line. Something like "curl -d @filename ...". I recall being able to easily pass megabyte-long JSON-RPC queries without a problem by writing them into a temporary file first. This wasn't anything Bitcoin-related; it was for a closed-source load-balancer from F5 Networks.

Your architecture certainly looks original, but I probably just don't understand your constraints well enough. popen() is just a wrapper around the pipe(), fork() and exec() calls; it seems like using them directly would make the whole thing easier to understand.

I'm writing this on a decidedly non-POSIX tablet, so I can't even look up my old notes.

Ok thanks for the input.

Here are my architectural constraints. I want to keep the program single-threaded on purpose: it's way easier to develop, maintain, debug and understand a single-threaded program than a multi-threaded one. I use a main loop and epoll to listen for incoming TCP connections from localhost; the wallet manager has a CLI over TCP/IP. The program has to be self-contained, with only trivial dependencies, so I am not using any big fat libraries and whatnot. I use SIGALRM to interrupt epoll_pwait, for example, so that the main loop of my program can maintain a stable FPS. The latter makes it very important to avoid the potentially indefinite blocking caused by some system calls. The project has to run on popular Linux platforms, so I try to keep it POSIX compliant. I believe I can still reap the benefits of parallelism even though my program is single-threaded: I achieve it with the help of the OS, and I'd rather spawn a new, independent and isolated process than a new thread to perform tasks in parallel.

The curl program can indeed get its request data and config parameters from stdin; I believe the filename would be - in that case (the minus sign indicates stdin). I was sort of hoping that popen provides more isolation from the hassles that come with multithreading. For example, I assume that popen does not expose me to file/socket descriptor leaks and the various signal-handling pitfalls of a multithreaded context. If I were to use the fork-and-pipe approach, I'd have to disable the signal handlers in the child process, close the sockets and whatnot (it gets real messy real fast). So that's why I'm using popen (with & at the end of the command line). But if this is a fallacy, I'd be glad if you enlightened me about it.

Below is the problematic function:
Code:
void TREASURER::bitcoin_rpc(const char *method, const nlohmann::json* params) {
    /*
     *  Instead of making a blocking cURL request here we are spawning a child
     *  process with popen so that we can carry on with the main program while
     *  the request is being executed. When the child process finishes it will
     *  connect back to the main server providing us the response from Bitcoin RPC.
     *  This clever trick achieves asynchronous HTTP requests without using threads
     *  in our main process.
     */
    if (manager->get_global("auth-cookie") == nullptr) {
        manager->bug("Unable to execute Bitcoin RPC '%s': cookie not found.", method);
        return;
    }

    nlohmann::json json;
    json["jsonrpc"] = "1.0";
    json["id"] = method;
    json["method"] = method;
    if (params) json["params"] = *params;
    else        json["params"] = nlohmann::json::array();
    //std::cout << json.dump(4) << std::endl;

    std::string cfg;
    cfg.append("--url http://127.0.0.1:8332/\n");
    cfg.append("--max-time 10\n");
    cfg.append("-u ");
    cfg.append(manager->get_global("auth-cookie"));
    cfg.append(1, '\n');
    cfg.append("-H \"content-type: text/plain;\"\n");
    cfg.append("--data-binary @-\n");
    cfg.append(json.dump());

    std::string hex;
    str2hex(cfg.c_str(), &hex);

    std::string command = "printf \"%s\" \"";
    command.append(hex);
    command.append(1, '\"');
    command.append(" | xxd -p -r ");
    command.append(" | curl -s --config - ");
    command.append(" | xargs -0 printf 'su\nsend ");
    command.append(std::to_string(id));
    command.append(" %s\nexit\nexit\n'");
    command.append(" | netcat -q -1 localhost ");
    command.append(manager->get_tcp_port());
    command.append(" > /dev/null 2>/dev/null &");

    FILE *fp = popen(command.c_str(), "r"); // Open the command for reading.
    if (!fp) manager->bug("Unable to execute '%s'.\n", command.c_str());
    else {
        pclose(fp);
        manager->vlog("Bitcoin RPC ---> %s", method);
    }
}

The above code fails if the command is too long. Strangely, the maximum length of the command is not equal to ARG_MAX; it's some 30 times less than that  Huh

If I could figure out how to programmatically get the real maximum command line length popen can handle, then I could implement a fallback for really long Bitcoin RPC arguments (such as raw transaction hex). The fallback would first call mkfifo to create a named pipe in /tmp, then spawn a curl process reading from that FIFO with popen (& at the end), and then write the Bitcoin RPC into the FIFO from the main program.
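The closest I have found to querying the limits at run time is below; the per-argument cap is my assumption (Linux's MAX_ARG_STRLEN, 32 pages, i.e. 128 KiB with 4 KiB pages), not a POSIX guarantee, but it would explain why the usable command length is so much smaller than ARG_MAX:
Code:
// Probe the limits at run time. ARG_MAX (args + environment combined) is
// available via sysconf(); the per-argument cap below is a Linux-specific
// assumption (MAX_ARG_STRLEN = 32 pages), not exposed by POSIX.
#include <unistd.h>
#include <cstdio>

int main() {
    long arg_max = sysconf(_SC_ARG_MAX);
    long page    = sysconf(_SC_PAGESIZE);
    std::printf("ARG_MAX         = %ld\n", arg_max);
    std::printf("per-arg cap (?) = %ld\n", 32 * page);  // Linux-specific guess
    return 0;
}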

2112
March 25, 2017, 10:04:16 PM
Last edit: March 25, 2017, 10:17:20 PM by 2112
Merited by ABCbits (3)
 #4

Here are my comments on your elaboration:

1) Pipes (both anonymous and named) become write-blocking beyond something like PIPE_MAX bytes. So if you are thinking of using them to avoid long blocks, you will be disappointed. IIRC this is something on the order of 16 kB. (A partial dodge via non-blocking I/O is sketched at the end of this post.)

2) While I understand your reservations about multithreaded programming, I want to remind you that your architecture just pushes the problem up to multitasking programming. You may be pushing against the maximum number of processes per OS image (something like PROCESS_MAX in POSIX). You may also find yourself RAM-constrained in your multitasking solution earlier than in an equivalent multithreaded solution. In your solution, multitasking is just a more expensive way of achieving multithreading, by sharing fewer resources and having more isolation.

Edit: I don't know the further specifics of your solution. In my case (an F5 load-balancer with mixed operating systems: Linux plus the proprietary TMOS (Traffic Management Operating System)) the net result was that high-end F5 boxes were brought to their knees by a single, rather low-end office Windows box with a Pentium 4. The F5 solution forced a multitasking architecture through its mixture of proprietary software and hardware, whereas on Windows I could just use the standard multithreading primitives in the Microsoft-proprietary libraries plus whatever gains were available from the Pentium 4's hyperthreading.
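Regarding point 1: if you do go the FIFO route anyway, the usual dodge is non-blocking I/O, something like this sketch (from memory, untested):
Code:
// O_NONBLOCK on the write end: open() fails with ENXIO instead of blocking
// when no reader has the FIFO open yet, and write() returns EAGAIN instead
// of stalling once the pipe buffer is full.
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>

int open_fifo_writer_nonblocking(const char *path) {
    int fd = open(path, O_WRONLY | O_NONBLOCK);
    if (fd == -1 && errno == ENXIO) {
        // No reader yet: retry from the main loop instead of blocking here.
    }
    return fd;
}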

Hyena (OP)
March 26, 2017, 05:47:08 PM
 #5

Quote from: 2112 on March 25, 2017, 10:04:16 PM
Here are my comments on your elaboration:

1) Pipes (both anonymous and named) become write-blocking beyond something like PIPE_MAX bytes. So if you are thinking of using them to avoid long blocks, you will be disappointed. IIRC this is something on the order of 16 kB.

This is true, but since the spawned child process immediately starts consuming bytes from that FIFO, it shouldn't have much of an impact.

Quote from: 2112 on March 25, 2017, 10:04:16 PM
2) While I understand your reservations about multithreaded programming, I want to remind you that your architecture just pushes the problem up to multitasking programming. You may be pushing against the maximum number of processes per OS image (something like PROCESS_MAX in POSIX). You may also find yourself RAM-constrained in your multitasking solution earlier than in an equivalent multithreaded solution. In your solution, multitasking is just a more expensive way of achieving multithreading, by sharing fewer resources and having more isolation.

Yes, that's exactly what I'm doing. I'm sacrificing some resources to achieve greater isolation and a simpler codebase.

Quote from: 2112 on March 25, 2017, 10:04:16 PM
Edit: I don't know the further specifics of your solution. In my case (an F5 load-balancer with mixed operating systems: Linux plus the proprietary TMOS (Traffic Management Operating System)) the net result was that high-end F5 boxes were brought to their knees by a single, rather low-end office Windows box with a Pentium 4. The F5 solution forced a multitasking architecture through its mixture of proprietary software and hardware, whereas on Windows I could just use the standard multithreading primitives in the Microsoft-proprietary libraries plus whatever gains were available from the Pentium 4's hyperthreading.

I think I'll stick with popen and just implement a fallback to named pipes in case the argument given to popen is too long (I'd still have to find out how to determine that maximum length, though). In this project speed and memory are not so important, because I'm not making very many RPCs anyway: perhaps 10 per minute, plus 4 per incoming BTC TX. The good thing about this program is that adding more instances of it trivially scales the system up for a greater workload.

kano
March 28, 2017, 10:43:06 AM
 #6

Just write multi-threaded code ... ... ... then you can "really" trivially scale it up to whatever you like.

(Yes, I do have a current large multi-threaded system I wrote managing my pool, ckdb. Threads are adjustable at start-up and at runtime by simply telling it which threads to add or subtract, and it's running at the moment: 1 process using 32 threads across 20 different sections of code, on a 12-core, 24-thread server.)

Hyena (OP)
March 28, 2017, 10:56:02 AM
Merited by ABCbits (1)
 #7

Quote from: kano on March 28, 2017, 10:43:06 AM
Just write multi-threaded code ... ... ...

A Bitcoin wallet manager is not the kind of application where multithreading will give you any real advantage. Such programs should be designed to be modular and scalable by adding more instances of the same program rather than spawning more threads from a single process.

The server is deliberately a single-threaded process, to avoid the excess complexity that comes with multi-threaded applications. We can safely rely on signals, easily debug the code with gdb, and will most likely avoid a great deal of race conditions. To compensate for the lack of threads, we spawn child processes with popen and let them connect back to us over TCP to report their results. This works well for HTTP requests. If possible, let the OS handle the dirty part of multitasking for us.

I have mostly finished with this issue actually. More about it here: https://www.allegro.cc/forums/thread/616818/1029331

I check the request length, and if it exceeds the maximum argument length popen can handle, I fall back to named pipes.

kano
March 28, 2017, 11:02:43 AM
 #8

Sounds more like you are saying that you don't understand multi-threaded coding and are trying to work around that, rather than working out how to do it and thus have the advantage of it.

Hyena (OP)
March 28, 2017, 11:08:30 AM
 #9

Quote from: kano on March 28, 2017, 11:02:43 AM
Sounds more like you are saying that you don't understand multi-threaded coding and are trying to work around that, rather than working out how to do it and thus have the advantage of it.

I understand it rather well  Wink In fact, my master's thesis in software engineering relies heavily on multithreading in C++11.

But I get the point: you just don't like me, probably because I like Bitcoin Unlimited and hate SegWit, so you are being sour here.

kano
March 28, 2017, 11:47:15 AM
 #10

Quote from: Hyena on March 28, 2017, 11:08:30 AM
Quote from: kano on March 28, 2017, 11:02:43 AM
Sounds more like you are saying that you don't understand multi-threaded coding and are trying to work around that, rather than working out how to do it and thus have the advantage of it.
I understand it rather well  Wink In fact, my master's thesis in software engineering relies heavily on multithreading in C++11.

But I get the point: you just don't like me, probably because I like Bitcoin Unlimited and hate SegWit, so you are being sour here.
LOL - throwing around a "master's thesis" - to try to make your bad choice look better Smiley

I'll take a guess that your thesis got pretty poor marks, since anyone who actually understands multi-threading would understand how to do it properly and the advantages of that, instead of pretending to "know better" and making the bad choice of using multiple processes.

As for your BU inferiority complex, lulz, I'm a rather loud opponent of SegWit, but I don't think much of BU either, for many reasons including its stated low code quality compared to Core.
But then again, IMO Core is pretty hopeless at using locking and multi-threading too.

Hyena (OP)
March 28, 2017, 12:32:10 PM
 #11

Quote from: kano on March 28, 2017, 11:47:15 AM
Quote from: Hyena on March 28, 2017, 11:08:30 AM
Quote from: kano on March 28, 2017, 11:02:43 AM
Sounds more like you are saying that you don't understand multi-threaded coding and are trying to work around that, rather than working out how to do it and thus have the advantage of it.
I understand it rather well  Wink In fact, my master's thesis in software engineering relies heavily on multithreading in C++11.

But I get the point: you just don't like me, probably because I like Bitcoin Unlimited and hate SegWit, so you are being sour here.
LOL - throwing around a "master's thesis" - to try to make your bad choice look better Smiley

I'll take a guess that your thesis got pretty poor marks, since anyone who actually understands multi-threading would understand how to do it properly and the advantages of that, instead of pretending to "know better" and making the bad choice of using multiple processes.

As for your BU inferiority complex, lulz, I'm a rather loud opponent of SegWit, but I don't think much of BU either, for many reasons including its stated low code quality compared to Core.
But then again, IMO Core is pretty hopeless at using locking and multi-threading too.

I'm not much of a BU fan either; it's just that so far BU has been the only serious opponent to Core, so it's the lesser evil.

As for your remarks regarding my thesis, you are replying like a child, but who knows, maybe you're just suffering from autism?

theochino
April 01, 2017, 03:59:29 AM
 #12

Quote from: Hyena on March 25, 2017, 10:31:58 AM
I am building a totally single-threaded C++ wallet manager that uses C signals.

Just out of curiosity, what is wrong with using native curl?
Quote
#include <curl/curl.h>

Hyena (OP)
April 02, 2017, 10:28:06 AM
Merited by ABCbits (2)
 #13

Quote from: theochino on April 01, 2017, 03:59:29 AM
Quote from: Hyena on March 25, 2017, 10:31:58 AM
I am building a totally single-threaded C++ wallet manager that uses C signals.

Just out of curiosity, what is wrong with using native curl?
Quote
#include <curl/curl.h>

Simply put, curl's native simple (easy) interface blocks the calling thread until the request is complete, and it doesn't allow making multiple requests simultaneously. I figured it would be really neat to spawn the curl requests, with the help of the OS, as independent child processes, because my daemon accepts incoming TCP connections by default: the curl instances just have to pipe their output to my daemon over TCP when the request finishes. It works really well for small requests and responses, but whenever the size of the request or response exceeds the maximum command line argument length allowed by the OS, it gets complicated.

I was advised in another forum to use curl's multi interface.

This is actually a very interesting idea and I would definitely use it if I had prior experience with curl's multi interface. However, since I haven't used it before, I'm sort of reluctant to take on its learning curve without a guarantee that it will get things done the way I want.
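For the record, my current understanding of the shape it would take (an untested sketch condensed from the libcurl documentation; the URL is a placeholder and the response callback is omitted):
Code:
// libcurl multi interface sketch: add an easy handle, then drive transfers
// with non-blocking curl_multi_perform() calls from the existing main loop
// instead of spawning processes.
#include <curl/curl.h>
#include <string>

void start_rpc(CURLM *multi, const std::string &body) {
    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, "http://127.0.0.1:8332/");
    curl_easy_setopt(easy, CURLOPT_COPYPOSTFIELDS, body.c_str());
    curl_multi_add_handle(multi, easy);   // transfer starts on the next perform
}

void pump(CURLM *multi) {                 // call once per main-loop tick
    int running = 0;
    curl_multi_perform(multi, &running);  // returns without blocking
    int queued = 0;
    while (CURLMsg *m = curl_multi_info_read(multi, &queued)) {
        if (m->msg == CURLMSG_DONE) {     // a transfer finished
            curl_multi_remove_handle(multi, m->easy_handle);
            curl_easy_cleanup(m->easy_handle);
        }
    }
}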

If I don't get an elegant solution to this maximum command line argument length issue, then as a last resort I am going to use 2 named pipes for each curl request. The first named pipe feeds the request parameters from the daemon to the curl instance. The second named pipe first gets a text prefix, then the response from the curl instance, and finally a text suffix. That second pipe is then read by netcat, which sends it to my daemon on localhost.
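Roughly the shape I am picturing (an untested sketch; the paths, port handling and CLI prefix/suffix are placeholders):
Code:
// Two-FIFO fallback: curl reads the request from fifo_req; its response is
// wrapped in the CLI prefix/suffix and written into fifo_resp, which netcat
// relays back to the daemon. The daemon then writes the request to fifo_req.
#include <sys/stat.h>   // mkfifo
#include <cstdio>       // popen, pclose
#include <string>

bool rpc_via_two_fifos(const std::string &port) {
    if (mkfifo("/tmp/fifo_req", 0600) == -1)  return false;
    if (mkfifo("/tmp/fifo_resp", 0600) == -1) return false;
    std::string cmd =
        "{ printf 'su\\nsend 1 '; "               // prefix for the CLI
        "curl -s --config - < /tmp/fifo_req; "    // request from the first FIFO
        "printf '\\nexit\\nexit\\n'; } "          // suffix
        "> /tmp/fifo_resp & "
        "netcat -q -1 localhost " + port + " < /tmp/fifo_resp &";
    FILE *fp = popen(cmd.c_str(), "r");
    if (!fp) return false;
    pclose(fp);   // returns at once: both shell jobs are backgrounded
    return true;  // the daemon now opens /tmp/fifo_req for writing
}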
