Random useful(?) notes.
Random Notes
Smart Pointers to Dumb Objects
This entry was prompted by my annoyance that a generic deleter template I encountered in some codebase only worked for void return types (the second one in the examples below), while the deleter function I wanted to call had a status return code. The usual pattern for resources/objects created by a C library is something like:

Object *obj_create(/* args */);
void obj_destroy(Object *);

If you're using C++ to interface to a C library, you might want to use smart pointers to ensure cleanup of the objects (i.e. call obj_destroy once the pointer goes out of scope). Let's see how many ways we can do that using std::unique_ptr! Also, the goal here is to be easy(-ish) to use, with minimal overhead.

First the definitions of the free functions:

#include <stdio.h>
#include <stdlib.h>
#include <memory>

struct s { int dummy; }; // minimal stand-in for some C library type

s *s_new() {
    s *p = (s *)calloc(1, sizeof(s));
    printf("s_new: %p\n", p);
    return p;
}

void s_free(s *p) {
    printf("s_free: %p\n", p);
    free(p);
}

int s_free_i(s *p) { // somehow returns a success code (which we ignore)
    printf("s_free_i: %p\n", p);
    free(p);
    return 0;
}

The first one: a specific deleter type that calls a hard-coded function.

struct s_deleter {
    void operator()(s *p) { s_free(p); }
};
using up_del_struct = std::unique_ptr<s, s_deleter>;

The second one: a generic deleter type that takes a non-type template parameter of a specific function type.

template <typename T, void (*del)(T *)>
struct generic_deleter_voidret {
    void operator()(T *p) { del(p); }
};
using up_del_generic_v = std::unique_ptr<s, generic_deleter_voidret<s, s_free>>;

This won't work when the function does not return void:

// error: could not convert template argument ‘s_free_i’ to ‘void (*)(s*)’
// using up_del_generic_i = std::unique_ptr<s, generic_deleter_voidret<s, s_free_i>>;

The third one: a generic deleter type similar to the above, but the deleter function's return type is now generic.

template <typename T, typename R, R (*del)(T *)>
struct generic_deleter_anyret {
    void operator()(T *p) { del(p); }
};
using up_del_anyret = std::unique_ptr<s, generic_deleter_anyret<s, int, s_free_i>>;

The fourth one: a unique pointer that has a function pointer associated with it.

using up_del_fn = std::unique_ptr<s, decltype(&s_free)>;

The fifth one: one you can't use if your compiler doesn't support C++17 yet (this was tested on Compiler Explorer).

// C++17
#if __cpp_nontype_template_parameter_auto
template <typename T, auto deleter_fn>
struct generic_deleter_autofn {
    void operator()(T *p) { deleter_fn(p); }
};
using up_del_autofn = std::unique_ptr<s, generic_deleter_autofn<s, s_free_i>>;
#endif

The sixth one: after learning that auto was added to save typing decltype (i.e. template<auto val> is equivalent to template<typename T, T val> where T = decltype(val)), this is what I came up with:

template <typename> struct fn {};
template <typename Ret, typename Arg> // function pointer
struct fn<Ret (*)(Arg)> {
    using ret = Ret;
    using arg = Arg;
};

template <typename Del, Del del>
struct generic_deleter_fn {
    void operator()(typename fn<Del>::arg p) const { del(p); }
};

template <typename Del, typename std::decay<Del>::type del>
using up_del_genericfn = std::unique_ptr<
    typename std::remove_cv<
        typename std::remove_pointer<
            typename fn<decltype(del)>::arg
        >::type
    >::type,
    generic_deleter_fn<decltype(del), del>>;

The fn...arg, remove_pointer, remove_cv chain is there to get the plain argument type of the deleter function for use as the first template parameter to std::unique_ptr, e.g. void (*)(const somepointer *) -> const somepointer * -> const somepointer -> somepointer. The std::decay is to ensure that the type of the second template parameter is Ret(*)(Arg) (pointer to function) instead of Ret(Arg) (just a function type). Otherwise we'd have to type something like up_del_genericfn<decltype(&func), func> instead of up_del_genericfn<decltype(func), func> (there cannot be a non-type parameter of type Ret(Arg)).

Now we can define the smart pointer type solely in terms of its deleter function. I'm not sure if I should be impressed or disgusted. Obviously the above only works for function pointers, not function objects; since this is for wrapping C code, that will do just fine. Without auto for non-type template parameters there is no good way of typing this without mentioning the function twice, but that can be done only once in some type alias in a header file.

And how you use them:

int main() {
    {
        s *p = s_new();
        s_free(p);
    }
    {
        up_del_struct p(s_new());
        static_assert(sizeof(p) == sizeof(s *), "same size as raw pointer");
    }
    {
        up_del_generic_v p(s_new());
        static_assert(sizeof(p) == sizeof(s *), "same size as raw pointer");
    }
    {
        up_del_anyret p(s_new());
        static_assert(sizeof(p) == sizeof(s *), "same size as raw pointer");
    }
    {
        // cannot just create it with the pointer
        up_del_fn p(s_new(), s_free);
        static_assert(sizeof(p) > sizeof(s *), "larger than raw pointer");
    }
#if __cpp_nontype_template_parameter_auto
    {
        up_del_autofn p(s_new());
        static_assert(sizeof(p) == sizeof(s *), "same size as raw pointer");
    }
#endif
    return 0;
}

Output here (without the C++17 bit):

s_new: 0x2244010 // raw pointer

Of the alternatives above, the easiest one to use is definitely the C++17 version - just name the function and no need to think too much about the type. The generic one with std::decay and type deduction based on the deleter's function type isn't too bad to use, but rather horrible to write. Other "brilliant" ideas include:
Not covered here is what happens if the C deleter function takes more than one parameter, e.g. freeResource(resource_type, resource). I guess in C++17 you could use a constexpr lambda; otherwise a small inline wrapper function could work.
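To make the wrapper-function route concrete, here is a minimal sketch. The two-parameter C API below (resource, resource_type, RES_WIDGET, resource_create, freeResource) is hypothetical, invented only to mirror the freeResource(resource_type, resource) example above; the deleters it plugs into are the ones defined earlier in this entry.

// Hypothetical two-parameter C API, for illustration only:
typedef struct resource resource;
typedef enum { RES_WIDGET, RES_GADGET } resource_type;
resource *resource_create(resource_type type);
int freeResource(resource_type type, resource *r);

// A small inline wrapper restores the single-argument shape the deleters above expect.
inline int free_widget(resource *r) { return freeResource(RES_WIDGET, r); }

// It then works with the C++17 deleter (inside the __cpp_nontype_template_parameter_auto block)...
using up_widget_17 = std::unique_ptr<resource, generic_deleter_autofn<resource, free_widget>>;
// ...or with the decltype-based alias from the sixth example:
using up_widget = up_del_genericfn<decltype(free_widget), free_widget>;

// up_widget w(resource_create(RES_WIDGET)); // freeResource(RES_WIDGET, ...) runs when w goes out of scope

Both keep the unique_ptr the same size as a raw pointer, since the deleters stay stateless.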
Random SSH Agent Tricks
Random Ansible "Tricks"
Cloud vs Cloud vs Physical - Benchmarking cloud & server performance with PerfKitBenchmarker
These days many companies provide cloud services. From the big players like Amazon (AWS), Google (GCP), and Microsoft (Azure), to the smaller ones, to the traditional hosting companies. The multitude of competing offers and pricing models can be confusing. While I don't normally deal with the financial side, I am sometimes expected to help on the technical side.
The good folks at Google along with some other companies and academic institutions have created a tool called PerfKitBenchmarker. It is written in Python and uses the cloud vendor-provided CLI tools to provision instances, run benchmarks on them, and (most importantly) terminate them afterwards[*]. Here I will show how to configure and run some simple tests using PerfKitBenchmarker (let's call it PKB from here on). I assume you use Linux or something similar. These steps are tested on CentOS 7. The installation part is mostly just me restating the documentation, but maybe you will enjoy the example configuration?

Installation

$ virtualenv-2.7 ~/pkb-ve
New python executable in /home/you/pkb-ve/bin/python
Installing Setuptools...done.
Installing Pip...done.
$ . ~/pkb-ve/bin/activate
(pkb-ve)$ pip install -U pip
Downloading/unpacking pip from https://pypi.python.org/packages/[...snip...]/pip-8.1.2.tar.gz#md5=[...snip...]
  Downloading pip-8.1.2.tar.gz (1.1MB): 1.1MB downloaded
  Running setup.py egg_info for package pip
    warning: no previously-included files found matching [...snip...]
    no previously-included directories found matching [...snip...]
Installing collected packages: pip
  Found existing installation: pip 1.4.1
    Uninstalling pip:
      Successfully uninstalled pip
  Running setup.py install for pip
    warning: no previously-included files found matching [...snip...]
    no previously-included directories found matching [...snip...]
    Installing pip script to /home/you/pkb-ve/bin
    Installing pip2.7 script to /home/you/pkb-ve/bin
    Installing pip2 script to /home/you/pkb-ve/bin
Successfully installed pip
Cleaning up...
(pkb-ve)$

The pip upgrade is mostly so we can use precompiled binaries (wheels) for some requirements.
Download the latest release of PKB from GitHub (https://github.com/GoogleCloudPlatform/PerfKitBenchmarker/releases) and untar it somewhere; for this example, ~/pkb. Now install the requirements:
(pkb-ve)$ cd ~/pkb
(pkb-ve)$ pip install -r requirements.txt
Collecting python-gflags==3.0.4 (from -r requirements.txt (line 14))
  Using cached python-gflags-3.0.4.tar.gz
[...snip...]
Successfully installed MarkupSafe-0.23 PyYAML-3.11 blinker-1.4 colorama-0.3.7 colorlog-2.6.0 futures-3.0.5 jinja2-2.8 numpy-1.11.1 pandas-0.18.1 pint-0.7.2 python-dateutil-2.5.3 python-gflags-3.0.4 pytz-2016.6.1 six-1.10.0
(pkb-ve)$

As stated above, PKB uses vendor tools. Here we will use Azure because I got some credits for free from the Visual Studio Dev Essentials Program[**].
The Azure tools are (sadly?) written in JavaScript and run on Node.js, so you can't just reuse the virtualenv from earlier. If you don't already have Node.js, packages are available from EPEL and also from SCL. In this example I am using the EPEL package (version 0.10).
If you're OK with messing up your system you can of course sudo npm install azure-cli@0.9.9 -g, but let's just mess up one part of our system at a time.

(pkb-ve)$ cd ~/pkb-ve
(pkb-ve)$ npm install azure-cli@0.9.9
[...snip downloading the Internet...]
azure-cli@0.9.9 node_modules/azure-cli
[...snip dependency tree...]
(pkb-ve)$ ln -s ~/pkb-ve/node_modules/.bin/azure ~/pkb-ve/bin
(pkb-ve)$ azure
info: _ _____ _ ___ ___
info: /_\ |_ / | | | _ \ __|
info: _ ___/ _ \__/ /| |_| | / _|___ _ _
info: (___ /_/ \_\/___|\___/|_|_\___| _____)
info: (_______ _ _) _ ______ _)_ _
info: (______________ _ ) (___ _ _)
info:
info: Microsoft Azure: Microsoft's Cloud Platform
[...snip help...]
(pkb-ve)$

So we have installed azure-cli to a node_modules directory inside the virtualenv, and created a symlink to the CLI utility in the virtualenv's bin directory, which is in the path when the virtualenv is active. PKB expects to find the CLI in the path.

Now configure the CLI to use your account. Just follow the instructions given by the Azure CLI:
(pkb-ve)$ azure account download
info: Executing command account download
info: Launching browser to http://go.microsoft.com/fwlink/?LinkId=254432
help: Save the downloaded file, then execute the command
help: account import <file>
info: account download command OK
(pkb-ve)$ azure account import ~/Downloads/blablabla.publishsettings
info: Executing command account import
info: account import command OK
(pkb-ve)$

Now test if it works:
(pkb-ve)$ azure vm list
+ Getting virtual machines
info: No VMs found
info: vm list command OK
(pkb-ve)$

Once the account is configured, we can move on to configuring the benchmark(s).
PKB has many benchmarks, but in this case let's run a simple cross-region iperf test. PKB configuration uses YAML, so create this file, let's call it iperf.yaml:

small_sea: &small_sea
  Azure:
    machine_type: Small
    zone: Southeast Asia
small_us: &small_us
  Azure:
    machine_type: Small
    zone: West US
iperf_azure: &iperf_azure
  flags:
    ip_addresses: EXTERNAL
  vm_groups:
    vm_1:
      cloud: Azure
      vm_spec: *small_us
    vm_2:
      cloud: Azure
      vm_spec: *small_sea
benchmarks:
  - iperf: *iperf_azure

The top-level keys that matter are (I think...) benchmarks and the individual benchmark names (e.g. iperf, ...). Other keys are ignored, so they are ideal for declaring anchors we can reference later. If you want to see the configuration with the references expanded, try putting it into http://www.yamllint.com/.

Now we can run PKB using this file as configuration (interesting parts in bold):
(pkb-ve)$ cd ~/pkb (pkb-ve)$ ./pkb.py --benchmark_config_file=iperf.yaml 2016-07-31 02:53:10,071 e1ddd081 MainThread INFO Verbose logging to: /tmp/perfkitbenchmarker/runs/e1ddd081/pkb.log 2016-07-31 02:53:10,072 e1ddd081 MainThread INFO PerfKitBenchmarker version: unknown 2016-07-31 02:53:10,072 e1ddd081 MainThread INFO Flag values: --benchmark_config_file=iperf.yaml 2016-07-31 02:53:10,308 e1ddd081 MainThread WARNING The key "cloud" was not in the default config, but was in user overrides. This may indicate a typo. 2016-07-31 02:53:10,308 e1ddd081 MainThread WARNING The key "cloud" was not in the default config, but was in user overrides. This may indicate a typo. 2016-07-31 02:53:10,309 e1ddd081 MainThread WARNING The key "flags" was not in the default config, but was in user overrides. This may indicate a typo. 2016-07-31 02:53:10,347 e1ddd081 MainThread INFO Running: azure -v 2016-07-31 02:53:10,600 e1ddd081 MainThread iperf(1/1) INFO Provisioning resources for benchmark iperf 2016-07-31 02:53:10,606 e1ddd081 Thread-3 iperf(1/1) INFO Running: azure account affinity-group create --location=Southeast Asia --label=pkbe1ddd081c5e55b7222d7 pkbe1ddd081c5e55b7222d7 [...snip...] 2016-07-31 02:54:03,702 e1ddd081 Thread-2 iperf(1/1) INFO Ran azure network vnet create --affinity-group=pkbe1ddd0816d400dff5650 pkbe1ddd0816d400dff5650. Got return code (1). [...snip...] STDERR: error: An update to your network configuration is currently underway. Please try this operation again later. error: network vnet create command failed [...snip...] 2016-07-31 02:59:11,406 e1ddd081 MainThread iperf(1/1) INFO ssh to VMs in this benchmark by name with: ssh -F /tmp/perfkitbenchmarker/runs/e1ddd081/ssh_config <vm_name> ssh -F /tmp/perfkitbenchmarker/runs/e1ddd081/ssh_config vm<index> ssh -F /tmp/perfkitbenchmarker/runs/e1ddd081/ssh_config <group_name>-<index> 2016-07-31 02:59:11,411 e1ddd081 MainThread iperf(1/1) INFO Preparing benchmark iperf [...snip...] 2016-07-31 03:05:49,850 e1ddd081 MainThread iperf(1/1) INFO Ran ssh -A -p 22 perfkit@13.76.140.92 -2 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o PreferredAuthentications=publickey -o PasswordAuthentication=no -o ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o ServerAliveCountMax=10 -i /tmp/perfkitbenchmarker/runs/e1ddd081/perfkitbenchmarker_keyfile iperf --client 13.93.227.187 --port 20000 --format m --time 60 -P 1. Got return code (0). STDOUT: ------------------------------------------------------------ Client connecting to 13.93.227.187, TCP port 20000 TCP window size: 0.08 MByte (default) ------------------------------------------------------------ [ 3] local 10.32.0.4 port 40142 connected with 13.93.227.187 port 20000 [ ID] Interval Transfer Bandwidth [ 3] 0.0-60.0 sec 969 MBytes 135 Mbits/sec STDERR: Warning: Permanently added '13.76.140.92' (ECDSA) to the list of known hosts. 2016-07-31 03:05:49,855 e1ddd081 MainThread iperf(1/1) INFO Cleaning up benchmark iperf 2016-07-31 03:05:49,855 e1ddd081 MainThread iperf(1/1) INFO Tearing down resources for benchmark iperf 2016-07-31 03:05:49,857 e1ddd081 Thread-165 iperf(1/1) INFO Running: azure vm delete --quiet pkb-e1ddd081-0 [...snip...] 2016-07-31 03:11:02,215 e1ddd081 MainThread INFO -------------------------PerfKitBenchmarker Complete Results------------------------- [...snip...] 
-------------------------PerfKitBenchmarker Results Summary------------------------- IPERF: ip_type="external" receiving_machine_type="Small" runtime_in_seconds="60" sending_machine_type="Small" sending_thread_count="1" Throughput 133.000000 Mbits/sec (receiving_zone="Southeast Asia" sending_zone="West US") Throughput 135.000000 Mbits/sec (receiving_zone="West US" sending_zone="Southeast Asia") End to End Runtime 1071.605115 seconds ------------------------- For all tests: perfkitbenchmarker_version="unknown" vm_1_cloud="Azure" vm_1_image="b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_4-LTS-amd64-server-20160714-en-us-30GB" vm_1_machine_type="Small" vm_1_vm_count="1" vm_1_zone="West US" vm_2_cloud="Azure" vm_2_image="b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_4-LTS-amd64-server-20160714-en-us-30GB" vm_2_machine_type="Small" vm_2_vm_count="1" vm_2_zone="Southeast Asia" 2016-07-31 03:11:02,216 e1ddd081 MainThread INFO Publishing 3 samples to /tmp/perfkitbenchmarker/runs/e1ddd081/perfkitbenchmarker_results.json 2016-07-31 03:11:02,216 e1ddd081 MainThread INFO Benchmark run statuses: ------------------------ Name UID Status ------------------------ iperf iperf0 SUCCEEDED ------------------------ Success rate: 100.00% (1/1) 2016-07-31 03:11:02,217 e1ddd081 MainThread INFO Complete logs can be found at: /tmp/perfkitbenchmarker/runs/e1ddd081/pkb.log (pkb-ve)$ Just ignore the scary WARNING parts. The cloud and the flags are usually specified on the command line (the same
vm_spec can have configurations for multiple clouds). I'm not sure I like troubleshooting the cascading flags so I prefer to just configure everything in the config file[***].Since operations in "the Cloud" take some time to respond, some errors are possible (e.g. see above). PKB will retry the operations after a slight delay.
As you can see, it takes a long time to finish, but at least you didn't have to click on anything and it cleans up after itself.
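If you want the raw numbers rather than the console summary, they also land in the results JSON mentioned at the end of the log. A quick sketch for pulling them out, assuming the file contains one JSON sample per line with metric/value/unit/labels fields (the exact schema may vary between PKB versions; requires jq):

jq -r 'select(.metric == "Throughput") | "\(.value) \(.unit) \(.labels)"' \
    /tmp/perfkitbenchmarker/runs/e1ddd081/perfkitbenchmarker_results.json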
Contrary to popular belief, not everyone has moved to "the Cloud" yet. You may have some machines in your own datacenter or at some hosting provider that is insufficiently "cloudy" and is unsupported by PKB. Fear not! You can run PKB on those machines by using the static_vms feature ("static" probably means PKB will not provision them?). Here we will run the same iperf benchmark as above. This will connect to a VPS and run iperf between it and an Azure VM. Let's call this file iperf-static.yaml:

small_sea: &small_sea
  Azure:
    machine_type: Small
    zone: Southeast Asia
small_us: &small_us
  Azure:
    machine_type: Small
    zone: West US
static_vms:
  us-vps: &us-vps
    ip_address: your.ip.here
    user_name: your_user
    ssh_private_key: /path/to/ssh/key/here  # hopefully passwordless?
    ssh_port: 22
    os_type: debian  # It's actually Alpine Linux, not Debian, but...
    install_packages: False  # I installed iperf and configured the firewall before running PKB.
iperf_azure_vps: &iperf_azure_vps
  flags:
    ip_addresses: EXTERNAL
  vm_groups:
    vm_1:
      cloud: Azure
      vm_spec: *small_us
    vm_2:
      static_vms:
        - *us-vps
benchmarks:
  - iperf: *iperf_azure_vps

Run it like before (relevant parts and commands run on the static VM highlighted):
$ ./pkb.py --benchmark_config_file=iperf-static.yaml 2016-07-31 03:46:31,497 558636a2 MainThread INFO Verbose logging to: /tmp/perfkitbenchmarker/runs/558636a2/pkb.log 2016-07-31 03:46:31,497 558636a2 MainThread INFO PerfKitBenchmarker version: unknown 2016-07-31 03:46:31,497 558636a2 MainThread INFO Flag values: --benchmark_config_file=iperf-static.yaml 2016-07-31 03:46:31,667 558636a2 MainThread WARNING The key "static_vms" was not in the default config, but was in user overrides. This may indicate a typo. 2016-07-31 03:46:31,668 558636a2 MainThread WARNING The key "cloud" was not in the default config, but was in user overrides. This may indicate a typo. 2016-07-31 03:46:31,668 558636a2 MainThread WARNING The key "flags" was not in the default config, but was in user overrides. This may indicate a typo. 2016-07-31 03:46:31,708 558636a2 MainThread INFO Running: azure -v 2016-07-31 03:46:31,925 558636a2 MainThread iperf(1/1) INFO Provisioning resources for benchmark iperf [...snip...] 2016-07-31 03:47:59,046 558636a2 Thread-9 iperf(1/1) INFO VM: your.ip.here 2016-07-31 03:47:59,048 558636a2 Thread-9 iperf(1/1) INFO Running: ssh -A -p 22 your_user@your.ip.here -2 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o PreferredAuthentications=publickey -o PasswordAuthentication=no -o ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o ServerAliveCountMax=10 -i /path/to/ssh/key/here hostname 2016-07-31 03:48:01,412 558636a2 Thread-9 iperf(1/1) INFO Running: ssh -A -p 22 your_user@your.ip.here -2 -o UserKnownHostsFile=/dev/null
-o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o
PreferredAuthentications=publickey -o PasswordAuthentication=no -o
ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o
ServerAliveCountMax=10 -i /path/to/ssh/key/here mkdir -p /tmp/pkb [...snip...] 2016-07-31 03:51:25,663 558636a2 MainThread iperf(1/1) INFO Preparing benchmark iperf [...snip...] 2016-07-31 03:54:54,948 558636a2 MainThread iperf(1/1) INFO Running: ssh -A -p 22 perfkit@137.135.47.113 -2 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o PreferredAuthentications=publickey -o PasswordAuthentication=no -o ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o ServerAliveCountMax=10 -i /tmp/perfkitbenchmarker/runs/558636a2/perfkitbenchmarker_keyfile nohup iperf --server --port 20000 &> /dev/null& echo $! 2016-07-31 03:54:58,539 558636a2 MainThread iperf(1/1) INFO Running: ssh -A -p 22 your_user@your.ip.here -2 -o UserKnownHostsFile=/dev/null
-o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o
PreferredAuthentications=publickey -o PasswordAuthentication=no -o
ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o
ServerAliveCountMax=10 -i /path/to/ssh/key/here nohup iperf --server --port 20000 &> /dev/null& echo $! 2016-07-31 03:55:00,982 558636a2 MainThread iperf(1/1) INFO Running benchmark iperf 2016-07-31 03:55:00,985 558636a2 MainThread iperf(1/1) INFO Iperf Results: 2016-07-31 03:55:00,986 558636a2 MainThread iperf(1/1) INFO Running: ssh -A -p 22 perfkit@137.135.47.113 -2 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o PreferredAuthentications=publickey -o PasswordAuthentication=no -o ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o ServerAliveCountMax=10 -i /tmp/perfkitbenchmarker/runs/558636a2/perfkitbenchmarker_keyfile iperf --client your.ip.here --port 20000 --format m --time 60 -P 1 2016-07-31 03:56:05,141 558636a2 MainThread iperf(1/1) INFO Ran ssh -A -p 22 perfkit@137.135.47.113 -2 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o PreferredAuthentications=publickey -o PasswordAuthentication=no -o ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o ServerAliveCountMax=10 -i /tmp/perfkitbenchmarker/runs/558636a2/perfkitbenchmarker_keyfile iperf --client your.ip.here --port 20000 --format m --time 60 -P 1. Got return code (0). STDOUT: ------------------------------------------------------------ Client connecting to your.ip.here, TCP port 20000 TCP window size: 0.08 MByte (default) ------------------------------------------------------------ [ 3] local 10.32.0.4 port 42062 connected with your.ip.here port 20000 [ ID] Interval Transfer Bandwidth [ 3] 0.0-60.0 sec 3576 MBytes 500 Mbits/sec STDERR: Warning: Permanently added '137.135.47.113' (ECDSA) to the list of known hosts. 2016-07-31 03:56:05,144 558636a2 MainThread iperf(1/1) INFO Running: ssh -A -p 22 your_user@your.ip.here -2 -o UserKnownHostsFile=/dev/null
-o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o
PreferredAuthentications=publickey -o PasswordAuthentication=no -o
ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o
ServerAliveCountMax=10 -i /path/to/ssh/key/here iperf --client 137.135.47.113 --port 20000 --format m --time 60 -P 1 2016-07-31 03:57:18,261 558636a2 MainThread iperf(1/1) INFO Ran ssh -A -p 22 your_user@your.ip.here -2 -o UserKnownHostsFile=/dev/null
-o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o
PreferredAuthentications=publickey -o PasswordAuthentication=no -o
ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o
ServerAliveCountMax=10 -i /path/to/ssh/key/here iperf --client 137.135.47.113 --port 20000 --format m --time 60 -P 1. Got return code (0). STDOUT: ------------------------------------------------------------ Client connecting to 137.135.47.113, TCP port 20000 TCP window size: 0.04 MByte (default) ------------------------------------------------------------ [ 3] local your.ip.here port 51766 connected with 137.135.47.113 port 20000 [ ID] Interval Transfer Bandwidth [ 3] 0.0-60.0 sec 7949 MBytes 1111 Mbits/sec STDERR: Warning: Permanently added 'your.ip.here' (ECDSA) to the list of known hosts. 2016-07-31 03:57:18,266 558636a2 MainThread iperf(1/1) INFO Cleaning up benchmark iperf 2016-07-31 03:57:18,266 558636a2 MainThread iperf(1/1) INFO Running: ssh -A -p 22 perfkit@137.135.47.113 -2 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o PreferredAuthentications=publickey -o PasswordAuthentication=no -o ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o ServerAliveCountMax=10 -i /tmp/perfkitbenchmarker/runs/558636a2/perfkitbenchmarker_keyfile kill -9 2361 2016-07-31 03:57:22,211 558636a2 MainThread iperf(1/1) INFO Running: ssh -A -p 22 your_user@your.ip.here -2 -o UserKnownHostsFile=/dev/null
-o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o
PreferredAuthentications=publickey -o PasswordAuthentication=no -o
ConnectTimeout=5 -o GSSAPIAuthentication=no -o ServerAliveInterval=30 -o
ServerAliveCountMax=10 -i /path/to/ssh/key/here kill -9 2472 2016-07-31 03:57:28,819 558636a2 MainThread iperf(1/1) INFO Tearing down resources for benchmark iperf [...snip...] 2016-07-31 04:01:11,247 558636a2 MainThread INFO -------------------------PerfKitBenchmarker Complete Results------------------------- [...snip...] -------------------------PerfKitBenchmarker Results Summary------------------------- IPERF: ip_type="external" runtime_in_seconds="60" sending_thread_count="1" Throughput 500.000000 Mbits/sec (receiving_machine_type="None" receiving_zone="Static - your_user@your.ip.here" sending_machine_type="Small" sending_zone="West US") Throughput 1111.000000 Mbits/sec (receiving_machine_type="Small" receiving_zone="West US" sending_machine_type="None" sending_zone="Static - your_user@your.ip.here") End to End Runtime 879.311017 seconds ------------------------- For all tests: perfkitbenchmarker_version="unknown" vm_1_cloud="Azure" vm_1_image="b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_4-LTS-amd64-server-20160714-en-us-30GB" vm_1_machine_type="Small" vm_1_vm_count="1" vm_1_zone="West US" vm_2_cloud="Static" vm_2_image="None" vm_2_vm_count="1" vm_2_zone="Static - your_user@your.ip.here" 2016-07-31 04:01:11,249 558636a2 MainThread INFO Publishing 3 samples to /tmp/perfkitbenchmarker/runs/558636a2/perfkitbenchmarker_results.json 2016-07-31 04:01:11,250 558636a2 MainThread INFO Benchmark run statuses: ------------------------ Name UID Status ------------------------ iperf iperf0 SUCCEEDED ------------------------ Success rate: 100.00% (1/1) 2016-07-31 04:01:11,251 558636a2 MainThread INFO Complete logs can be found at: /tmp/perfkitbenchmarker/runs/558636a2/pkb.log (pkb-ve)$ The process is similar except that the static VM is neither provisioned nor cleaned up. If
install_packages is not False (and maybe even if it is?), the user must have passwordless sudo access. Fortunately the iperf benchmark does not need any privileged commands. If your office network is still using Fast Ethernet, it has less bandwidth than a link between West US and Southeast Asia on the Internet.
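For reference, the one-time preparation mentioned in the config comments above might look like this on the static (Alpine) VPS; the package and firewall commands are assumptions for a typical Alpine box, and 20000 is the port the iperf benchmark uses in the logs above:

# On the static VPS, as root: install iperf and open the port PKB's iperf benchmark uses.
apk add iperf
iptables -I INPUT -p tcp --dport 20000 -j ACCEPT
# PKB only needs to SSH in as the configured user with the configured key;
# passwordless sudo is only required if you let it install packages itself.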
For more information, e.g. the configuration options for the individual benchmarks or the exact actions each benchmark takes, please look at the source code.
[***] From the documentation (emphasis added):
--zones: A list of zones within which to run PerfKitBenchmarker. This is specific to the cloud provider you are running on. If multiple zones are given, PerfKitBenchmarker will create 1 VM in zone, until enough VMs are created as specified in each benchmark. The order in which this flag is applied to VMs is undefined.
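For completeness, this is roughly what the flag-driven style looks like; --zones is the flag quoted above, the other flag names are from the PKB documentation, and the values are illustrative:

./pkb.py --cloud=Azure --benchmarks=iperf --machine_type=Small \
    --zones="West US,Southeast Asia" --ip_addresses=EXTERNAL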
Generating Certificate Requests (CSRs) on Windows
I use Windows. If I want to generate a CSR using OpenSSL it is easy and there are lots of guides on the Internet you can copy commands from. In fact I have done it multiple times and can probably do it from memory. But what if I want to generate a CSR using the Windows GUI tools? Windows has decent support for using a "Certificate Enrollment Policy Server", but as this is going to be used by me to authenticate to my (Linux) VPS, that is not an option here.

Steps:
Disabling gnome-keyring-daemon SSH agent on MATE Desktop
The mate-session-manager starts gnome-keyring-daemon by default with all components enabled, including the SSH agent. This is less than optimal since the SSH agent lacks a lot of features compared to OpenSSH's ssh-agent (e.g. support for ECDSA and Ed25519 keys). To stop this from happening you need to change the value of a key in gsettings:

$ gsettings get org.mate.session gnome-compat-startup
['smproxy', 'keyring']
$ gsettings set org.mate.session gnome-compat-startup "['smproxy']"
$ gsettings get org.mate.session gnome-compat-startup
['smproxy']
$ mate-session-properties # uncheck SSH Key Agent and maybe others.
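After logging out and back in you can check which agent you ended up with; the keyring socket path in the comment is just the typical location and may differ on your system:

$ echo $SSH_AUTH_SOCK   # should no longer point at something like /run/user/1000/keyring/ssh
$ ssh-add -l            # OpenSSH's ssh-agent handles ECDSA and Ed25519 keys just fine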
Android One Rdio download to SD card
Since my Android One phone has limited internal memory, I thought buying a 16GB MicroSD card would help. I was wrong. Having not used Android since the Gingerbread days, the MicroSD handling seems a lot cleaner now after the changes. A. LOT. CLEANER. Which is a good thing, until something you want to put there doesn't get put there*, and that was the case with the Rdio cache. The app claims that it wants to put it in external storage, but that doesn't seem to work. Fortunately, it allows the user to explicitly specify a location (see previous link).

To use this, first find out where the real external storage is mounted. On my phone it was on /storage/sdcard1 (and not e.g. /sdcard or /storage/sdcard0, and obviously not /storage/emulated/legacy - no wonder the app got confused). The example on the screen (/sdcard) would have you believe that what you put there should be the root path of the storage, but if you tried that, you would get errors on every download.

After looking at logcat output to determine where exactly it tries to create the data, it turns out the proper directory to put in the custom storage location is the Android/data directory inside your external storage, so in my case that would be /storage/sdcard1/Android/data.

Cheap phones: for people whose time isn't worth much**.

[*]: e.g. games with hundreds of megs in assets, but that can't be helped.
[**]: saying "just root it" only proves the point.
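If you want to verify the paths before trusting the settings screen, adb makes it quick; /storage/sdcard1 is the mount point from this phone and may differ on yours:

$ adb shell ls /storage/                       # see which storage mount points actually exist
$ adb shell ls /storage/sdcard1/Android/data   # the per-app data directory on the real card
$ adb logcat | grep -i rdio                    # watch where the app tries to write while a download fails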
32 bit golang Trace/Breakpoint trap: modify_ldt ENOSYS
If you're like me - a bit too cheap to get a large VPS and too miserly to waste limited RAM by using a 64 bit OS - you may run into something like this. Here gdrive is https://github.com/prasmussen/gdrive; it is written in the Go programming language, which would explain the rather interesting behavior below.

vm:~$ gdrive
Trace/breakpoint trap
vm:~$ strace /usr/local/bin/gdrive
execve("/usr/local/bin/gdrive", ["/usr/local/bin/gdrive"], [/* 11 vars */]) = 0
modify_ldt(1, {entry_number:7, base_addr:0x84d8548, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}, 16) = -1 ENOSYS (Function not implemented)
--- SIGTRAP {si_signo=SIGTRAP, si_code=SI_KERNEL} ---
+++ killed by SIGTRAP +++
Trace/breakpoint trap
vm:~$ uname -a
Linux vm 3.18.19-0-virtgrsec #1-Alpine SMP Fri Jul 31 11:09:05 GMT 2015 i686 Linux

The 32-bit Go runtime uses modify_ldt(2) (here apparently for its thread-local storage), which this grsec kernel has disabled - hence the trap. Re-enable it as root:

# echo 1 > /proc/sys/kernel/modify_ldt
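To make that survive a reboot, set it via sysctl; this assumes the kernel exposes the knob as kernel.modify_ldt (the usual /proc/sys mapping) and that your box loads /etc/sysctl.d/*.conf at boot:

# echo "kernel.modify_ldt = 1" > /etc/sysctl.d/modify_ldt.conf
# sysctl -p /etc/sysctl.d/modify_ldt.conf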
Docker on LVM Thin Pool
These are notes on how to run docker on top of an LVM thin pool, which is nice if your system already has LVM and you have some spare space in the VG. Tested on Arch. These instructions remove all docker data, so maybe don't do it if you have data you care about?
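As a rough sketch of the idea (the VG/LV names and sizes are chosen to match the docker info output below; the devicemapper/dm.thinpooldev daemon options may need adjusting for your docker version):

# Stop docker and remove its old storage (again: this deletes all existing docker data).
systemctl stop docker
rm -rf /var/lib/docker

# Carve a thin pool out of the existing VG; names and sizes match the output below.
lvcreate --type thin-pool -L 20G --poolmetadatasize 2G -n docker-thinpool vg_laptop

# Point the daemon at it (via a systemd drop-in or however you pass daemon flags):
#   --storage-driver=devicemapper --storage-opt dm.thinpooldev=/dev/mapper/vg_laptop-docker--thinpool

systemctl start docker
docker info    # should show the devicemapper driver backed by the pool, as below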
Enjoy:

Storage Driver: devicemapper
 Pool Name: vg_laptop-docker--thinpool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 21.76 MB
 Data Space Total: 21.47 GB
 Data Space Available: 21.45 GB
 Metadata Space Used: 266.2 kB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.147 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Library Version: 1.02.100 (2015-06-30)
UEFI Linux Install on Acer Aspire ES1-111M
Some notes on installing Linux (in Insecure[*] UEFI boot mode) on this Acer netbook. If you are trying to do this, you should know what you're doing, so these are just notes, not instructions.
[*] Secure boot should also be possible, but when using a distribution that does not sign its kernels that seems to be more trouble than it's worth.