
Wednesday, 27 April 2022

Shell 101 - $(), ${}, env, export

https://stackoverflow.com/questions/27472540/difference-between-and-in-bash

$() is command substitution: the shell evaluates the command inside first, then substitutes its output into the rest of the line.

Example:

echo $(pwd)/myFile.txt

will be interpreted as

echo /my/path/myFile.txt
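
A related sketch: the output of $() can be captured into a variable and reused with ${} (the date format here is just illustrative):

NOW=$(date +%Y-%m-%d)          # capture command output in a variable
echo "backup-${NOW}.tar.gz"    # e.g. backup-2022-04-27.tar.gz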


On the other hand ${} expands a variable.

Example:

MY_VAR=toto
echo ${MY_VAR}/myFile.txt

will be interpreted as

echo toto/myFile.txt


$VAR is the same as ${VAR}; the braces only matter when the variable name must be separated from the text that follows it.
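
A quick sketch of why the braces matter when text follows the variable name directly:

ANIMAL=cat
echo "$ANIMALs are great"      # bash looks for a variable named ANIMALs -> " are great"
echo "${ANIMAL}s are great"    # braces delimit the name -> "cats are great"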

// https://www.linuxtechi.com/variables-in-shell-scripting/

Print the list of all environment variables:
printenv

Print a single environment variable:
echo $HOME

// export makes a variable visible to child processes. Set in a terminal it lasts only for the current session; add it to ~/.bash_profile to persist it across the user's login sessions.

https://stackoverflow.com/questions/7411455/what-does-export-do-in-shell-programming


$ foo=bar
$ bash -c 'echo $foo'    # prints an empty line: foo is not visible to the child shell

$ export foo
$ bash -c 'echo $foo'    # the child shell now sees it
bar
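
You can also set and export in one step; a small sketch (export -n is bash-specific):

$ export foo=bar
$ bash -c 'echo $foo'    # child process sees it: bar
$ export -n foo          # remove the export attribute, keep the value
$ bash -c 'echo $foo'    # empty line again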

Wednesday, 2 December 2020

CURL useful samples and scripts

 Useful CURL:

// POST
curl -i \
  -H "content-type:application/json" \
  -H "content-length:2" \
  -H "Authorization:<JWT_TOKEN>" \
  -X POST <URL> \
  --trace-ascii /dev/stdout >> test.txt


// POST with data

curl -d '{"key1":"value1", "key2":"value2"}' -H "Content-Type: application/json" -X POST http://localhost:3000/data



// PUT (-u sends a simple username and password)
curl -i \
  -u username:pwd \
  -H "content-type:application/json" \
  -X PUT -d '{"password":"test","tags":"administrator"}' \
  http://192.168.20.23:15672/api/test \
  --trace-ascii /dev/stdout > outputfile.txt


// GET
curl -i \
  -H "content-type:application/json" \
  -H "Authorization: <JWT_TOKEN>" \
  -X GET https://myendpoint \
  --trace-ascii /dev/stdout >> result.txt 2>&1


// List of flags and commands

curl --help


// Shell script (thisShell.sh)
// Makes <count> curl calls and calculates the average response time

------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------

#!/bin/bash

function usage() {
    echo "Usage:   $0 host count token"
    echo "Example: $0 api.com/api/test 10 jwtToken";
}

# Check that the number of arguments is equal to 3
if [ $# -ne 3 ]; then
    usage;
    exit;
fi

host=$1
count=$2
token=$3

let i=$count-1
tot=0
while [ $i -ge 0 ];
do
    res=$(curl -i \
        -H "content-type:application/json" \
        -H "Accept-Encoding:gzip" \
        -H "content-length:2" \
        -H "Authorization:$token" \
        -w "$i: %{time_total} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s https://$host)
    echo $res

    # time_total is the second space-separated field of the -w output
    val=$(echo $res | cut -f2 -d' ')
    # Add floats without bc
    tot=$(awk "BEGIN {print $tot+$val}")
    let i=i-1
done

# Divide floats without bc
avg=$(awk "BEGIN {print $tot/$count}")
echo "   ........................."
echo "   AVG:  $avg sec"

------------------------------------------------------------------------------------------

To execute: ./thisShell.sh <api_end_point> <number_of_api_calls> <jwt_token> > result.txt 2>&1

-------------------------------------------------------------------------------------------





Thursday, 29 October 2020

CURL Timeout

 https://unix.stackexchange.com/questions/94604/does-curl-have-a-timeout/94612#94612


Yes.

Timeout parameters

curl has two options: --connect-timeout and --max-time.

Quoting from the manpage:

--connect-timeout <seconds>
    Maximum time in seconds that you allow the connection to the
    server to take. This only limits the connection phase; once
    curl has connected this option is of no more use. Since 7.32.0,
    this option accepts decimal values, but the actual timeout will
    decrease in accuracy as the specified timeout increases in
    decimal precision. See also the -m, --max-time option.

    If this option is used several times, the last one will be used.

and:

-m, --max-time <seconds>
    Maximum time in seconds that you allow the whole operation to
    take. This is useful for preventing your batch jobs from
    hanging for hours due to slow networks or links going down.
    Since 7.32.0, this option accepts decimal values, but the
    actual timeout will decrease in accuracy as the specified
    timeout increases in decimal precision. See also the
    --connect-timeout option.

    If this option is used several times, the last one will be used.

Defaults

Here (on Debian) it stops trying to connect after 2 minutes, regardless of the time specified with --connect-timeout and although the default connect timeout value seems to be 5 minutes according to the DEFAULT_CONNECT_TIMEOUT macro in lib/connect.h.

A default value for --max-time doesn't seem to exist, making curl wait forever for a response if the initial connect succeeds.

What to use?

You are probably interested in the latter option, --max-time. For your case set it to 900 (15 minutes).

Setting --connect-timeout to something like 60 (one minute) might also be a good idea. Otherwise curl will try to connect again and again, apparently using some backoff algorithm.
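
Putting the answer's suggested values together, a minimal sketch (example.com is a placeholder):

curl --connect-timeout 60 --max-time 900 https://example.com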

Tuesday, 27 October 2020

Shell scripting: using variables in strings

 With braces: echo "${ANIMAL}s are the best."

With quotes: echo "$ANIMAL"'s are the best.'

With printf: printf '%ss are the best.\n' "$ANIMAL"



The best way is ${variable} inside a double-quoted string: when you need multiple variables in one string, it is easier to read and there is no ambiguity about where a variable name ends.
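
For instance, a small sketch with two variables in one string, where braces remove the ambiguity (underscore is a valid character in variable names, so $FIRST_ is read as one name):

FIRST=John
LAST=Doe
echo "$FIRST_$LAST"      # bash looks for a variable named FIRST_ -> prints "Doe"
echo "${FIRST}_${LAST}"  # prints "John_Doe"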


https://stackoverflow.com/questions/18320133/how-do-we-separate-variables-from-letters-in-shell-scripting

Monday, 5 October 2020

CURL sampler

curl -i \
  -u userName:test \
  -H "content-type:application/json" \
  -X PUT -d '{"json":"test","tags":"json"}' \
  http://192.168.20.23:15672/api/users/test \
  --trace-ascii /dev/stdout > outputfile.php




curl -i \
  -H "content-type:application/json" \
  -H "Authorization: jwtToken" \
  -X GET https://myURL \
  --trace-ascii /dev/stdout >> assetManagementAPITest_10_5_20.txt 2>&1




curl https://www.google.com/search?q=[1985-1990] -w "%{time_connect},%{time_total},%{speed_download},%{http_code},%{size_download},%{url_effective}\n" -o /dev/null -s


SHELL benchmark CURL API test

 https://www.badunetworks.com/performance-testing-curl-part-1-basics/

https://www.badunetworks.com/performance-testing-curl-part-2-scripting/

Overview

The cURL program is widely available across many different platforms, which makes it an obvious choice for network testing. It is simple, scriptable, and flexible – which is why it is so powerful. It supports many protocols, but we are going to focus on HTTP in this article.
The basic syntax for a cURL command is pretty straightforward – just add the destination URL:

$ curl http://google.com

For this simple command, curl will return the result. That usually means a bunch of HTML will be sent to your console. For the example command above, we get the following:

$ curl http://google.com
301 Moved
<h1>301 Moved</h1>
The document has moved
<a href="http://www.google.com/">here</a>.

Remember, curl is not a browser, so by default it doesn’t follow redirects. It simply executes the single command that you gave it (in this case, an HTTP GET). You can output the request headers by adding a -i flag to your command:

$ curl -i http://google.com
HTTP/1.1 301 Moved Permanently
Location: http://www.google.com/
Content-Type: text/html; charset=UTF-8
Date: Thu, 10 Aug 2017 23:29:44 GMT
Expires: Sat, 09 Sep 2017 23:29:44 GMT
Cache-Control: public, max-age=2592000
Server: gws
Content-Length: 219
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN

301 Moved
<h1>301 Moved</h1>
The document has moved
<a href="http://www.google.com/">here</a>.

If you do want curl to follow the redirect, just add the -L parameter.
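
For example, a quick sketch that follows the redirect and reports where curl ended up (using the -w variables covered below):

$ curl -L -s -o /dev/null -w "%{http_code} %{url_effective}\n" http://google.com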

Now, this may be interesting for one-off manual tests, but probably not for automated testing. Fortunately, curl allows us to customize and format the command output.

Output Formatting

Curl has a -w flag which tells curl to output a certain string of information after the transfer has completed. Here is a list of available variables:

  • content_type
  • filename_effective
  • ftp_entry_path
  • http_code
  • http_connect
  • http_version
  • local_ip
  • local_port
  • num_connects
  • num_redirects
  • proxy_ssl_verify_result
  • redirect_url
  • remote_ip
  • remote_port
  • scheme
  • size_download
  • size_header
  • size_request
  • size_upload
  • speed_download
  • speed_upload
  • ssl_verify_result
  • time_appconnect
  • time_connect
  • time_namelookup
  • time_pretransfer
  • time_redirect
  • time_starttransfer
  • time_total
  • url_effective

The man page for curl contains more detailed information about each variable (including units, etc.).

If we add a few output variables to our original example, we get the following:

$ curl http://google.com -w "%{time_connect},%{time_total},%{speed_download},%{http_code},%{size_download},%{url_effective}\n"
301 Moved
<h1>301 Moved</h1>
The document has moved
<a href="http://www.google.com/">here</a>.
0.011,0.047,4657.000,301,219,http://google.com/

Notice the -w parameter allows us to add in additional characters beyond simply the provided variables. This means we can have a nicely formatted CSV output at the end of our command.

But in a performance script, we probably wouldn’t want the actual page content. For that, we can add a -o flag and send the output to /dev/null (a.k.a., oblivion). Observe:

$ curl http://google.com -w "%{time_connect},%{time_total},%{speed_download},%{http_code},%{size_download},%{url_effective}\n" -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   219  100   219    0     0   1256      0 --:--:-- --:--:-- --:--:--  1258
0.140,0.174,1256.000,301,219,http://google.com/

Wait! That got rid of the content, but it replaced it with a progress table. That table is very useful when you are watching a long-running transfer. But not helpful for our automated scripting scenario. Fortunately, the -s (“silent”) option tells curl to keep that progress to itself.

$ curl http://google.com -w "%{time_connect},%{time_total},%{speed_download},%{http_code},%{size_download},%{url_effective}\n" -o /dev/null -s
0.202,0.237,924.000,301,219,http://google.com/

Perfect! Now we can append that output onto our results file:

$ curl http://google.com -w "%{time_connect},%{time_total},%{speed_download},%{http_code},%{size_download},%{url_effective}\n" -o /dev/null -s >> myResults.csv

Dynamic URLs

While single URLs can be useful, more often we have groups of URLs we want to test. Luckily, cURL has some built-in methods for providing multiple variations into a single command. Using brackets [] for ranges, and curly braces {} for sets, we can tell curl to do some interesting things. For example:

$ curl https://www.google.com/search?q=[1985-1990] -w "%{time_connect},%{time_total},%{speed_download},%{http_code},%{size_download},%{url_effective}\n" -o /dev/null -s
0.073,1.155,5008.000,403,5785,https://www.google.com/search?q=1985
0.000,1.045,5535.000,403,5785,https://www.google.com/search?q=1986
0.000,1.043,5548.000,403,5785,https://www.google.com/search?q=1987
0.000,1.044,5541.000,403,5785,https://www.google.com/search?q=1988
0.000,1.084,5336.000,403,5785,https://www.google.com/search?q=1989
0.000,1.285,4488.000,403,5768,https://www.google.com/search?q=1990

We just searched for every year from 1985 – 1990 with a single curl command. By specifying this range in the URL, curl simply goes through each value one at a time. We can also use brackets to create a list of queries:

$ curl "http://www.google.com/search?q={jurassic%20park,jumanji,armageddon}" -w "%{time_connect},%{time_total},%{speed_download},%{http_code},%{size_download},%{url_effective}\n" -o /dev/null -s
0.007,1.237,4367.000,403,5404,http://www.google.com/search?q=jurassic%20park
0.000,1.043,5161.000,403,5384,http://www.google.com/search?q=jumanji
0.000,1.107,4865.000,403,5387,http://www.google.com/search?q=armageddon

At this point, you may have noticed that we are getting 403 responses from Google. That’s because we don’t have a user agent, since we aren’t using a browser. If we add the --user-agent option, we get 200s instead:

$ curl "http://www.google.com/search?q={jurassic%20park,jumanji,armageddon}" -w "%{time_connect},%{time_total},%{speed_download},%{http_code},%{size_download},%{url_effective}\n" -o /dev/null -s --user-agent Tutorial
0.070,0.896,73551.000,200,65922,http://www.google.com/search?q=jurassic%20park
0.000,0.612,99314.000,200,60828,http://www.google.com/search?q=jumanji
0.000,0.615,99508.000,200,61172,http://www.google.com/search?q=armageddon

Note that in most cases, you’ll probably want to use a real user agent string, but this works for our purposes.
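
For instance, a sketch with an abbreviated browser-style string (any string works; this one is purely illustrative):

$ curl "http://www.google.com/search?q=jumanji" --user-agent "Mozilla/5.0 (X11; Linux x86_64)" -o /dev/null -s -w "%{http_code}\n"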

Uploads

So far we’ve only looked at downloads. What about uploads? The main difference is you need to specify a file to upload. This is done with the -F parameter. And of course, your URL needs to be one that accepts uploads. Let’s look at an example:

$ curl -F file=@test.file http://mytestserver.net/upload.php
{"status":"OK","message":"file uploaded","$_FILES":{"file":{"name":"test.file","type":"application\/octet-stream","tmp_name":"\/tmp\/phpMHNoBs","error":0,"size":22}}}

Note that the output you get will vary depending on what page you are hitting. In this case, we got a json object with some details about the upload. If we don’t want this output, we can go back to our output variables from earlier and get a nice CSV output (don’t forget to use speed_upload and size_upload now):

$ curl -F file=@test.file http://mytestserver.net/upload.php -w "%{time_connect},%{time_total},%{speed_upload},%{http_code},%{size_upload},%{url_effective}\n" -o "/dev/null" -s
0.024,0.066,3376.000,200,223,http://mytestserver.net/upload.php

Voila! We have uploaded a file with curl. But before we move on, just a quick note on -F : this emulates a form submission. We specified “file” as the name of the form-field we were filling with test.file. If it doesn’t match the form-field on the page, you will likely get errors. Make sure to read the man page for more details on form submission.
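
If the server cares about the uploaded Content-Type, -F also lets you override it per field with a ;type= suffix; a small sketch using the same hypothetical server:

$ curl -F "file=@test.file;type=text/plain" http://mytestserver.net/upload.php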

Timeouts

Sometimes transfers take a long time. Sometimes servers are unavailable. Default timeouts are often as high as 5 minutes. Fortunately, we can specify our own timeout values for curl to follow.

$ curl -F file=@server1.pcap.gz http://mytestserver.net/upload.php -w "%{time_connect},%{time_total},%{speed_upload},%{http_code},%{size_upload},%{url_effective}\n" -o "/dev/null" -s --connect-timeout 15 --max-time 30
0.154,0.719,7815804.000,200,5616984,http://mytestserver.net/upload.php

This is useful when you are dealing with larger files, or when you’d rather just timeout and move on to the next test. The two parameters above, --connect-timeout and --max-time, are quite useful. However, there are other time-related parameters as well, such as:

--expect100-timeout
--keepalive-time

Parallel cURL Testing

To perform multiple curl transfers in parallel, we need to look at another tool: xargs.

If you aren’t familiar with xargs, it is a very powerful linux utility.  With it, we can execute multiple (dynamic) curl commands in parallel with very little overhead.  Example:


seq 1 3 | xargs -n1 -P3 bash -c 'i=$0; url="http://mytestserver.net/10m_test.html?run=${i}"; curl -O -s $url'

This code will run 3 curl commands in parallel.  The -P parameter allows you to set the desired number of parallel executions.  In this example, we are using the seq command to pass numerical arguments to our commands so that each URL is unique with a run number.  The -n parameter simply limits how many arguments are passed per execution.  The -c flag (an option to bash, not to xargs) is where we specify the command to be run.

Note that this example doesn’t give any output, it simply runs the transfers.  If you want to save the output, you can use the previous discussion on output format to decide what you want to output and how to save it.
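
A sketch of one way to combine the two, appending one -w line per transfer to a results file (same hypothetical server as above):

seq 1 3 | xargs -n1 -P3 bash -c 'i=$0; curl -s -o /dev/null -w "%{time_total},%{http_code}\n" "http://mytestserver.net/10m_test.html?run=${i}"' >> results.csv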

From here, you can expand the number of iterations, pass other interesting parameters (a list of URLs from a file, perhaps), and so on.  We often use this type of command when generating background traffic to simulate particular network conditions.

Automated cURL Testing

At some point, you will want to ramp up the number of iterations to improve the statistical significance of your test results.  Fortunately, it’s easy to script cURL for your test purposes.  We will go through some script examples in Bash that use many of the features we have discussed previously.

First, you have to decide what you want your output to be.  What stats do you care about?  HTTP Code?  Transfer Time? Connect Time? All of the above?

Next you need to decide what your output format will be.  CSV format?  Text output?  A summary only, or individual data points?

Let’s start with a simple example that gives a summary result:


#!/bin/bash

function usage() {
echo "Usage:  $0 host count size port"
}

if [ $# -ne 4 ]; then
usage;
exit;
fi

host=$1
count=$2
size=$3
port=$4

let i=$count-1
while [ $i -ge 0 ];
do
curl -w "$i: %{time_total} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host}:${port}/${size}_test.html
let i=i-1
./usleep 1000
done

This simple script takes 4 parameters: host, count, size, and port.  These values are then used to build the URL and run the command count times.  This assumes your server already has test files available for various pre-determined file sizes.  Here is a sample output from running the script:


$ ./curltest.sh mytestserver.net 10 10k 80
9: 0.037 200 10000 http://mytestserver.net:80/10k_test.html
8: 0.032 200 10000 http://mytestserver.net:80/10k_test.html
7: 0.034 200 10000 http://mytestserver.net:80/10k_test.html
6: 0.031 200 10000 http://mytestserver.net:80/10k_test.html
5: 0.034 200 10000 http://mytestserver.net:80/10k_test.html
4: 0.035 200 10000 http://mytestserver.net:80/10k_test.html
3: 0.036 200 10000 http://mytestserver.net:80/10k_test.html
2: 0.040 200 10000 http://mytestserver.net:80/10k_test.html
1: 0.033 200 10000 http://mytestserver.net:80/10k_test.html
0: 0.035 200 10000 http://mytestserver.net:80/10k_test.html

That’s useful, but it could be more useful if we had some averages in there.  That means we have to keep track of the results as we go.  Here is an updated version of the script:


#!/bin/bash

function usage() {
echo "Usage:  $0 host count size port"
echo "Example: $0 mytestserver.net 10 5k 80";
}

if [ $# -ne 4 ]; then
usage;
exit;
fi

host=$1
count=$2
size=$3
port=$4

let i=$count-1
tot=0
while [ $i -ge 0 ];
do
res=`curl -w "$i: %{time_total} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host}:${port}/${size}_test.html`
echo $res
val=`echo $res | cut -f2 -d' '`
tot=`echo "scale=3;${tot}+${val}" | bc`
let i=i-1
./usleep 1000
done

avg=`echo "scale=3; ${tot}/${count}" |bc`
echo "   ........................."
echo "   AVG: $tot/$count = $avg"

Now if we run the above script, we get the following summary at the end:


$ ./curltest.sh mytestserver.net 10 10k 80
9: 0.033 200 10000 http://mytestserver.net:80/10k_test.html
8: 0.033 200 10000 http://mytestserver.net:80/10k_test.html
7: 0.040 200 10000 http://mytestserver.net:80/10k_test.html
6: 0.037 200 10000 http://mytestserver.net:80/10k_test.html
5: 0.040 200 10000 http://mytestserver.net:80/10k_test.html
4: 0.035 200 10000 http://mytestserver.net:80/10k_test.html
3: 0.033 200 10000 http://mytestserver.net:80/10k_test.html
2: 0.040 200 10000 http://mytestserver.net:80/10k_test.html
1: 0.034 200 10000 http://mytestserver.net:80/10k_test.html
0: 0.032 200 10000 http://mytestserver.net:80/10k_test.html
.........................
AVG: .357/10 = .035

Now we have an average, which is more useful for comparisons.  If we have alternate ports set up (for example, with one going through a Badu proxy), then running subsequent tests on the respective ports gives us a meaningful measurement of improvement.  To make that easier, we could further modify the script to accept multiple ports.  Then it could run all the different ports for us, and we could see an immediate comparison.

Here’s what that might look like:


#!/bin/bash

function usage() {
echo "Usage:  $0 host count size port(s)"
echo "Example: $0 mytestserver.net 20 10k 81 82";
}

if [ $# -lt 4 ]; then
usage;
exit;
fi

host=$1
count=$2
size=$3

shift;
shift;
shift;

for p in $*;
do
echo "------------"

let i=$count-1
tot=0
while [ $i -ge 0 ];
do
res=`curl -w "$i: %{time_total} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host}:${p}/${size}_test.html`
echo $res
val=`echo $res | cut -f2 -d' '`
tot=`echo "scale=3;${tot}+${val}" | bc`
let i=i-1
./usleep 1000
done

avg=`echo "scale=3; ${tot}/${count}" |bc`
echo "   ........................."
echo "   AVG: $tot/$count = $avg"

done

Note that this implementation allows us to enter as many ports as we want.  Here is sample output from the updated script with two ports:


$ ./curltest mytestserver.net 10 10k 80 81
------------
9: 0.120 200 10000 http://mytestserver.net:80/10k_test.html
8: 0.035 200 10000 http://mytestserver.net:80/10k_test.html
7: 0.038 200 10000 http://mytestserver.net:80/10k_test.html
6: 0.035 200 10000 http://mytestserver.net:80/10k_test.html
5: 0.032 200 10000 http://mytestserver.net:80/10k_test.html
4: 0.032 200 10000 http://mytestserver.net:80/10k_test.html
3: 0.041 200 10000 http://mytestserver.net:80/10k_test.html
2: 0.039 200 10000 http://mytestserver.net:80/10k_test.html
1: 0.033 200 10000 http://mytestserver.net:80/10k_test.html
0: 0.030 200 10000 http://mytestserver.net:80/10k_test.html
.........................
AVG: .435/10 = .043
------------
9: 0.038 200 10000 http://mytestserver.net:81/10k_test.html
8: 0.040 200 10000 http://mytestserver.net:81/10k_test.html
7: 0.038 200 10000 http://mytestserver.net:81/10k_test.html
6: 0.032 200 10000 http://mytestserver.net:81/10k_test.html
5: 0.035 200 10000 http://mytestserver.net:81/10k_test.html
4: 0.033 200 10000 http://mytestserver.net:81/10k_test.html
3: 0.039 200 10000 http://mytestserver.net:81/10k_test.html
2: 0.031 200 10000 http://mytestserver.net:81/10k_test.html
1: 0.034 200 10000 http://mytestserver.net:81/10k_test.html
0: 0.038 200 10000 http://mytestserver.net:81/10k_test.html
.........................
AVG: .358/10 = .035

Now we can have a quick comparison of performance over two separate paths.  However, we’ve discussed elsewhere that running all the tests for one path, followed by all the tests for another path, is not the most accurate way to test.  Most accurate would be to run both paths in parallel, or to at least approximate this by alternating between the paths/ports.  Because network conditions change very rapidly, we want our test runs to be as similar as possible for a fair comparison.  This is especially true if your number of test runs is low.

With that in mind, here is a mostly rewritten script that alternates between two paths:


#!/bin/bash

function usage() {
echo "Usage:  $0 count size udelay host1 port1 host2 port2"
echo "Example: $0 10 50k 1000 mytestserver.net 80 mytestserver.net 81";
}

# check number of parameters
if [ $# -ne 7 ]; then
usage;
exit;
fi

# assign parameters to variables
count=$1
size=$2
delay=$3
host1=$4
port1=$5
host2=$6
port2=$7

# take the dns hit here
curl -w "$i: %{time_total} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host1}:80/1k_test.html &> /dev/null
curl -w "$i: %{time_total} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host2}:80/1k_test.html &> /dev/null

div="==================================================================="

# print commands to be run
printf "%s%s\n" $div $div
com1="$count: curl -s http://${host1}:${port1}/${size}_test.html"
com2="$count: curl -s http://${host2}:${port2}/${size}_test.html"
printf "%s\t\t%s\n" "$com1" "$com2"
printf "%s%s\n" $div $div

# perform tests
let i=$count-1
tot1=0
tot2=0
while [ $i -ge 0 ];
do
# tests for host1
res1=`curl -w "$i: %{time_total} %{speed_download} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host1}:${port1}/${size}_test.html`
val1=`echo "${res1}" | cut -f2 -d' '`
tot1=`echo "scale=3;${tot1}+${val1}" | bc`

# tests for host2
res2=`curl -w "$i: %{time_total} %{speed_download} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s http://${host2}:${port2}/${size}_test.html`
val2=`echo "${res2}" | cut -f2 -d' '`
tot2=`echo "scale=3;${tot2}+${val2}" | bc`

printf "%s\t%s\n" "$res1" "$res2"

let i=$i-1
./usleep $delay
done

# print summary
avg1=`echo "scale=3; ${tot1}/${count}" |bc`
avg2=`echo "scale=3; ${tot2}/${count}" |bc`
printf "%s%s\n" $div $div
printf "%s\t\t\t\t\t\t\t%s\n" "AVG: ${tot1}/$count = ${avg1}" "AVG: ${tot2}/$count = ${avg2}"

And now our output looks different to show both paths (looks best on a wide screen):


$ ./curltest 10 50k 1000 mytestserver.net 80 mytestserver.net 81
======================================================================================================================================
10: curl -s http://mytestserver.net:80/50k_test.html        10: curl -s http://mytestserver.net:81/50k_test.html
======================================================================================================================================
9: 0.043 1159070.000 200 50000 http://mytestserver.net:80/50k_test.html 9: 0.059 849300.000 200 50000 http://mytestserver.net:81/50k_test.html
8: 0.036 1400874.000 200 50000 http://mytestserver.net:80/50k_test.html 8: 0.059 846095.000 200 50000 http://mytestserver.net:81/50k_test.html
7: 0.035 1429388.000 200 50000 http://mytestserver.net:80/50k_test.html 7: 0.058 864872.000 200 50000 http://mytestserver.net:81/50k_test.html
6: 0.037 1366194.000 200 50000 http://mytestserver.net:80/50k_test.html 6: 0.056 889410.000 200 50000 http://mytestserver.net:81/50k_test.html
5: 0.036 1406944.000 200 50000 http://mytestserver.net:80/50k_test.html 5: 0.052 969612.000 200 50000 http://mytestserver.net:81/50k_test.html
4: 0.035 1419204.000 200 50000 http://mytestserver.net:80/50k_test.html 4: 0.072 698187.000 200 50000 http://mytestserver.net:81/50k_test.html
3: 0.033 1512447.000 200 50000 http://mytestserver.net:80/50k_test.html 3: 0.058 858295.000 200 50000 http://mytestserver.net:81/50k_test.html
2: 0.036 1403587.000 200 50000 http://mytestserver.net:80/50k_test.html 2: 0.060 839842.000 200 50000 http://mytestserver.net:81/50k_test.html
1: 0.033 1526717.000 200 50000 http://mytestserver.net:80/50k_test.html 1: 0.050 994233.000 200 50000 http://mytestserver.net:81/50k_test.html
0: 0.038 1325345.000 200 50000 http://mytestserver.net:80/50k_test.html 0: 0.055 915969.000 200 50000 http://mytestserver.net:81/50k_test.html
======================================================================================================================================
AVG: .362/10 = .036                         AVG: .579/10 = .057


My Sample Bash:


#!/bin/bash

function usage() {
    echo "Usage:   $0 host count token"
    echo "Example: $0 yourURL 10 jwtToken";
}

# Check that the number of arguments is equal to 3
if [ $# -ne 3 ]; then
    usage;
    exit;
fi

host=$1
count=$2
token=$3

let i=$count-1
tot=0
while [ $i -ge 0 ];
do
    res=$(curl -i \
        -H "content-type:application/json" \
        -H "Authorization:$token" \
        -w "$i: %{time_total} %{http_code} %{size_download} %{url_effective}\n" -o "/dev/null" -s https://$host)
    echo $res

    # time_total is the second space-separated field of the -w output
    val=$(echo $res | cut -f2 -d' ')
    # Add floats without bc
    tot=$(awk "BEGIN {print $tot+$val}")
    let i=i-1
done

# Divide floats without bc
avg=$(awk "BEGIN {print $tot/$count}")
echo "   ........................."
echo "   AVG:  $avg sec"




// Other time-related curl retry flags:
--retry-delay
--retry-max-time
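
These pair with --retry; a minimal sketch (example.com is a placeholder):

curl --retry 5 --retry-delay 2 --retry-max-time 60 https://example.com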