tee into different variables


From the bash code

command1 | tee >(command2) | command3

I want to capture the output of command2 in var2 and the output of command3 in var3.

command1 is I/O-bound and the other commands are costly, but they can start working before command1 finishes.

The order of outputs from command2 and command3 is not fixed, so I tried to use file descriptors in

read -r var2 <<< var3=(command1 | tee >(command2 >&3) | command3) 3>&1

or

read -u 3 -r var2; read -r var3 <<< command1 | tee >(command2 >&3) | command3

but did not succeed.

Is there a way to have the three commands run in parallel, store the results in different variables, and not make temporary files?
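To make the goal concrete, here is the pipeline with hypothetical stand-ins (seq for command1, sed for command2, tr for command3). Capturing command3's output alone is easy with a command substitution; the open question is how to also capture command2's output, which here is simply discarded:

```shell
# Hypothetical stand-ins: command1=seq 3, command2=sed 's/^/x/', command3=tr 1-3 a-c.
# var3 captures command3's output; command2's output is thrown away.
var3=$(seq 3 | tee >(sed 's/^/x/' >/dev/null) | tr 1-3 a-c)
printf '%s\n' "$var3"   # a, b, c on separate lines
```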










  • It's hard to give a negative answer for sure, but I think that would require the shell reading from two pipes at a time, and I can't think of any shell feature that could do that (in Bash, that is). How large are your outputs?

    – ilkkachu, Mar 18 at 21:20

  • @ilkkachu Thanks for the feedback! Each output is less than 4KB.

    – katosh, Mar 18 at 21:23

  • Interesting. I tried using named pipes, but the results were lost due to backgrounding (&).

    – Archemar, Mar 19 at 8:08















bash variable file-descriptors tee






asked Mar 18 at 20:57 by katosh
edited Mar 18 at 21:07











3 Answers
If I understood all your requirements correctly, you could achieve this in bash by creating an unnamed pipe per command, redirecting each command's output to its respective unnamed pipe, and finally retrieving each output from its pipe into a separate variable.



As such, the solution might be like:



: {pipe2}<> <(:)
: {pipe3}<> <(:)

command1 | tee >({ command2 ; echo EOF ; } >&$pipe2) >({ command3 ; echo EOF ; } >&$pipe3) > /dev/null &
var2=$(while read -ru $pipe2 line ; do [ "$line" = EOF ] && break ; echo "$line" ; done)
var3=$(while read -ru $pipe3 line ; do [ "$line" = EOF ] && break ; echo "$line" ; done)

exec {pipe2}<&- {pipe3}<&-


Here note particularly:



  • the use of the <(:) construct; this is an undocumented Bash trick to open "unnamed" pipes

  • the use of a simple echo EOF as a way to notify the while loops that no more output will come. This is necessary because just closing the unnamed pipes (which would normally end any while read loop) is of no use: those pipes are bidirectional, i.e. used for both writing and reading. I know of no way to open (or convert) them into the usual pair of file descriptors, one being the read end and the other the write end.
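For illustration, a minimal check of the <(:) trick (this assumes Linux and bash >= 4.1 for the {var} redirection form; the name p is arbitrary):

```shell
exec {p}<> <(:)     # open an "unnamed" pipe read-write on a fresh fd
echo hello >&$p     # write into the pipe...
read -ru "$p" line  # ...and read it back from the very same fd
echo "$line"        # prints: hello
exec {p}<&-         # close it
```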

In this example I used a pure-bash approach (besides the use of tee) to better clarify the basic algorithm required by these unnamed pipes, but you could do the two assignments with a couple of sed commands in place of the while loops, as in var2="$(sed -ne '/^EOF$/q;p' <&$pipe2)" for var2 and its counterpart for var3, yielding the same result with considerably less typing. That is, the whole thing would be:



Lean solution for small amount of data



: {pipe2}<> <(:)
: {pipe3}<> <(:)

command1 | tee >({ command2 ; echo EOF ; } >&$pipe2) >({ command3 ; echo EOF ; } >&$pipe3) > /dev/null &
var2="$(sed -ne '/^EOF$/q;p' <&$pipe2)"
var3="$(sed -ne '/^EOF$/q;p' <&$pipe3)"

exec {pipe2}<&- {pipe3}<&-


In order to display the destination variables, remember to disable word splitting by clearing IFS, like this:



IFS=
echo "$var2"
echo "$var3"


otherwise you’d lose newlines on output.



The above does look like quite a clean solution. Unfortunately it only works for limited amounts of output, and here your mileage may vary: in my tests I hit problems at around 530K of output. If you are within your (very conservative) 4K limit, you should be all right.



The reason for that limit lies in the fact that the two assignments, i.e. the command-substitution syntax, are synchronous operations: the second assignment runs only after the first has finished, while tee feeds both commands simultaneously, blocking all of them if any of them fills its receiving buffer. A deadlock.



The solution for this requires a slightly more complex script, in order to empty both buffers simultaneously. To this end, a while loop over the two pipes would come in handy.



A more standard solution for any amount of data



A more standard Bashism goes like this:



declare -a var2 var3
while read -r line ; do
    case "$line" in
        cmd2:*) var2+=("${line#cmd2:}") ;;
        cmd3:*) var3+=("${line#cmd3:}") ;;
    esac
done < <(
    command1 | tee >(command2 | stdbuf -oL sed -re 's/^/cmd2:/') >(command3 | stdbuf -oL sed -re 's/^/cmd3:/') > /dev/null
)


Here you multiplex the lines from both commands onto the single shared stdout file descriptor, and then demultiplex that merged output into each respective variable.



Note particularly:



  • the use of indexed arrays as destination variables: just appending to a normal scalar variable becomes horribly slow in the presence of lots of output

  • the use of sed commands to prepend each output line with the string "cmd2:" or "cmd3:" (respectively), so the script knows which variable each line belongs to

  • the necessary use of stdbuf -oL to set line buffering for the commands' output: the two commands here share the same output file descriptor, and as such they could easily clobber each other's output in a typical race condition if they happen to stream out data at the same time; line-buffered output helps avoid that

  • note also that stdbuf is only required for the last command of each chain, i.e. the one writing directly to the shared file descriptor, which in this case are the sed commands that prepend each commandX's output with its distinguishing prefix

One safe way to properly display such indexed arrays can be like this:



for ((i = 0; i < ${#var2[*]}; i++)) ; do
    echo "${var2[$i]}"
done


Of course you can also just use "${var2[*]}" as in:


echo "${var2[*]}"


but that is not very efficient when there are many lines.






  • This is very interesting, but what makes it better than command1 | tee >(command2 | sed 's/^/cmd2:/') | command3 | sed 's/^/cmd3:/'?

    – katosh, Mar 20 at 9:26

  • I tried to get it to work but failed to capture any output. How do I manage to store it in var2 and var3?

    – katosh, Mar 20 at 11:17

  • @katosh: I thought you meant to have all commands run in parallel, including the shell launching the three commands. If I misunderstood that bit then the coproc is of no use. Also, I actually didn't know about the <(:) construct; it doesn't appear anywhere in bash's docs, so coproc was the only way I knew to get an unnamed pipe in bash. However, in your comment above you still need to at least not pipe tee's output to command3, otherwise it receives command2's output too (if you don't also redirect that away).

    – LL3, Mar 20 at 12:42

  • @katosh: As to how to scatter the multiplexed output into the respective variables, I'm going to update my post, maybe also including a version using the <(:) construct. I'd have also replied to your own answer, but I'm not yet allowed to comment on others' answers because I'm still below 50 reputation.

    – LL3, Mar 20 at 12:43


















So you want to pipe the output of cmd1 into both cmd2 and cmd3 and get both the output of cmd2 and cmd3 into different variables?



Then it seems you need two pipes from the shell, one connected to cmd2's output and one to cmd3's output, and the shell to use select()/poll() to read from those two pipes.



bash won't do for that; you'd need a more advanced shell like zsh. zsh doesn't have a raw interface to pipe(), but on Linux you can use the fact that /dev/fd/x on a regular pipe acts like a named pipe, and use an approach similar to the one at Read / write to the same file descriptor with shell redirection:



#! /bin/zsh -

cmd1() seq 20
cmd2() sed 's/1/<&>/g'
cmd3() tr 0-9 A-J

zmodload zsh/zselect
zmodload zsh/system
typeset -A done out

{
  cmd1 > >(cmd2 >&3 3>&-) > >(cmd3 >&5 5>&-) 3>&- 5>&- &
  exec 4< /dev/fd/3 6< /dev/fd/5 3>&- 5>&-
  while ((! (done[4] && done[6]))) && zselect -A ready 4 6; do
    for fd (${(k)ready[(R)*r*]})
      sysread -i $fd buf && out[$fd]+=$buf || done[$fd]=1
  done
} 3> >(:) 5> >(:)

printf '%s output: <%s>\n' cmd2 "$out[4]" cmd3 "$out[6]"





  • Why the - in the shebang?

    – terdon, Mar 19 at 11:23

  • @terdon, see Why the "-" in the "#! /bin/sh -" shebang?

    – Stéphane Chazelas, Mar 19 at 11:28

  • Thanks! I hadn't seen that.

    – terdon, Mar 19 at 11:36

  • Thank you for that crazy solution. I did not expect the problem to be this complicated. Sadly, zsh is not an option for the project since not all users of the script will have it installed. But I can learn something from it anyway!

    – katosh, Mar 20 at 9:52


















I found something that seems to work nicely:



exec 3<> <(:)
var3=$(command1 | tee >(command2 >&3) | command3)
var2=$(while IFS= read -t .01 -r -u 3 line; do printf '%s\n' "$line"; done)


It works by attaching an anonymous pipe <(:) to file descriptor 3 and redirecting the output of command2 to it. var3 captures the output of command3, and the last line reads from file descriptor 3 until it has not received any new data for 0.01 seconds.



It only works for up to 65536 bytes of output from command2, which appears to be the capacity of the anonymous pipe's buffer.



I do not like the last line of the solution. I would rather read everything in at once, not waiting 0.01 seconds but stopping as soon as the buffer is empty. But I do not know a better way.
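One way to drop the timeout is to borrow the EOF-sentinel idea from LL3's answer: have command2 print a marker line when it finishes, and read fd 3 only until that marker. This assumes the literal line EOF never occurs in command2's real output; seq, sed, and tr below are hypothetical stand-ins for command1, command2, and command3:

```shell
exec 3<> <(:)
# stand-ins: command1=seq 3, command2=sed 's/^/x/', command3=tr 1-3 a-c
var3=$(seq 3 | tee >({ sed 's/^/x/' ; echo EOF ; } >&3) | tr 1-3 a-c)
var2=$(while IFS= read -r -u 3 line; do
           [ "$line" = EOF ] && break   # sentinel: no more output coming
           printf '%s\n' "$line"
       done)
exec 3<&-
printf '%s\n' "$var2"   # x1, x2, x3 on separate lines
printf '%s\n' "$var3"   # a, b, c on separate lines
```

The same 65536-byte pipe-buffer limit still applies, but there is no fixed wait: the read loop blocks exactly until the sentinel arrives.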






  • The problem in your last line is that fd 3 does not actually get closed at the end of output, hence the read does not sense the EOF event. See also my own updated answer for more info.

    – LL3, Mar 20 at 20:30










Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "106"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f507061%2ftee-into-different-variables%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























3 Answers
3






active

oldest

votes








3 Answers
3






active

oldest

votes









active

oldest

votes






active

oldest

votes









1














If I understood well all your requirements you could achieve that in bash by creating an unnamed pipe per command, then redirecting each command’s output to its respective unnamed pipe, and finally retrieving each output from its pipe into a separate variable.



As such, the solution might be like:



: pipe2<> <(:)
: pipe3<> <(:)

command1 | tee >( command2 ; echo EOF ; >&$pipe2) >( command3 ; echo EOF ; >&$pipe3) > /dev/null &
var2=$(while read -ru $pipe2 line ; do [ "$line" = EOF ] && break ; echo "$line" ; done)
var3=$(while read -ru $pipe3 line ; do [ "$line" = EOF ] && break ; echo "$line" ; done)

exec pipe2<&- pipe3<&-


Here note particularly:



  • the use of the <(:) construct; this is an undocumented Bash's trick to open "unnamed" pipes

  • the use of a simple echo EOF as a way to notify the while loops that no more output will come. This is necessary because it's no use to just close the unnamed pipes (which would normally end any while read loop) because those pipes are bidirectional, ie used for both writing and reading. I know no way to open (or convert) them into the usual couple of file-descriptors one being the read-end and the other its write-end.

In this example I used a pure-bash approach (beside the use of tee) to better clarify the basic algorithm that is required by the use of these unnamed pipes, but you could do the two assignments with a couple of sed in place of the while loops, as in var2="$(sed -ne '/^EOF$/q;p' <&$pipe2)" for variable2 and its respective for variable3, yielding the same result with quite less typing. That is, the whole thing would be:



Lean solution for small amount of data



: pipe2<> <(:)
: pipe3<> <(:)

command1 | tee >( command2 ; echo EOF ; >&$pipe2) >( command3 ; echo EOF ; >&$pipe3) > /dev/null &
var2="$(sed -ne '/^EOF$/q;p' <&$pipe2)"
var3="$(sed -ne '/^EOF$/q;p' <&$pipe3)"

exec pipe2<&- pipe3<&-


In order to display the destination variables, remember to disable word splitting by clearing IFS, like this:



IFS=
echo "$var2"
echo "$var3"


otherwise you’d lose newlines on output.



The above does look quite a clean solution indeed. Unfortunately it can only work for not-too-much output, and here your mileage may vary: on my tests I hit problems on around 530k of output. If you are within the (well very conservative) limit of 4k you should be all right.



The reason for that limit lies to the fact that two assignments like those, ie command substitution syntax, are synchronous operations, which means that the second assignment runs only after the first is finished, while on the contrary the tee feeds both commands simultaneously and blocking all of them if any happens to fill its receiving buffer. A deadlock.



The solution for this requires a slightly more complex script, in order to empty both buffers simultaneously. To this end, a while loop over the two pipes would come in handy.



A more standard solution for any amount of data



A more standard Bashism is like:



declare -a var2 var3
while read -r line ; do
case "$line" in
cmd2:*) var2+=("$line#cmd2:") ;;
cmd3:*) var3+=("$line#cmd3:") ;;
esac
done < <(
command1 | tee >(command2 | stdbuf -oL sed -re 's/^/cmd2:/') >(command3 | stdbuf -oL sed -re 's/^/cmd3:/') > /dev/null
)


Here you multiplex the lines from both commands onto the single standard “stdout” file-descriptor, and then subsequently demultiplex that merged output onto each respective variable.



Note particularly:



  • the use of indexed arrays as destination variables: this is because just appending to a normal variable becomes horribly slow in presence of lots of output

  • the use of sed commands to prepend each output line with the strings "cmd2:" or "cmd3:" (respectively) for the script to know which variable each line belongs to

  • the necessary use of stdbuf -oL to set line-buffering for commands’ output: this is because the two commands here share the same output file-descriptor, and as such they would easily override each other’s output in the most typical race condition if they happen to stream out data at the same time; line-buffering output helps avoiding that

  • note also that such use of stdbuf is only required for the last command of each chain, ie the one outputting directly to the shared file-descriptor, which in this case are the sed commands that prepend each commandX’s output with its distinguishing prefix

One safe way to properly display such indexed arrays can be like this:



for ((i = 0; i < $#var2[*]; i++)) ; do
echo "$var2[$i]"
done


Of course you can also just use "$var2[*]" as in:



echo "$var2[*]"


but that is not very efficient when there are many lines.






share|improve this answer

























  • This is very interesting but what makes it better than command1 | tee >(command2 | sed 's/^/cmd2:/') | command3 | sed 's/^/cmd3:/.?

    – katosh
    Mar 20 at 9:26











  • I tried to get it to work but failed to capture any output. How do I manage to store it in var2 and var3?

    – katosh
    Mar 20 at 11:17











  • @katosh: I thought you meant to have all commands run in parallel, including the shell launching the three commands. If I misunderstood that bit then the coproc is of no use. Also, I actually didn’t know about the <(:) construct.. it doesn’t appear anywhere in bash’s docs!! so coproc was the only way I knew to have an unnamed pipe in bash. However in your comment above you still need to at least not pipe tee’s output to command3, otherwise it receives command2’s output too (if you don’t also redirect that away)

    – LL3
    Mar 20 at 12:42












  • @katosh: As to how to scatter the multiplexed output into their respective variable, I’m going to update my post, also maybe including a version using the <(:) construct. I’d have also replied into your own Answer but I’m not yet allowed to comment other’s answers because I’m still below 50 reputation

    – LL3
    Mar 20 at 12:43















1














If I understood well all your requirements you could achieve that in bash by creating an unnamed pipe per command, then redirecting each command’s output to its respective unnamed pipe, and finally retrieving each output from its pipe into a separate variable.



As such, the solution might be like:



: pipe2<> <(:)
: pipe3<> <(:)

command1 | tee >( command2 ; echo EOF ; >&$pipe2) >( command3 ; echo EOF ; >&$pipe3) > /dev/null &
var2=$(while read -ru $pipe2 line ; do [ "$line" = EOF ] && break ; echo "$line" ; done)
var3=$(while read -ru $pipe3 line ; do [ "$line" = EOF ] && break ; echo "$line" ; done)

exec pipe2<&- pipe3<&-


Here note particularly:



  • the use of the <(:) construct; this is an undocumented Bash's trick to open "unnamed" pipes

  • the use of a simple echo EOF as a way to notify the while loops that no more output will come. This is necessary because it's no use to just close the unnamed pipes (which would normally end any while read loop) because those pipes are bidirectional, ie used for both writing and reading. I know no way to open (or convert) them into the usual couple of file-descriptors one being the read-end and the other its write-end.

In this example I used a pure-bash approach (beside the use of tee) to better clarify the basic algorithm that is required by the use of these unnamed pipes, but you could do the two assignments with a couple of sed in place of the while loops, as in var2="$(sed -ne '/^EOF$/q;p' <&$pipe2)" for variable2 and its respective for variable3, yielding the same result with quite less typing. That is, the whole thing would be:



Lean solution for small amount of data



: pipe2<> <(:)
: pipe3<> <(:)

command1 | tee >( command2 ; echo EOF ; >&$pipe2) >( command3 ; echo EOF ; >&$pipe3) > /dev/null &
var2="$(sed -ne '/^EOF$/q;p' <&$pipe2)"
var3="$(sed -ne '/^EOF$/q;p' <&$pipe3)"

exec pipe2<&- pipe3<&-


In order to display the destination variables, remember to disable word splitting by clearing IFS, like this:



IFS=
echo "$var2"
echo "$var3"


otherwise you’d lose newlines on output.



The above does look quite a clean solution indeed. Unfortunately it can only work for not-too-much output, and here your mileage may vary: on my tests I hit problems on around 530k of output. If you are within the (well very conservative) limit of 4k you should be all right.



The reason for that limit lies to the fact that two assignments like those, ie command substitution syntax, are synchronous operations, which means that the second assignment runs only after the first is finished, while on the contrary the tee feeds both commands simultaneously and blocking all of them if any happens to fill its receiving buffer. A deadlock.



The solution for this requires a slightly more complex script, in order to empty both buffers simultaneously. To this end, a while loop over the two pipes would come in handy.



A more standard solution for any amount of data



A more standard Bashism is like:



declare -a var2 var3
while read -r line ; do
case "$line" in
cmd2:*) var2+=("$line#cmd2:") ;;
cmd3:*) var3+=("$line#cmd3:") ;;
esac
done < <(
command1 | tee >(command2 | stdbuf -oL sed -re 's/^/cmd2:/') >(command3 | stdbuf -oL sed -re 's/^/cmd3:/') > /dev/null
)


Here you multiplex the lines from both commands onto the single standard “stdout” file-descriptor, and then subsequently demultiplex that merged output onto each respective variable.



Note particularly:



  • the use of indexed arrays as destination variables: this is because just appending to a normal variable becomes horribly slow in presence of lots of output

  • the use of sed commands to prepend each output line with the strings "cmd2:" or "cmd3:" (respectively) for the script to know which variable each line belongs to

  • the necessary use of stdbuf -oL to set line-buffering for commands’ output: this is because the two commands here share the same output file-descriptor, and as such they would easily override each other’s output in the most typical race condition if they happen to stream out data at the same time; line-buffering output helps avoiding that

  • note also that such use of stdbuf is only required for the last command of each chain, ie the one outputting directly to the shared file-descriptor, which in this case are the sed commands that prepend each commandX’s output with its distinguishing prefix

One safe way to properly display such indexed arrays can be like this:



for ((i = 0; i < $#var2[*]; i++)) ; do
echo "$var2[$i]"
done


Of course you can also just use "$var2[*]" as in:



echo "$var2[*]"


but that is not very efficient when there are many lines.






share|improve this answer

























  • This is very interesting but what makes it better than command1 | tee >(command2 | sed 's/^/cmd2:/') | command3 | sed 's/^/cmd3:/.?

    – katosh
    Mar 20 at 9:26











  • I tried to get it to work but failed to capture any output. How do I manage to store it in var2 and var3?

    – katosh
    Mar 20 at 11:17











  • @katosh: I thought you meant to have all commands run in parallel, including the shell launching the three commands. If I misunderstood that bit then the coproc is of no use. Also, I actually didn’t know about the <(:) construct.. it doesn’t appear anywhere in bash’s docs!! so coproc was the only way I knew to have an unnamed pipe in bash. However in your comment above you still need to at least not pipe tee’s output to command3, otherwise it receives command2’s output too (if you don’t also redirect that away)

    – LL3
    Mar 20 at 12:42












  • @katosh: As to how to scatter the multiplexed output into their respective variable, I’m going to update my post, also maybe including a version using the <(:) construct. I’d have also replied into your own Answer but I’m not yet allowed to comment other’s answers because I’m still below 50 reputation

    – LL3
    Mar 20 at 12:43













1












1








1







If I understood well all your requirements you could achieve that in bash by creating an unnamed pipe per command, then redirecting each command’s output to its respective unnamed pipe, and finally retrieving each output from its pipe into a separate variable.



As such, the solution might be like:



: {pipe2}<> <(:)
: {pipe3}<> <(:)

command1 | tee >({ command2 ; echo EOF ; } >&$pipe2) >({ command3 ; echo EOF ; } >&$pipe3) > /dev/null &
var2=$(while read -ru $pipe2 line ; do [ "$line" = EOF ] && break ; echo "$line" ; done)
var3=$(while read -ru $pipe3 line ; do [ "$line" = EOF ] && break ; echo "$line" ; done)

exec {pipe2}<&- {pipe3}<&-


Here note particularly:



  • the use of the <(:) construct: an undocumented Bash trick to open "unnamed" pipes

  • the use of a simple echo EOF to notify the while loops that no more output will come. This is necessary because merely closing the unnamed pipes (which would normally terminate any while read loop) does not help: those pipes are bidirectional, i.e. open for both writing and reading, and I know of no way to split one into the usual pair of file descriptors, one being the read end and the other the write end.
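The two points above can be demonstrated in a tiny self-contained sketch. The fd variable p and the sample echo lines below are placeholders standing in for the real commands (this assumes bash >= 4.1 for the {var} fd redirections):

```shell
#!/usr/bin/env bash
# Sketch of the unnamed-pipe + sentinel technique.
: {p}<> <(:)                                   # open a bidirectional unnamed pipe on a fresh fd
{ echo hello ; echo world ; echo EOF ; } >&$p  # writer side, ending with the sentinel line
result=$(while read -ru $p line ; do
    [ "$line" = EOF ] && break                 # the sentinel replaces the missing EOF event
    printf '%s\n' "$line"
done)
exec {p}<&-                                    # close the fd when done
printf '%s\n' "$result"
```

Note that the reader would block forever without the sentinel, since the shell itself still holds the write end of the pipe open.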

In this example I used a pure-bash approach (aside from tee itself) to make the basic algorithm behind these unnamed pipes clearer, but you could do the two assignments with a couple of sed commands in place of the while loops, as in var2="$(sed -ne '/^EOF$/q;p' <&$pipe2)" and its counterpart for var3, yielding the same result with considerably less typing. The whole thing then becomes:



Lean solution for small amount of data



: {pipe2}<> <(:)
: {pipe3}<> <(:)

command1 | tee >({ command2 ; echo EOF ; } >&$pipe2) >({ command3 ; echo EOF ; } >&$pipe3) > /dev/null &
var2="$(sed -ne '/^EOF$/q;p' <&$pipe2)"
var3="$(sed -ne '/^EOF$/q;p' <&$pipe3)"

exec {pipe2}<&- {pipe3}<&-


In order to display the destination variables, remember to quote the expansions, and clear IFS to disable word splitting if you ever expand them unquoted:

IFS=
echo "$var2"
echo "$var3"

otherwise you'd lose the embedded newlines on output.



The above does look like quite a clean solution. Unfortunately it only works for limited amounts of output, and here your mileage may vary: in my tests I started hitting problems at around 530 kB of output. If you stay within the (very conservative) limit of 4 kB you should be all right.



The reason for that limit lies in the fact that the two assignments, i.e. command-substitution syntax, are synchronous operations: the second assignment runs only after the first has finished, while tee feeds both commands simultaneously and blocks all of them as soon as any one of them fills its receiving buffer. A deadlock.



The solution to this requires a slightly more complex script that empties both buffers simultaneously; a while loop reading from the two pipes comes in handy here.



A more standard solution for any amount of data



A more standard Bash approach looks like this:



declare -a var2 var3
while read -r line ; do
    case "$line" in
        cmd2:*) var2+=("${line#cmd2:}") ;;
        cmd3:*) var3+=("${line#cmd3:}") ;;
    esac
done < <(
    command1 | tee >(command2 | stdbuf -oL sed -re 's/^/cmd2:/') >(command3 | stdbuf -oL sed -re 's/^/cmd3:/') > /dev/null
)


Here you multiplex the lines from both commands onto the single shared stdout file descriptor, and then demultiplex that merged output into each respective variable.



Note particularly:



  • the use of indexed arrays as destination variables: appending to a plain scalar variable becomes horribly slow when there is a lot of output

  • the use of sed to prepend each output line with "cmd2:" or "cmd3:" (respectively), so the script knows which variable each line belongs to

  • the necessary use of stdbuf -oL to line-buffer the commands' output: the two commands share the same output file descriptor, so without it they could easily corrupt each other's output in a classic race condition whenever they stream out data at the same time; line buffering helps avoid that

  • note also that stdbuf is only required for the last command of each chain, i.e. the one writing directly to the shared file descriptor, which here are the sed commands prepending each command's distinguishing prefix
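To make the pattern concrete, here is a minimal runnable sketch. The stand-in commands are my own choices, not from the question: seq for command1, sed 's/1/X/g' for command2, and tr 0-9 A-J for command3.

```shell
#!/usr/bin/env bash
# Concrete instance of the multiplex/demultiplex pattern above.
declare -a var2 var3
while read -r line ; do
    case "$line" in
        cmd2:*) var2+=("${line#cmd2:}") ;;
        cmd3:*) var3+=("${line#cmd3:}") ;;
    esac
done < <(
    seq 3 | tee >(sed 's/1/X/g' | stdbuf -oL sed -e 's/^/cmd2:/') \
                >(tr 0-9 A-J   | stdbuf -oL sed -e 's/^/cmd3:/') > /dev/null
)
# The relative interleaving of cmd2:/cmd3: lines is nondeterministic, but
# each array still receives exactly its own command's lines, in order.
printf 'var2: %s\n' "${var2[@]}"
printf 'var3: %s\n' "${var3[@]}"
```

Here seq 3 emits 1..3, so var2 ends up holding X, 2, 3 and var3 holds B, C, D.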

One safe way to properly display such indexed arrays can be like this:



for ((i = 0; i < ${#var2[*]}; i++)) ; do
    echo "${var2[$i]}"
done


Of course you can also just use "${var2[*]}" as in:

echo "${var2[*]}"

but that joins all the lines with spaces (the first character of IFS) rather than newlines, and it is not very efficient when there are many lines.
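If you do want the whole array as one newline-joined string, you can set IFS to a newline just for the expansion. A small sketch (the sample array contents are made up):

```shell
#!/usr/bin/env bash
# Join an indexed array with newlines by setting IFS for "${arr[*]}".
var2=(first second third)                      # hypothetical array contents
joined=$(IFS=$'\n'; printf '%s' "${var2[*]}")  # subshell keeps IFS change local
printf '%s\n' "$joined"
```

Doing the assignment inside the command substitution's subshell means the global IFS is left untouched.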






answered Mar 20 at 2:20, edited 4 hours ago
– LL3
  • This is very interesting but what makes it better than command1 | tee >(command2 | sed 's/^/cmd2:/') | command3 | sed 's/^/cmd3:/'?

    – katosh
    Mar 20 at 9:26











  • I tried to get it to work but failed to capture any output. How do I manage to store it in var2 and var3?

    – katosh
    Mar 20 at 11:17











  • @katosh: I thought you meant to have all commands run in parallel, including the shell launching the three commands. If I misunderstood that bit then the coproc is of no use. Also, I actually didn’t know about the <(:) construct.. it doesn’t appear anywhere in bash’s docs!! so coproc was the only way I knew to have an unnamed pipe in bash. However in your comment above you still need to at least not pipe tee’s output to command3, otherwise it receives command2’s output too (if you don’t also redirect that away)

    – LL3
    Mar 20 at 12:42












  • @katosh: As to how to scatter the multiplexed output into their respective variable, I’m going to update my post, also maybe including a version using the <(:) construct. I’d have also replied into your own Answer but I’m not yet allowed to comment other’s answers because I’m still below 50 reputation

    – LL3
    Mar 20 at 12:43
So you want to pipe the output of cmd1 into both cmd2 and cmd3 and get both the output of cmd2 and cmd3 into different variables?



Then it seems you need two pipes from the shell, one connected to cmd2's output and one to cmd3's output, and the shell to use select()/poll() to read from those two pipes.



bash won't do for that; you'd need a more advanced shell like zsh. zsh doesn't have a raw interface to pipe(), but on Linux you can use the fact that /dev/fd/x on a regular pipe acts like a named pipe, and use an approach similar to the one at Read / write to the same file descriptor with shell redirection.



#! /bin/zsh -

cmd1() seq 20
cmd2() sed 's/1/<&>/g'
cmd3() tr 0-9 A-J

zmodload zsh/zselect
zmodload zsh/system
typeset -A done out

cmd1 > >(cmd2 >&3 3>&-) > >(cmd3 >&5 5>&-) 3>&- 5>&- &
exec 4< /dev/fd/3 6< /dev/fd/5 3>&- 5>&-
while ((! (done[4] && done[6]))) && zselect -A ready 4 6; do
  for fd (${(k)ready[(R)*r*]})
    sysread -i $fd && out[$fd]+=$REPLY || done[$fd]=1
done 3> >(:) 5> >(:)

printf '%s output: <%s>\n' cmd2 "$out[4]" cmd3 "$out[6]"





answered Mar 18 at 22:08 by Stéphane Chazelas, edited Mar 19 at 11:23 by terdon
  • Why the - in the shebang?

    – terdon
    Mar 19 at 11:23







  • @terdon, see Why the "-" in the "#! /bin/sh -" shebang?

    – Stéphane Chazelas
    Mar 19 at 11:28











  • Thanks! I hadn't seen that.

    – terdon
    Mar 19 at 11:36











  • Thank you for that crazy solution. I did not expect the problem to be this complicated. Sadly zsh is not an option for the project since not all users of the script will have it installed. But I can learn something from it anyway!

    – katosh
    Mar 20 at 9:52
I found something that seems to work nicely:



exec 3<> <(:)
var3=$(command1 | tee >(command2 >&3) | command3)
var2=$(while IFS= read -t .01 -r -u 3 line; do printf '%s\n' "$line"; done)


It works by opening an anonymous pipe, <(:), on file descriptor 3 and piping the output of command2 to it. var3 captures the output of command3, and the last line reads from file descriptor 3 until no new data has arrived for 0.01 seconds.



It only works for up to 65536 bytes of output from command2, which appears to be the capacity of the anonymous pipe's buffer.



I do not like the last line of the solution. I would rather read everything at once, not wait 0.01 seconds each time, and stop as soon as the buffer is empty, but I do not know a better way.
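One way to avoid the timeout entirely is the EOF-sentinel idea from LL3's answer: end the writer with a known marker line and read until you see it. A sketch, where the brace group of echo lines stands in for command2's output:

```shell
#!/usr/bin/env bash
# Drain fd 3 deterministically with a sentinel line instead of a timeout.
exec 3<> <(:)
{ echo data1; echo data2; echo EOF; } >&3  # stand-in for command2's output
var2=$(while IFS= read -r -u 3 line; do
    [ "$line" = EOF ] && break             # sentinel replaces the EOF event
    printf '%s\n' "$line"
done)
exec 3<&-
printf '%s\n' "$var2"
```

This stops the instant the sentinel arrives, regardless of how slowly the writer produces data, at the cost of requiring that the real output can never contain the literal marker line.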






answered Mar 20 at 10:18
– katosh
  • The problem in your last line is that fd 3 does not actually get closed at the end of output, hence the read does not sense the eof event. See also my own updated answer for more info.

    – LL3
    Mar 20 at 20:30
Tags: bash, file-descriptors, tee, variable
