##
##  GNU Pth - The GNU Portable Threads
##  Copyright (c) 1999-2007 Ralf S. Engelschall <rse@engelschall.com>
##
##  This file is part of GNU Pth, a non-preemptive thread scheduling
##  library which can be found at http://www.gnu.org/software/pth/.
##
##  This library is free software; you can redistribute it and/or
##  modify it under the terms of the GNU Lesser General Public
##  License as published by the Free Software Foundation; either
##  version 2.1 of the License, or (at your option) any later version.
##
##  This library is distributed in the hope that it will be useful,
##  but WITHOUT ANY WARRANTY; without even the implied warranty of
##  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
##  Lesser General Public License for more details.
##
##  You should have received a copy of the GNU Lesser General Public
##  License along with this library; if not, write to the Free Software
##  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
##  USA, or contact Ralf S. Engelschall <rse@engelschall.com>.
##
##  pth.pod: Pth manual page
##
#   ``Real programmers don't document.
#     Documentation is for wimps who can't
#     read the listings of the object deck.''

=pod

=head1 NAME

B<pth> - GNU Portable Threads

=head1 VERSION

GNU Pth PTH_VERSION_STR

=head1 SYNOPSIS

=over 4

=item B<Global Library Management>

pth_init, pth_kill, pth_ctrl, pth_version.

=item B<Thread Attribute Handling>

pth_attr_of, pth_attr_new, pth_attr_init, pth_attr_set, pth_attr_get,
pth_attr_destroy.

=item B<Thread Control>

pth_spawn, pth_once, pth_self, pth_suspend, pth_resume, pth_yield,
pth_nap, pth_wait, pth_cancel, pth_abort, pth_raise, pth_join,
pth_exit.

=item B<Utilities>

pth_fdmode, pth_time, pth_timeout, pth_sfiodisc.

=item B<Cancellation Management>

pth_cancel_point, pth_cancel_state.

=item B<Event Handling>

pth_event, pth_event_typeof, pth_event_extract, pth_event_concat,
pth_event_isolate, pth_event_walk, pth_event_status, pth_event_free.

=item B<Key-Based Storage>

pth_key_create, pth_key_delete, pth_key_setdata, pth_key_getdata.

=item B<Message Port Communication>

pth_msgport_create, pth_msgport_destroy, pth_msgport_find,
pth_msgport_pending, pth_msgport_put, pth_msgport_get,
pth_msgport_reply.

=item B<Thread Cleanups>

pth_cleanup_push, pth_cleanup_pop.

=item B<Process Forking>

pth_atfork_push, pth_atfork_pop, pth_fork.

=item B<Synchronization>

pth_mutex_init, pth_mutex_acquire, pth_mutex_release, pth_rwlock_init,
pth_rwlock_acquire, pth_rwlock_release, pth_cond_init, pth_cond_await,
pth_cond_notify, pth_barrier_init, pth_barrier_reach.

=item B<User-Space Context>

pth_uctx_create, pth_uctx_make, pth_uctx_switch, pth_uctx_destroy.

=item B<Generalized POSIX Replacement API>

pth_sigwait_ev, pth_accept_ev, pth_connect_ev, pth_select_ev,
pth_poll_ev, pth_read_ev, pth_readv_ev, pth_write_ev, pth_writev_ev,
pth_recv_ev, pth_recvfrom_ev, pth_send_ev, pth_sendto_ev.

=item B<Standard POSIX Replacement API>

pth_nanosleep, pth_usleep, pth_sleep, pth_waitpid, pth_system,
pth_sigmask, pth_sigwait, pth_accept, pth_connect, pth_select,
pth_pselect, pth_poll, pth_read, pth_readv, pth_write, pth_writev,
pth_pread, pth_pwrite, pth_recv, pth_recvfrom, pth_send, pth_sendto.
=back

=head1 DESCRIPTION

  ____  _   _
 |  _ \| |_| |__
 | |_) | __| '_ \    ``Only those who attempt
 |  __/| |_| | | |     the absurd can achieve
 |_|    \__|_| |_|     the impossible.''

B<Pth> is a very portable POSIX/ANSI-C based library for Unix
platforms which provides non-preemptive priority-based scheduling for
multiple threads of execution (aka `multithreading') inside
event-driven applications. All threads run in the same address space
of the application process, but each thread has its own individual
program counter, run-time stack, signal mask and C<errno> variable.

The thread scheduling itself is done in a cooperative way, i.e., the
threads are managed and dispatched by a priority- and event-driven
non-preemptive scheduler. The intention is that this way both better
portability and run-time performance are achieved than with preemptive
scheduling. The event facility allows threads to wait until various
types of internal and external events occur, including pending I/O on
file descriptors, asynchronous signals, elapsed timers, pending I/O on
message ports, thread and process termination, and even results of
customized callback functions.

B<Pth> also provides an optional emulation API for POSIX.1c threads
(`Pthreads') which can be used for backward compatibility to existing
multithreaded applications. See B<Pth>'s pthread(3) manual page for
details.

=head2 Threading Background

When programming event-driven applications, usually servers, lots of
regular jobs and one-shot requests have to be processed in parallel.
To efficiently simulate this parallel processing on uniprocessor
machines, we use `multitasking' -- that is, we have the application
ask the operating system to spawn multiple instances of itself. On
Unix, typically the kernel implements multitasking in a preemptive and
priority-based way through heavy-weight processes spawned with
fork(2). These processes usually do I<not> share a common address
space.
Instead they are clearly separated from each other, and are created by
directly cloning a process address space (although modern kernels use
memory segment mapping and copy-on-write semantics to avoid
unnecessary copying of physical memory).

The drawbacks are obvious: Sharing data between the processes is
complicated, and can usually only be done efficiently through shared
memory (which itself is not very portable). Synchronization is
complicated because of the preemptive nature of the Unix scheduler
(one has to use I<atomic> locks, etc). The machine's resources can be
exhausted very quickly when the server application has to serve too
many long-running requests (heavy-weight processes cost memory). And
when each request spawns a sub-process to handle it, the server
performance and responsiveness are horrible (heavy-weight processes
cost time to spawn). Finally, the server application doesn't scale
very well with the load because of these resource problems. In
practice, lots of tricks are usually used to overcome these problems
-- ranging from pre-forked sub-process pools to semi-serialized
processing, etc.

One of the most elegant ways to solve these resource- and data-sharing
problems is to have multiple I<light-weight> threads of execution
inside a single (heavy-weight) process, i.e., to use
I<multithreading>. Those I<threads> usually improve responsiveness and
performance of the application, often improve and simplify the
internal program structure, and most importantly, require fewer system
resources than heavy-weight processes. Threads are neither the optimal
run-time facility for all types of applications, nor can all
applications benefit from them. But at least event-driven server
applications usually benefit greatly from using threads.

=head2 The World of Threading

Even though lots of documents exist which describe and define the
world of threading, to understand B<Pth> you need only basic knowledge
about threading.
The following definitions of thread-related terms should at least help
you understand thread programming enough to allow you to use B<Pth>.

=over 2

=item B<o> B<process> vs. B<thread>

A process on Unix systems consists of at least the following
fundamental ingredients: I<virtual memory table>, I<program code>,
I<program counter>, I<heap memory>, I<stack memory>, I<stack pointer>,
I<file descriptor set>, I<signal table>. On every process switch, the
kernel saves and restores these ingredients for the individual
processes. On the other hand, a thread consists of only a private
program counter, stack memory, stack pointer and signal table. All
other ingredients, in particular the virtual memory, it shares with
the other threads of the same process.

=item B<o> B<kernel-space> vs. B<user-space> threading

Threads on a Unix platform traditionally can be implemented either
inside kernel-space or user-space. When threads are implemented by the
kernel, the thread context switches are performed by the kernel
without the application's knowledge. Similarly, when threads are
implemented in user-space, the thread context switches are performed
by an application library, without the kernel's knowledge. There also
are hybrid threading approaches where, typically, a user-space library
binds one or more user-space threads to one or more kernel-space
threads (there usually called light-weight processes - or in short
LWPs).

User-space threads are usually more portable and can perform faster
and cheaper context switches (for instance via swapcontext(2) or
setjmp(3)/longjmp(3)) than kernel based threads. On the other hand,
kernel-space threads can take advantage of multiprocessor machines and
don't have any inherent I/O blocking problems. Kernel-space threads
are usually scheduled in a preemptive way side-by-side with the
underlying processes. User-space threads on the other hand use either
preemptive or non-preemptive scheduling.

=item B<o> B<preemptive> vs. B<non-preemptive> thread scheduling

In preemptive scheduling, the scheduler lets a thread execute until a
blocking situation occurs (usually a function call which would block)
or the assigned timeslice elapses.
Then it withdraws control from the thread without a chance for the
thread to object. This is usually realized by interrupting the thread
through a hardware interrupt signal (for kernel-space threads) or a
software interrupt signal (for user-space threads), like C<SIGALRM> or
C<SIGVTALRM>. In non-preemptive scheduling, once a thread has received
control from the scheduler it keeps it until either a blocking
situation occurs (again a function call which would block and instead
switches back to the scheduler) or the thread explicitly yields
control back to the scheduler in a cooperative way.

=item B<o> B<concurrency> vs. B<parallelism>

Concurrency exists when at least two threads are I<in progress> at the
same time. Parallelism arises when at least two threads are
I<executing> simultaneously. Real parallelism can be only achieved on
multiprocessor machines, of course. But one also usually speaks of
parallelism or I<high concurrency> in the context of preemptive thread
scheduling and of I<low concurrency> in the context of non-preemptive
thread scheduling.

=item B<o> B<responsiveness>

The responsiveness of a system can be described by the user-visible
delay until the system responds to an external request. When this
delay is small enough and the user doesn't recognize a noticeable
delay, the responsiveness of the system is considered good. When the
user recognizes or is even annoyed by the delay, the responsiveness of
the system is considered bad.

=item B<o> B<reentrant>, B<thread-safe> and B<asynchronous-safe> functions

A reentrant function is one that behaves correctly if it is called
simultaneously by several threads and then also executes
simultaneously. Functions that access global state, such as memory or
files, of course, need to be carefully designed in order to be
reentrant. Two traditional approaches to solve these problems are
caller-supplied states and thread-specific data.

Thread-safety is the avoidance of I<race conditions>, i.e., situations
in which data is set to either a correct or an incorrect value
depending upon the (unpredictable) order in which multiple threads
access and modify the data.
So a function is thread-safe when it still behaves semantically
correctly when called simultaneously by several threads (it is not
required that the functions also execute simultaneously). The
traditional approach to achieve thread-safety is to wrap a function
body with an internal mutual exclusion lock (aka `mutex'). As you
should recognize, reentrant is a stronger attribute than thread-safe,
because it is harder to achieve and especially results in no run-time
contention between threads. So, a reentrant function is always
thread-safe, but not vice versa.

Additionally there is a related attribute for functions named
asynchronous-safe, which comes into play in conjunction with signal
handlers and is closely related to the problem of reentrant functions.
An asynchronous-safe function is one that can be called safely and
without side-effects from within a signal handler context. Usually
very few functions are of this type, because an application is very
restricted in what it can perform from within a signal handler
(especially in what system functions it is allowed to call). The main
reason is that only a few system functions are officially declared by
POSIX as guaranteed to be asynchronous-safe. Asynchronous-safe
functions usually have to be already reentrant.

=back

=head2 User-Space Threads

User-space threads can be implemented in various ways. The two
traditional approaches are:

=over 3

=item B<1.> B<Matrix-based explicit dispatching between small units of execution>

Here the global procedures of the application are split into small
execution units (each is required to not run for more than a few
milliseconds) and those units are implemented by separate functions.
Then a global matrix is defined which describes the execution (and
perhaps even dependency) order of these functions. The main server
procedure then just dispatches between these units by calling one
function after the other, controlled by this matrix.
The threads are created by more than one jump-trail through this
matrix and by switching between these jump-trails controlled by
corresponding occurred events.

This approach gives the best possible performance, because one can
fine-tune the threads of execution by adjusting the matrix, and the
scheduling is done explicitly by the application itself. It is also
very portable, because the matrix is just an ordinary data structure,
and functions are a standard feature of ANSI C.

The disadvantage of this approach is that it is complicated to write
large applications this way, because one quickly gets hundreds(!) of
execution units and the control flow inside such an application is
very hard to understand (because it is interrupted by function borders
and one always has to remember the global dispatching matrix to follow
it). Additionally, all threads operate on the same execution stack.
Although this saves memory, it is often nasty, because one cannot
switch between threads in the middle of a function. Thus the
scheduling borders are the function borders.

=item B<2.> B<Context-based implicit scheduling between threads of execution>

Here the idea is that one programs the application as with forked
processes, i.e., one spawns a thread of execution and this runs from
the beginning to the end without an interrupted control flow. But the
control flow can still be interrupted - even in the middle of a
function. Actually, this happens in a preemptive way, similar to what
the kernel does for the heavy-weight processes, i.e., every few
milliseconds the user-space scheduler switches between the threads of
execution. But the thread itself doesn't recognize this and usually
(except for synchronization issues) doesn't have to care about it.

The advantage of this approach is that it's very easy to program,
because the control flow and context of a thread directly follows a
procedure without forced interrupts through function borders.
Additionally, the programming is very similar to a traditional and
well understood fork(2) based approach.

The disadvantage is that although the general performance is increased
compared to approaches based on heavy-weight processes, it is
decreased compared to the matrix approach above, because the implicit
preemptive scheduling usually performs a lot more context switches
(every user-space context switch costs some overhead, even when it is
a lot cheaper than a kernel-level context switch) than the explicit
cooperative/non-preemptive scheduling. Finally, there is no really
portable POSIX/ANSI-C based way to implement user-space preemptive
threading. Either the platform already has threads, or one has to hope
that some semi-portable package exists for it. And even those
semi-portable packages usually have to deal with assembler code and
other nasty internals and are not easy to port to forthcoming
platforms.

=back

So, in short: the matrix-dispatching approach is portable and fast,
but nasty to program. The thread scheduling approach is easy to
program, but suffers from synchronization and portability problems
caused by its preemptive nature.

=head2 The Compromise of Pth

But why not combine the good aspects of both approaches while avoiding
their bad aspects? That's the goal of B<Pth>. B<Pth> implements
easy-to-program threads of execution, but avoids the problems of
preemptive scheduling by using non-preemptive scheduling instead.

This sounds like, and is, a useful approach. Nevertheless, one has to
keep the implications of non-preemptive thread scheduling in mind when
working with B<Pth>. The following list summarizes a few essential
points:

=over 2

=item B<o> B<Pth provides maximum portability, but NOT the fanciest features>.

This is because it uses a nifty and portable POSIX/ANSI-C approach for
thread creation (and this way doesn't require any platform dependent
assembler hacks) and schedules the threads in a non-preemptive way
(which doesn't require unportable facilities like C<SIGVTALRM>).
On the other hand, this way not all fancy threading features can be
implemented. Nevertheless the available facilities are enough to
provide a robust and full-featured threading system.

=item B<o> B<Pth increases the responsiveness and concurrency of an event-driven application, but NOT the concurrency of number-crunching applications>.

The reason is the non-preemptive scheduling. Number-crunching
applications usually require preemptive scheduling to achieve
concurrency because of their long CPU bursts. For them, non-preemptive
scheduling (even together with explicit yielding) provides only the
old concept of `coroutines'. On the other hand, event-driven
applications benefit greatly from non-preemptive scheduling. They have
only short CPU bursts and lots of events to wait on, and this way run
faster under non-preemptive scheduling because no unnecessary context
switching occurs, as is the case for preemptive scheduling. That's why
B<Pth> is mainly intended for server type applications, although there
is no technical restriction.

=item B<o> B<Pth requires thread-safe functions, but NOT reentrant functions>.

This nice fact exists again because of the nature of non-preemptive
scheduling, where a function isn't interrupted and this way cannot be
reentered before it returns. This is a great portability benefit,
because thread-safety can be achieved more easily than reentrancy.
Especially this means that under B<Pth> more existing third-party
libraries can be used without side-effects than is the case for other
threading systems.

=item B<o> B<Pth doesn't require any kernel support, but can NOT benefit from multiprocessor machines>.

This means that B<Pth> runs on almost all Unix kernels, because the
kernel does not need to be aware of the B<Pth> threads (they are
implemented entirely in user-space). On the other hand, it cannot
benefit from the existence of multiprocessors, because for this,
kernel support would be needed. In practice, this is no problem,
because multiprocessor systems are rare, and portability is almost
more important than highest concurrency.

=back

=head2 The life cycle of a thread

To understand the B<Pth> Application Programming Interface (API), it
helps to first understand the life cycle of a thread in the B<Pth>
threading system.
It can be illustrated with the following directed graph:

              NEW
               |
               V
       +---> READY ---+
       |       ^      |
       |       |      V
     WAITING <-+-- RUNNING
               :      |
               V      V
           SUSPENDED DEAD

When a new thread is created, it is moved into the B queue of the scheduler. On the next dispatching for this thread, the scheduler picks it up from there and moves it to the B queue. This is a queue containing all threads which want to perform a CPU burst. There they are queued in priority order. On each dispatching step, the scheduler always removes only the thread with the highest priority. It then increases the priority of all remaining threads by 1, to prevent them from `starving'. The thread which was removed from the B queue is the new B thread (there is always just one B thread, of course). The B thread is assigned execution control.

After this thread yields execution (either explicitly by yielding execution or implicitly by calling a function which would block), there are three possibilities: either it has terminated, in which case it is moved to the B queue; or it has events on which it wants to wait, in which case it is moved into the B queue; else it is assumed it wants to perform more CPU bursts and it immediately enters the B queue again.

Before the next thread is taken out of the B queue, the B queue is checked for pending events. If one or more events occurred, the threads that are waiting on them are immediately moved to the B queue.

The purpose of the B queue has to do with the fact that in B a thread never directly switches to another thread. A thread always yields execution to the scheduler and the scheduler dispatches to the next thread. So a freshly spawned thread has to be kept somewhere until the scheduler gets a chance to pick it up for scheduling. That is what the B queue is for.

The purpose of the B queue is to support thread joining. When a thread is marked to be unjoinable, it is directly kicked out of the system after it terminated. But when it is joinable, it enters the B queue.
There it remains until another thread joins it.

Finally, there is a special, separate queue named B, to which threads can be manually moved from the B, B or B queues by the application. The purpose of this special queue is to temporarily absorb suspended threads until they are again resumed by the application. Suspended threads do not cost scheduling or event handling resources, because they are temporarily completely out of the scheduler's scope. If a thread is resumed, it is moved back to the queue from which it originally came and this way again enters the scheduler's scope.

=head1 APPLICATION PROGRAMMING INTERFACE (API)

In the following, the B I (API) is discussed in detail. With the knowledge given above, it should now be easy to understand how to program threads with this API. In good Unix tradition, B functions use special return values (C in pointer context, C in boolean context and C<-1> in integer context) to indicate an error condition and set (or pass through) the C system variable to pass more details about the error to the caller.

=head2 Global Library Management

The following functions act on the library as a whole. They are used to initialize and shut down the scheduler and fetch information from it.

=over 4

=item int B(void);

This initializes the B library. It has to be the first B API function call in an application, and is mandatory. It is usually done at the beginning of the main() function of the application. This implicitly spawns the internal scheduler thread and transforms the single execution unit of the current process into a thread (the `main' thread). It returns C on success and C on error.

=item int B(void);

This kills the B library. It should be the last B API function call in an application, but is not really required. It is usually done at the end of the main function of the application. At least, it has to be called from within the main thread.
It implicitly kills all threads and transforms the calling thread back into the single execution unit of the underlying process. The usual way to terminate a B application is either a simple `C' in the main thread (which waits for all other threads to terminate, kills the threading system and then terminates the process) or a `C' (which immediately kills the threading system and terminates the process). pth_kill() returns immediately with a return code of C if it is not called from within the main thread. Else it kills the threading system and returns C.

=item long B(unsigned long I, ...);

This is a generalized query/control function for the B library. The argument I is a bitmask formed out of one or more CI queries. Currently the following queries are supported:

=over 4

=item C

This returns the total number of threads currently in existence. This query actually is formed out of the combination of queries for threads in a particular state, i.e., the C query is equal to the OR-combination of all the following specialized queries: C for the number of threads in the new queue (threads created via pth_spawn(3) but not yet scheduled once), C for the number of threads in the ready queue (threads that want to do CPU bursts), C for the number of running threads (always just one thread!), C for the number of threads in the waiting queue (threads waiting for events), C for the number of threads in the suspended queue (threads waiting to be resumed) and C for the number of threads in the dead queue (terminated threads waiting for a join).

=item C

This requires a second argument of type `C' (pointer to a floating point variable). It stores a floating point value describing the exponentially averaged load of the scheduler in this variable. The load is a function of the number of threads in the ready queue of the scheduler's dispatching unit. So a load around 1.0 means there is only one ready thread (the standard situation when the application has no high load).
A higher load value means there are more ready threads that want to do CPU bursts. The average load value is updated only once per second. The return value for this query is always 0.

=item C

This requires a second argument of type `C' which identifies a thread. It returns the priority (ranging from C to C) of the given thread.

=item C

This requires a second argument of type `C' which identifies a thread. It returns the name of the given thread, i.e., the return value of pth_ctrl(3) should be cast to a `C'.

=item C

This requires a second argument of type `C' to which a summary of the internal B library state is written. The main information which is currently written out is the current state of the thread pool.

=item C

This requires a second argument of type `C' which specifies whether the B scheduler favours new threads on startup, i.e., whether they are moved from the new queue to the top (argument is C) or middle (argument is C) of the ready queue. The default is to favour new threads, to make sure they do not starve already at startup, although this slightly violates the strict priority based scheduling.

=back

The function returns C<-1> on error.

=item long B(void);

This function returns a hex-value `0xIIII' which describes the current B library version. I is the version, I the revisions, I the level and I the type of the level (alphalevel=0, betalevel=1, patchlevel=2, etc). For instance B version 1.0b1 is encoded as 0x100101. The reason for this unusual mapping is that this way the version number is steadily I. The same value is also available at compile time as C.

=back

=head2 Thread Attribute Handling

Attribute objects are used in B for two things: First, stand-alone/unbound attribute objects are used to store attributes for threads that are still to be spawned. Bounded attribute objects are used to modify attributes of already existing threads. The following attribute fields exist in attribute objects:

=over 4

=item C (read-write) [C]

The thread priority, between C and C.
The default is C.

=item C (read-write) [C]

The name of the thread (only up to 40 characters are stored), mainly for debugging purposes.

=item C (read-write) [C]

In bounded attribute objects, this field is incremented every time the context is switched to the associated thread.

=item C (read-write) [C]

The thread detachment type; C indicates a joinable thread, C indicates a detached thread. When a thread is detached, after termination it is immediately kicked out of the system instead of being inserted into the dead queue.

=item C (read-write) [C]

The thread cancellation state, i.e., a combination of C or C and C or C.

=item C (read-write) [C]

The thread stack size in bytes. Use values lower than 64 KB with great care!

=item C (read-write) [C]

A pointer to the lower address of a chunk of malloc(3)'ed memory for the stack.

=item C (read-only) [C]

The time when the thread was spawned. This can be queried only when the attribute object is bound to a thread.

=item C (read-only) [C]

The time when the thread was last dispatched. This can be queried only when the attribute object is bound to a thread.

=item C (read-only) [C]

The total time the thread was running. This can be queried only when the attribute object is bound to a thread.

=item C (read-only) [C]

The thread start function. This can be queried only when the attribute object is bound to a thread.

=item C (read-only) [C]

The thread start argument. This can be queried only when the attribute object is bound to a thread.

=item C (read-only) [C]

The scheduling state of the thread, i.e., either C, C, C, or C. This can be queried only when the attribute object is bound to a thread.

=item C (read-only) [C]

The event ring the thread is waiting for. This can be queried only when the attribute object is bound to a thread.

=item C (read-only) [C]

Whether the attribute object is bound (C) to a thread or not (C).
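As a minimal sketch of how these attribute objects are used in practice (assuming GNU Pth is installed and the program is linked with C<-lpth>; the worker routine and its argument are purely illustrative):

```c
#include <pth.h>
#include <stdio.h>

/* illustrative thread entry routine */
static void *worker(void *arg)
{
    printf("worker says: %s\n", (char *)arg);
    return NULL;
}

int main(void)
{
    pth_attr_t attr;
    pth_t tid;

    pth_init();

    /* pre-configure attributes for a thread still to be spawned */
    attr = pth_attr_new();
    pth_attr_set(attr, PTH_ATTR_NAME, "worker");
    pth_attr_set(attr, PTH_ATTR_PRIO, PTH_PRIO_STD);
    pth_attr_set(attr, PTH_ATTR_JOINABLE, TRUE);
    pth_attr_set(attr, PTH_ATTR_STACK_SIZE, 64*1024);

    tid = pth_spawn(attr, worker, "hello");
    pth_attr_destroy(attr);   /* attr was unbound; the thread keeps its copy */

    pth_join(tid, NULL);
    pth_kill();
    return 0;
}
```

Note that destroying an unbound attribute object after pth_spawn(3) is safe, because the attributes were already copied into the new thread.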
=back

The following API functions can be used to handle the attribute objects:

=over 4

=item pth_attr_t B(pth_t I);

This returns a new attribute object I to thread I. Any queries on this object directly fetch attributes from I, and attribute modifications directly change I. Use such attribute objects to modify existing threads.

=item pth_attr_t B(void);

This returns a new I attribute object. An implicit pth_attr_init() is done on it. Any queries on this object just fetch stored attributes from it, and attribute modifications just change the stored attributes. Use such attribute objects to pre-configure attributes for threads that are still to be spawned.

=item int B(pth_attr_t I);

This initializes an attribute object I to the default values: C := C, C := `C', C := C<0>, C := C, C := C, C := 64*1024 and C := C. All other C attributes are read-only attributes and don't receive default values in I, because they exist only for bounded attribute objects.

=item int B(pth_attr_t I, int I, ...);

This sets the attribute field I in I to a value specified as an additional argument on the variable argument list. The following attribute I and argument pairs can be used:

    PTH_ATTR_PRIO           int
    PTH_ATTR_NAME           char *
    PTH_ATTR_DISPATCHES     int
    PTH_ATTR_JOINABLE       int
    PTH_ATTR_CANCEL_STATE   unsigned int
    PTH_ATTR_STACK_SIZE     unsigned int
    PTH_ATTR_STACK_ADDR     char *

=item int B(pth_attr_t I, int I, ...);

This retrieves the attribute field I in I and stores its value in the variable specified through a pointer in an additional argument on the variable argument list.
The following I and argument pairs can be used:

    PTH_ATTR_PRIO           int *
    PTH_ATTR_NAME           char **
    PTH_ATTR_DISPATCHES     int *
    PTH_ATTR_JOINABLE       int *
    PTH_ATTR_CANCEL_STATE   unsigned int *
    PTH_ATTR_STACK_SIZE     unsigned int *
    PTH_ATTR_STACK_ADDR     char **
    PTH_ATTR_TIME_SPAWN     pth_time_t *
    PTH_ATTR_TIME_LAST      pth_time_t *
    PTH_ATTR_TIME_RAN       pth_time_t *
    PTH_ATTR_START_FUNC     void *(**)(void *)
    PTH_ATTR_START_ARG      void **
    PTH_ATTR_STATE          pth_state_t *
    PTH_ATTR_EVENTS         pth_event_t *
    PTH_ATTR_BOUND          int *

=item int B(pth_attr_t I);

This destroys an attribute object I. After this, I is no longer a valid attribute object.

=back

=head2 Thread Control

The following functions control the threading itself and make up the main API of the B library.

=over 4

=item pth_t B(pth_attr_t I, void *(*I)(void *), void *I);

This spawns a new thread with the attributes given in I (or C for default attributes, which means that thread priority, joinability and cancel state are inherited from the current thread), with the starting point at routine I; the dispatch count is not inherited from the current thread if I is not specified - rather, it is initialized to zero. This entry routine is called as `pth_exit(I(I))' inside the new thread unit, i.e., I's return value is fed to an implicit pth_exit(3). So the thread can also exit by just returning. Nevertheless the thread can also exit explicitly at any time by calling pth_exit(3). But keep in mind that calling the POSIX function exit(3) still terminates the complete process and not just the current thread.

There is no B-internal limit on the number of threads one can spawn, except the limit implied by the available virtual memory. B internally keeps track of threads in dynamic data structures. The function returns C on error.

=item int B(pth_once_t *I, void (*I)(void *), void *I);

This is a convenience function which uses a control variable of type C to make sure a constructor function I is called only once as `I(I)' in the system.
In other words: only the first call to pth_once(3) by any thread in the system succeeds. The variable referenced via I should be declared as `C I = C;' before calling this function.

=item pth_t B(void);

This just returns the unique thread handle of the currently running thread. This handle itself has to be treated as an opaque entity by the application. It is usually used as an argument to other functions that require an argument of type C.

=item int B(pth_t I);

This suspends a thread I until it is manually resumed again via pth_resume(3). For this, the thread is moved to the B queue and this way is completely out of the scheduler's event handling and thread dispatching scope. Suspending the current thread is not allowed. The function returns C on success and C on errors.

=item int B(pth_t I);

This function resumes a previously suspended thread I, i.e., I has to be on the B queue. The thread is moved to the B, B or B queue (depending on what its state was when the pth_suspend(3) call was made) and this way again enters the event handling and thread dispatching scope of the scheduler. The function returns C on success and C on errors.

=item int B(pth_t I, int I);

This function raises a signal for delivery to thread I only. When one just raises a signal via raise(3) or kill(2), it is delivered to an arbitrary thread which does not have this signal blocked. With pth_raise(3) one can send a signal to a particular thread and it is guaranteed that only this thread gets the signal delivered. But keep in mind that nevertheless the signal's I is still configured I-wide. When I is 0, plain thread checking is performed, i.e., `C' returns C when thread I still exists in the B system, but doesn't send any signal to it.

=item int B(pth_t I);

This explicitly yields back the execution control to the scheduler thread. Usually the execution is implicitly transferred back to the scheduler when a thread waits for an event.
But when a thread has to do larger CPU bursts, it can be reasonable to interrupt it explicitly by doing a few pth_yield(3) calls to give other threads a chance to execute, too. This obviously is the cooperating part of B. A thread I to yield execution, of course. But when you want to program a server application with good response times, the threads should be cooperative, i.e., they should split their CPU bursts into smaller units with this call.

Usually one specifies I as C to indicate to the scheduler that it can freely decide which thread to dispatch next. But if one wants to indicate to the scheduler that a particular thread should be favored on the next dispatching step, one can specify this thread explicitly. This allows the usage of the old concept of I, where a thread/routine switches to a particular cooperating thread. If I is not C and points to a I or I thread, it is guaranteed that this thread receives execution control on the next dispatching step. If I is in a different state (that is, not in C or C) an error is reported.

The function usually returns C for success and only C (with C set to C) if I specified an invalid thread or one that is neither new nor ready.

=item int B(pth_time_t I);

This function suspends the execution of the current thread until I has elapsed. I is of type C and this way theoretically has a resolution of one microsecond. In practice you should neither rely on this nor on the thread being awakened exactly after I has elapsed. It is only guaranteed that the thread will sleep at least I. But because of the non-preemptive nature of B it can last longer (when another thread kept the CPU for a long time). Additionally the resolution is dependent on the implementation of timers by the operating system, and these usually have only a resolution of 10 milliseconds or larger. But usually this isn't important for an application unless it tries to use this facility for real time tasks.
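The cooperative splitting of a long CPU burst described above can be sketched as follows (a hedged example; the loop bounds and the 250 ms nap are arbitrary illustration values):

```c
#include <pth.h>

/* A CPU-bound routine made cooperative: yield periodically so other
 * ready threads get dispatched, then sleep between work batches. */
static void *cruncher(void *arg)
{
    long i;
    for (i = 0; i < 1000000; i++) {
        /* ... one unit of number-crunching work ... */
        if (i % 10000 == 0)
            pth_yield(NULL);   /* let the scheduler pick any ready thread */
    }
    /* sleep at least 250000 us = 250 ms (relative time, hence pth_time) */
    pth_nap(pth_time(0, 250000));
    return NULL;
}
```

Passing C<NULL> to pth_yield(3) leaves the choice of the next thread entirely to the scheduler; passing a concrete C<pth_t> instead would request coroutine-style dispatching as described above.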
=item int B(pth_event_t I);

This is the link between the scheduler and the event facility (see below for the various pth_event_xxx() functions). It is modeled like select(2), i.e., one gives this function one or more events (in the event ring specified by I) on which the current thread wants to wait. The scheduler awakens the thread when one or more of them have occurred or failed, after tagging them as such. The I argument is a I to an event ring which isn't changed except for the tagging. pth_wait(3) returns the number of occurred or failed events and the application can use pth_event_status(3) to test which events occurred or failed.

=item int B(pth_t I);

This cancels a thread I. How the cancellation is done depends on the cancellation state of I, which the thread can configure itself. When its state is C, a cancellation request is just made pending. When it is C, what is performed depends on the cancellation type. When it's C, again the cancellation request is just made pending. But when it's C, the thread is immediately canceled before pth_cancel(3) returns. The effect of a thread cancellation is equal to implicitly forcing the thread to call `C' at one of its cancellation points. In B, threads enter a cancellation point either explicitly via pth_cancel_point(3) or implicitly by waiting for an event.

=item int B(pth_t I);

This is the cruel way to cancel a thread I. When it's already dead and waits to be joined, this just joins it (via `CIC<, NULL)>') and this way kicks it out of the system. Else it forces the thread not to be joinable, allows asynchronous cancellation, and then cancels it via `CIC<)>'.

=item int B(pth_t I, void **I);

This joins the current thread with the thread specified via I. It first suspends the current thread until the I thread has terminated. Then it is awakened and stores the value of I's pth_exit(3) call into *I (if I and not C) and returns to the caller. A thread can be joined only when it has the attribute C set to C (the default).
A thread can only be joined once, i.e., after the pth_join(3) call the thread I is completely removed from the system.

=item void B(void *I);

This terminates the current thread. Whether it is immediately removed from the system or inserted into the dead queue of the scheduler depends on its join type, which was specified at spawning time. If it has the attribute C set to C, it is immediately removed and I is ignored. Else the thread is inserted into the dead queue and I is remembered for a subsequent pth_join(3) call by another thread.

=back

=head2 Utilities

Utility functions.

=over 4

=item int B(int I, int I);

This switches the non-blocking mode flag on file descriptor I. The argument I can be C for switching I into blocking I/O mode, C for switching I into non-blocking I/O mode, or C for just polling the current mode. The current mode is returned (either C or C), or C on error. Keep in mind that since B 1.1 there is no longer a requirement to manually switch a file descriptor into non-blocking mode in order to use it. This is automatically done temporarily inside B. Instead, when you now switch a file descriptor explicitly into non-blocking mode, pth_read(3) or pth_write(3) will never block the current thread.

=item pth_time_t B(long I, long I);

This is a constructor for a C structure which is convenient for avoiding temporary structure values. It returns a I structure which holds the absolute time value specified by I and I.

=item pth_time_t B(long I, long I);

This is a constructor for a C structure which is convenient for avoiding temporary structure values. It returns a I structure which holds the absolute time value calculated by adding I and I to the current time.

=item Sfdisc_t *B(void);

This function is always available, but only reasonably usable when B was built with B support (C<--with-sfio> option) and C is then defined by C. It is useful for applications which want to use the comprehensive B I/O library with the B threading library.
Then this function can be used to get an B discipline structure (C) which can be pushed onto B streams (C) in order to let this stream use pth_read(3)/pth_write(3) instead of read(2)/write(2). The benefit is that this way I/O on the B stream blocks only the current thread instead of the whole process. The application has to free(3) the C structure when it is no longer needed. The Sfio package can be found at http://www.research.att.com/sw/tools/sfio/.

=back

=head2 Cancellation Management

B supports POSIX style thread cancellation via pth_cancel(3) and the following two related functions:

=over 4

=item void B(int I, int *I);

This manages the cancellation state of the current thread. When I is not C, the function stores the old cancellation state under the variable pointed to by I. When I is not C<0>, it sets the new cancellation state. I is created before I is set. A state is a combination of C or C and C or C. C (or C) is the default state, where cancellation is possible but only at cancellation points. Use C to completely disable cancellation for a thread and C to allow asynchronous cancellations, i.e., cancellations which can happen at any time.

=item void B(void);

This explicitly enters a cancellation point. When the current cancellation state is C or no cancellation request is pending, this has no side-effect and returns immediately. Else it calls `C'.

=back

=head2 Event Handling

B has a very flexible event facility which is linked into the scheduler through the pth_wait(3) function. The following functions provide the handling of event rings.

=over 4

=item pth_event_t B(unsigned long I, ...);

This creates a new event ring consisting of a single initial event. The type of the generated event is specified by I. The following types are available:

=over 4

=item C

This is a file descriptor event. One or more of C, C or C have to be OR-ed into I to specify on which state of the file descriptor you want to wait.
The file descriptor itself has to be given as an additional argument. Example: `C'.

=item C

This is a multiple file descriptor event modeled directly after the select(2) call (actually it is also used to implement pth_select(3) internally). It is a convenient way to wait for a large set of file descriptors at once and, for each file descriptor, for a different type of state. Additionally, as a nice side-effect, one receives the number of file descriptors which caused the event to occur (using BSD semantics, i.e., when a file descriptor occurs in two sets it is counted twice). The arguments correspond directly to the select(2) function arguments, except that there is no timeout argument (because timeouts can already be handled via C events). Example: `C' where C has to be of type `C', C has to be of type `C' and C, C and C have to be of type `C' (see select(2)). The number of occurred file descriptors is stored in C.

=item C

This is a signal set event. The two additional arguments have to be a pointer to a signal set (type `C') and a pointer to a signal number variable (type `C'). This event waits until one of the signals in the signal set occurs. As a result, the occurred signal number is stored in the second additional argument. Keep in mind that the B scheduler doesn't block signals automatically. So when you want to wait for a signal with this event, you have to block it via sigprocmask(2) or it will be delivered without your notice. Example: `C'.

=item C

This is a time point event. The additional argument has to be of type C (usually generated on-the-fly via pth_time(3)). This event waits until the specified time point has elapsed. Keep in mind that the value is an absolute time point and not an offset. When you want to wait for a specified amount of time, you have to add the current time to the offset (usually achieved on-the-fly via pth_timeout(3)). Example: `C'.

=item C

This is a message port event. The additional argument has to be of type C.
This event waits until one or more messages have been received on the specified message port. Example: `C'.

=item C

This is a thread event. The additional argument has to be of type C. One of C, C, C or C has to be OR-ed into I to specify on which state of the thread you want to wait. Example: `C'.

=item C

This is a custom callback function event. Three additional arguments have to be given with the following types: `C', `C' and `C'. The first is a function pointer to a check function and the second argument is a user-supplied context value which is passed to this function. The scheduler calls this function on a regular basis (on its own scheduler stack, so be very careful!) and the thread is kept sleeping as long as the function returns C. Once it returns C, the thread will be awakened. The check interval is defined by the third argument, i.e., the check function is not polled again until this amount of time has elapsed. Example: `C'.

=back

=item unsigned long B(pth_event_t I);

This returns the type of event I. It is a combination of the describing C and C values. This is especially useful to know which arguments have to be supplied to the pth_event_extract(3) function.

=item int B(pth_event_t I, ...);

When pth_event(3) is treated like sprintf(3), then this function is sscanf(3), i.e., it is the inverse operation of pth_event(3). This means that it can be used to extract the ingredients of an event. The ingredients are stored into variables which are given as pointers on the variable argument list. Which pointers have to be present depends on the event type and has to be determined by the caller beforehand via pth_event_typeof(3). To make it clear: when you constructed I via `C', you have to extract it via `C', etc. For multiple arguments of an event, the order of the pointer arguments is the same as for pth_event(3). But always keep in mind that you have to supply I to I, and these variables have to be of the same types as the arguments of pth_event(3) required.
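Combining the event constructors above with pth_wait(3) typically looks like the following sketch (hedged: it assumes GNU Pth 2.x with pth_event_status(3); the helper name, the file descriptor and the 30-second timeout are illustrative):

```c
#include <pth.h>

/* Wait until fd becomes readable, but give up after 30 seconds.
 * Returns 0 when readable, -1 on timeout. fd is assumed open. */
static int wait_readable(int fd)
{
    pth_event_t ev_fd, ev_tmo;
    int rc = 0;

    /* two single-event rings ... */
    ev_fd  = pth_event(PTH_EVENT_FD | PTH_UNTIL_FD_READABLE, fd);
    ev_tmo = pth_event(PTH_EVENT_TIME, pth_timeout(30, 0));

    /* ... concatenated into one real event ring */
    pth_event_concat(ev_fd, ev_tmo, NULL);

    /* suspend the current thread until one of the events occurs */
    pth_wait(ev_fd);

    /* check which event the scheduler tagged as occurred */
    if (pth_event_status(ev_tmo) == PTH_STATUS_OCCURRED)
        rc = -1;                       /* timeout fired first */

    pth_event_free(ev_fd, PTH_FREE_ALL);
    return rc;
}
```

Note that pth_wait(3) only tags the events; the ring itself survives and has to be freed explicitly via pth_event_free(3).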
=item pth_event_t B(pth_event_t I, ...);

This concatenates one or more additional event rings to the event ring I and returns I. The end of the argument list has to be marked with a C argument. Use this function to create real event rings out of the single-event rings created by pth_event(3).

=item pth_event_t B(pth_event_t I);

This isolates the event I from possibly appended events in the event ring. When only one event exists in I, this returns C. When remaining events exist, they form a new event ring which is returned.

=item pth_event_t B(pth_event_t I, int I);

This walks to the next (when I is C) or previous (when I is C) event in the event ring I and returns this newly reached event. Additionally, C can be OR-ed into I to walk to the next/previous occurred event in the ring I.

=item pth_status_t B(pth_event_t I);

This returns the status of event I. This is a fast operation, because only a tag on I is checked, which was either set or not yet set by the scheduler. In other words: this doesn't check the event itself, it just checks the last knowledge of the scheduler. The possible returned status codes are: C (event is still pending), C (event successfully occurred), C (event failed).

=item int B(pth_event_t I, int I);

This deallocates the event I (when I is C) or all events appended to the event ring under I (when I is C).

=back

=head2 Key-Based Storage

The following functions provide thread-local storage through unique keys similar to the POSIX B API. Use this for thread specific global data.

=over 4

=item int B(pth_key_t *I, void (*I)(void *));

This creates a new unique key and stores it in I. Additionally, I can specify a destructor function which is called on the current thread's termination with I.

=item int B(pth_key_t I);

This explicitly destroys a key I.

=item int B(pth_key_t I, const void *I);

This stores I under I.

=item void *B(pth_key_t I);

This retrieves the value under I.
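The key-based storage API combines naturally with pth_once(3) for one-time key creation. A hedged sketch (the buffer key, its size and the helper names are illustrative, not part of the B API):

```c
#include <pth.h>
#include <stdlib.h>

/* one per-thread scratch buffer, shared key created exactly once */
static pth_key_t  buf_key;
static pth_once_t buf_once = PTH_ONCE_INIT;

static void buf_key_create(void *arg)
{
    /* free(3) is the destructor: each thread's buffer is released
     * automatically when that thread terminates */
    pth_key_create(&buf_key, free);
}

/* return this thread's private 256-byte buffer, allocating it lazily */
static char *thread_buffer(void)
{
    char *buf;

    pth_once(&buf_once, buf_key_create, NULL);
    buf = (char *)pth_key_getdata(buf_key);
    if (buf == NULL) {
        buf = (char *)malloc(256);
        pth_key_setdata(buf_key, buf);
    }
    return buf;
}
```

Every thread calling thread_buffer() gets its own allocation, while the key itself is created only by the first caller.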
=back

=head2 Message Port Communication

The following functions provide message ports which can be used for efficient and flexible inter-thread communication.

=over 4

=item pth_msgport_t B(const char *I);

This returns a pointer to a new message port. If the name I is not C, it can be used by other threads via pth_msgport_find(3) to find the message port in case they do not directly know the pointer to the message port.

=item void B(pth_msgport_t I);

This destroys a message port I. Beforehand, all pending messages on it are replied to their origin message port.

=item pth_msgport_t B(const char *I);

This finds a message port in the system by I and returns the pointer to it.

=item int B(pth_msgport_t I);

This returns the number of pending messages on message port I.

=item int B(pth_msgport_t I, pth_message_t *I);

This puts (or sends) a message I to message port I.

=item pth_message_t *B(pth_msgport_t I);

This gets (or receives) the top message from message port I. Incoming messages are always kept in a queue, so there can be more pending messages, of course.

=item int B(pth_message_t *I);

This replies a message I to the message port of the sender.

=back

=head2 Thread Cleanups

Per-thread cleanup functions.

=over 4

=item int B(void (*I)(void *), void *I);

This pushes the routine I onto the stack of cleanup routines for the current thread. These routines are called in LIFO order when the thread terminates.

=item int B(int I);

This pops the top-most routine from the stack of cleanup routines for the current thread. When I is C, the routine is additionally called.

=back

=head2 Process Forking

The following functions provide some special support for process forking situations inside the threading environment.

=over 4

=item int B(void (*I)(void *), void (*)(void *I), void (*)(void *I), void *I);

This function declares forking handlers to be called before and after pth_fork(3), in the context of the thread that called pth_fork(3).
The I handler is called before fork(2) processing commences. The I handler is called after fork(2) processing completes in the parent process. The I handler is called after fork(2) processing completes in the child process. If no handling is desired at one or more of these three points, the corresponding handler can be given as C. Each handler is called with I as the argument.

The order of calls to pth_atfork_push(3) is significant. The I and I handlers are called in the order in which they were established by calls to pth_atfork_push(3), i.e., FIFO. The I fork handlers are called in the opposite order, i.e., LIFO.

=item int B(void);

This removes the top-most handlers on the forking handler stack which were established with the last pth_atfork_push(3) call. It returns C when no more handlers could be removed from the stack.

=item pid_t B(void);

This is a variant of fork(2) with the difference that only the current thread is forked into a separate process, i.e., in the parent process nothing changes, while in the child process all threads are gone except for the scheduler and the calling thread. When you really want to duplicate all threads in the current process you should use fork(2) directly. But this is usually not reasonable. Additionally, this function takes care of forking handlers as established by pth_atfork_push(3).

=back

=head2 Synchronization

The following functions provide synchronization support via mutual exclusion locks (B), read-write locks (B), condition variables (B) and barriers (B). Keep in mind that in a non-preemptive threading system like B this might sound unnecessary at first glance, because a thread isn't interrupted by the system. Actually, when you have a critical code section which doesn't contain any pth_xxx() functions, you don't need any mutex to protect it, of course. But when your critical code section contains any pth_xxx() function, the chance is high that these temporarily switch to the scheduler.
And this way other threads can make progress and enter your critical code section, too. This is especially true for critical code sections which implicitly or explicitly use the event mechanism.

=over 4

=item int B<pth_mutex_init>(pth_mutex_t *I<mutex>);

This dynamically initializes a mutex variable of type `C<pth_mutex_t>'. Alternatively one can also use static initialization via `C<pth_mutex_t mutex = PTH_MUTEX_INIT>'.

=item int B<pth_mutex_acquire>(pth_mutex_t *I<mutex>, int I<tryonly>, pth_event_t I<ev>);

This acquires a mutex I<mutex>. If the mutex is already locked by another thread, the current thread's execution is suspended until the mutex is unlocked again or additionally the extra events in I<ev> occurred (when I<ev> is not C<NULL>). Recursive locking is explicitly supported, i.e., a thread is allowed to acquire a mutex more than once before it is released. But it then also has to be released the same number of times until the mutex is again lockable by others. When I<tryonly> is C<TRUE> this function never suspends execution. Instead it returns C<FALSE> with C<errno> set to C<EBUSY>.

=item int B<pth_mutex_release>(pth_mutex_t *I<mutex>);

This decrements the recursion locking count on I<mutex> and when it reaches zero it releases the mutex I<mutex>.

=item int B<pth_rwlock_init>(pth_rwlock_t *I<rwlock>);

This dynamically initializes a read-write lock variable of type `C<pth_rwlock_t>'. Alternatively one can also use static initialization via `C<pth_rwlock_t rwlock = PTH_RWLOCK_INIT>'.

=item int B<pth_rwlock_acquire>(pth_rwlock_t *I<rwlock>, int I<op>, int I<tryonly>, pth_event_t I<ev>);

This acquires a read-only (when I<op> is C<PTH_RWLOCK_RD>) or a read-write (when I<op> is C<PTH_RWLOCK_RW>) lock I<rwlock>. When the lock is only held by other threads in read-only mode, the lock succeeds. But when one thread holds a read-write lock, all locking attempts suspend the current thread until this lock is released again. Additionally in I<ev> events can be given to let the locking attempt time out, etc. When I<tryonly> is C<TRUE> this function never suspends execution. Instead it returns C<FALSE> with C<errno> set to C<EBUSY>.

=item int B<pth_rwlock_release>(pth_rwlock_t *I<rwlock>);

This releases a previously acquired (read-only or read-write) lock.

=item int B<pth_cond_init>(pth_cond_t *I<cond>);

This dynamically initializes a condition variable of type `C<pth_cond_t>'. Alternatively one can also use static initialization via `C<pth_cond_t cond = PTH_COND_INIT>'.
=item int B<pth_cond_await>(pth_cond_t *I<cond>, pth_mutex_t *I<mutex>, pth_event_t I<ev>);

This awaits a condition situation. The caller has to follow the semantics of the POSIX condition variables: I<mutex> has to be acquired before this function is called. The execution of the current thread is then suspended either until the events in I<ev> occurred (when I<ev> is not C<NULL>) or I<cond> was notified by another thread via pth_cond_notify(3). While the thread is waiting, I<mutex> is released. Before it returns I<mutex> is reacquired.

=item int B<pth_cond_notify>(pth_cond_t *I<cond>, int I<broadcast>);

This notifies one or all threads which are waiting on I<cond>. When I<broadcast> is C<TRUE> all threads are notified, else only a single (unspecified) one.

=item int B<pth_barrier_init>(pth_barrier_t *I<barrier>, int I<threshold>);

This dynamically initializes a barrier variable of type `C<pth_barrier_t>'. Alternatively one can also use static initialization via `C<pth_barrier_t barrier = PTH_BARRIER_INIT(>I<threshold>C<)>'.

=item int B<pth_barrier_reach>(pth_barrier_t *I<barrier>);

This function reaches a barrier I<barrier>. If this is the last thread (as specified by I<threshold> on init of I<barrier>) all threads are awakened. Else the current thread is suspended until the last thread reaches the barrier and this way awakens all threads. The function returns (besides C<FALSE> on error) the value C<TRUE> for any thread which reached the barrier as neither the first nor the last thread; C<PTH_BARRIER_HEADLIGHT> for the thread which reached the barrier as the first thread and C<PTH_BARRIER_TAILLIGHT> for the thread which reached the barrier as the last thread.

=back

=head2 User-Space Context

The following functions provide a stand-alone sub-API for user-space context switching. It is internally based on the same underlying machine context switching mechanism the threads in B<Pth> are based on. Hence you can use these functions for implementing your own simple user-space threads. The C<pth_uctx_t> context is somewhat modeled after POSIX ucontext(3). The time required to create (via pth_uctx_make(3)) a user-space context can range from just a few microseconds up to a more dramatic time (depending on the machine context switching method which is available on the platform).
On the other hand, the raw performance in switching the user-space contexts is always very good (nearly independent of the machine context switching method used). For instance, on an Intel Pentium-III CPU with 800 MHz running under FreeBSD 4 one usually achieves about 260,000 user-space context switches (via pth_uctx_switch(3)) per second.

=over 4

=item int B<pth_uctx_create>(pth_uctx_t *I<uctx>);

This function creates a user-space context and stores it into I<uctx>. There is still no underlying user-space context configured. You still have to do this with pth_uctx_make(3). On success, this function returns C<TRUE>, else C<FALSE>.

=item int B<pth_uctx_make>(pth_uctx_t I<uctx>, char *I<sk_addr>, size_t I<sk_size>, const sigset_t *I<sigmask>, void (*I<start_func>)(void *), void *I<start_arg>, pth_uctx_t I<uctx_after>);

This function makes a new user-space context in I<uctx> which will operate on the run-time stack I<sk_addr> (which is of maximum size I<sk_size>), with the signals in I<sigmask> blocked (if I<sigmask> is not C<NULL>) and starting to execute with the call I<start_func>(I<start_arg>). If I<sk_addr> is C<NULL>, a stack is dynamically allocated. The stack size I<sk_size> has to be at least 16384 (16KB). If the start function I<start_func> returns and I<uctx_after> is not C<NULL>, an implicit user-space context switch to this context is performed. Else (if I<uctx_after> is C<NULL>) the process is terminated with exit(3). This function is somewhat modeled after POSIX makecontext(3). On success, this function returns C<TRUE>, else C<FALSE>.

=item int B<pth_uctx_switch>(pth_uctx_t I<uctx_from>, pth_uctx_t I<uctx_to>);

This function saves the current user-space context in I<uctx_from> for later restoring by another call to pth_uctx_switch(3) and restores the new user-space context from I<uctx_to>, which previously had to be set with either a previous call to pth_uctx_switch(3) or initially by pth_uctx_make(3). This function is somewhat modeled after POSIX swapcontext(3). If I<uctx_from> or I<uctx_to> are C<NULL> or if I<uctx_to> contains no valid user-space context, C<FALSE> is returned instead of C<TRUE>. These are the only errors possible.

=item int B<pth_uctx_destroy>(pth_uctx_t I<uctx>);

This function destroys the user-space context in I<uctx>.
The run-time stack associated with the user-space context is deallocated only if it was not given by the application (see I<sk_addr> of pth_uctx_make(3)). If I<uctx> is C<NULL>, C<FALSE> is returned instead of C<TRUE>. This is the only error possible.

=back

=head2 Generalized POSIX Replacement API

The following functions are generalized replacement functions for the POSIX API, i.e., they are similar to the functions under `B<Standard POSIX Replacement API>' below, but all have an additional event argument which can be used for timeouts, etc.

=over 4

=item int B<pth_sigwait_ev>(const sigset_t *I<set>, int *I<sig>, pth_event_t I<ev>);

This is equal to pth_sigwait(3) (see below), but has an additional event argument I<ev>. When pth_sigwait(3) suspends the current thread's execution it usually only uses the signal event on I<set> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item int B<pth_connect_ev>(int I<s>, const struct sockaddr *I<addr>, socklen_t I<addrlen>, pth_event_t I<ev>);

This is equal to pth_connect(3) (see below), but has an additional event argument I<ev>. When pth_connect(3) suspends the current thread's execution it usually only uses the I/O event on I<s> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item int B<pth_accept_ev>(int I<s>, struct sockaddr *I<addr>, socklen_t *I<addrlen>, pth_event_t I<ev>);

This is equal to pth_accept(3) (see below), but has an additional event argument I<ev>. When pth_accept(3) suspends the current thread's execution it usually only uses the I/O event on I<s> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item int B<pth_select_ev>(int I<nfd>, fd_set *I<rfds>, fd_set *I<wfds>, fd_set *I<efds>, struct timeval *I<timeout>, pth_event_t I<ev>);

This is equal to pth_select(3) (see below), but has an additional event argument I<ev>. When pth_select(3) suspends the current thread's execution it usually only uses the I/O events on I<rfds>, I<wfds> and I<efds> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item int B<pth_poll_ev>(struct pollfd *I<pfd>, unsigned int I<nfd>, int I<timeout>, pth_event_t I<ev>);

This is equal to pth_poll(3) (see below), but has an additional event argument I<ev>. When pth_poll(3) suspends the current thread's execution it usually only uses the I/O events on I<pfd> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item ssize_t B<pth_read_ev>(int I<fd>, void *I<buf>, size_t I<nbytes>, pth_event_t I<ev>);

This is equal to pth_read(3) (see below), but has an additional event argument I<ev>. When pth_read(3) suspends the current thread's execution it usually only uses the I/O event on I<fd> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item ssize_t B<pth_readv_ev>(int I<fd>, const struct iovec *I<iov>, int I<iovcnt>, pth_event_t I<ev>);

This is equal to pth_readv(3) (see below), but has an additional event argument I<ev>. When pth_readv(3) suspends the current thread's execution it usually only uses the I/O event on I<fd> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item ssize_t B<pth_write_ev>(int I<fd>, const void *I<buf>, size_t I<nbytes>, pth_event_t I<ev>);

This is equal to pth_write(3) (see below), but has an additional event argument I<ev>. When pth_write(3) suspends the current thread's execution it usually only uses the I/O event on I<fd> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item ssize_t B<pth_writev_ev>(int I<fd>, const struct iovec *I<iov>, int I<iovcnt>, pth_event_t I<ev>);

This is equal to pth_writev(3) (see below), but has an additional event argument I<ev>. When pth_writev(3) suspends the current thread's execution it usually only uses the I/O event on I<fd> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item ssize_t B<pth_recv_ev>(int I<fd>, void *I<buf>, size_t I<nbytes>, int I<flags>, pth_event_t I<ev>);

This is equal to pth_recv(3) (see below), but has an additional event argument I<ev>. When pth_recv(3) suspends the current thread's execution it usually only uses the I/O event on I<fd> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item ssize_t B<pth_recvfrom_ev>(int I<fd>, void *I<buf>, size_t I<nbytes>, int I<flags>, struct sockaddr *I<from>, socklen_t *I<fromlen>, pth_event_t I<ev>);

This is equal to pth_recvfrom(3) (see below), but has an additional event argument I<ev>. When pth_recvfrom(3) suspends the current thread's execution it usually only uses the I/O event on I<fd> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item ssize_t B<pth_send_ev>(int I<fd>, const void *I<buf>, size_t I<nbytes>, int I<flags>, pth_event_t I<ev>);

This is equal to pth_send(3) (see below), but has an additional event argument I<ev>. When pth_send(3) suspends the current thread's execution it usually only uses the I/O event on I<fd> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=item ssize_t B<pth_sendto_ev>(int I<fd>, const void *I<buf>, size_t I<nbytes>, int I<flags>, const struct sockaddr *I<to>, socklen_t I<tolen>, pth_event_t I<ev>);

This is equal to pth_sendto(3) (see below), but has an additional event argument I<ev>. When pth_sendto(3) suspends the current thread's execution it usually only uses the I/O event on I<fd> to awake. With this function any number of extra events can be used to awake the current thread (remember that I<ev> actually is an event I<ring>).

=back

=head2 Standard POSIX Replacement API

The following functions are standard replacement functions for the POSIX API.
The difference is mainly that they suspend only the current thread instead of the whole process in case the file descriptors would block.

=over 4

=item int B<pth_nanosleep>(const struct timespec *I<rqtp>, struct timespec *I<rmtp>);

This is a variant of the POSIX nanosleep(3) function. It suspends the current thread's execution until the amount of time in I<rqtp> has elapsed. The thread is guaranteed not to wake up before this time, but because of the non-preemptive scheduling nature of B<Pth>, it can be awakened later, of course. If I<rmtp> is not C<NULL>, the C<timespec> structure it references is updated to contain the unslept amount (the requested time minus the time actually slept). The difference between nanosleep(3) and pth_nanosleep(3) is that pth_nanosleep(3) suspends only the execution of the current thread and not the whole process.

=item int B<pth_usleep>(unsigned int I<usec>);

This is a variant of the 4.3BSD usleep(3) function. It suspends the current thread's execution until I<usec> microseconds (= I<usec>*1/1000000 sec) have elapsed. The thread is guaranteed not to wake up before this time, but because of the non-preemptive scheduling nature of B<Pth>, it can be awakened later, of course. The difference between usleep(3) and pth_usleep(3) is that pth_usleep(3) suspends only the execution of the current thread and not the whole process.

=item unsigned int B<pth_sleep>(unsigned int I<sec>);

This is a variant of the POSIX sleep(3) function. It suspends the current thread's execution until I<sec> seconds have elapsed. The thread is guaranteed not to wake up before this time, but because of the non-preemptive scheduling nature of B<Pth>, it can be awakened later, of course. The difference between sleep(3) and pth_sleep(3) is that pth_sleep(3) suspends only the execution of the current thread and not the whole process.

=item pid_t B<pth_waitpid>(pid_t I<wpid>, int *I<status>, int I<options>);

This is a variant of the POSIX waitpid(2) function. It suspends the current thread's execution until I<status> information is available for a terminated child process I<wpid>. The difference between waitpid(2) and pth_waitpid(3) is that pth_waitpid(3) suspends only the execution of the current thread and not the whole process. For more details about the arguments and return code semantics see waitpid(2).

=item int B<pth_system>(const char *I<cmd>);

This is a variant of the POSIX system(3) function. It executes the shell command I<cmd> with the Bourne Shell (C<sh>) and suspends the current thread's execution until this command terminates. The difference between system(3) and pth_system(3) is that pth_system(3) suspends only the execution of the current thread and not the whole process. For more details about the arguments and return code semantics see system(3).

=item int B<pth_sigmask>(int I<how>, const sigset_t *I<set>, sigset_t *I<oset>);

This is the B<Pth> thread-related equivalent of POSIX sigprocmask(2) respectively pthread_sigmask(3). The arguments I<how>, I<set> and I<oset> directly relate to sigprocmask(2), because B<Pth> internally just uses sigprocmask(2) here. So alternatively you can also directly call sigprocmask(2), but for consistency reasons you should use this function pth_sigmask(3).

=item int B<pth_sigwait>(const sigset_t *I<set>, int *I<sig>);

This is a variant of the POSIX.1c sigwait(3) function. It suspends the current thread's execution until a signal in I<set> occurred and stores the signal number in I<sig>. The important point is that the signal is not delivered to a signal handler. Instead it's caught by the scheduler only in order to awake the pth_sigwait() call. The trick and noticeable point here is that this way you get an asynchronous-aware application that is written completely synchronously. When you think about the problem of I<asynchronous safe> functions you should recognize that this is a great benefit.

=item int B<pth_connect>(int I<s>, const struct sockaddr *I<name>, socklen_t I<namelen>);

This is a variant of the 4.2BSD connect(2) function. It establishes a connection on a socket I<s> to the target specified in I<name> and I<namelen>. The difference between connect(2) and pth_connect(3) is that pth_connect(3) suspends only the execution of the current thread and not the whole process.
For more details about the arguments and return code semantics see connect(2).

=item int B<pth_accept>(int I<s>, struct sockaddr *I<addr>, socklen_t *I<addrlen>);

This is a variant of the 4.2BSD accept(2) function. It accepts a connection on a socket by extracting the first connection request on the queue of pending connections, creating a new socket with the same properties as I<s> and allocating a new file descriptor for the socket (which is returned). The difference between accept(2) and pth_accept(3) is that pth_accept(3) suspends only the execution of the current thread and not the whole process. For more details about the arguments and return code semantics see accept(2).

=item int B<pth_select>(int I<nfd>, fd_set *I<rfds>, fd_set *I<wfds>, fd_set *I<efds>, struct timeval *I<timeout>);

This is a variant of the 4.2BSD select(2) function. It examines the I/O descriptor sets whose addresses are passed in I<rfds>, I<wfds>, and I<efds> to see if some of their descriptors are ready for reading, are ready for writing, or have an exceptional condition pending, respectively. For more details about the arguments and return code semantics see select(2).

=item int B<pth_pselect>(int I<nfd>, fd_set *I<rfds>, fd_set *I<wfds>, fd_set *I<efds>, const struct timespec *I<timeout>, const sigset_t *I<sigmask>);

This is a variant of the POSIX pselect(2) function, which in turn is a stronger variant of 4.2BSD select(2). The difference is that the higher-resolution C<struct timespec> is passed instead of the lower-resolution C<struct timeval> and that a signal mask is specified which is temporarily set while waiting for input. For more details about the arguments and return code semantics see pselect(2) and select(2).

=item int B<pth_poll>(struct pollfd *I<pfd>, unsigned int I<nfd>, int I<timeout>);

This is a variant of the SysV poll(2) function. It examines the I/O descriptors which are passed in the array I<pfd> to see if some of them are ready for reading, are ready for writing, or have an exceptional condition pending, respectively. For more details about the arguments and return code semantics see poll(2).
=item ssize_t B<pth_read>(int I<fd>, void *I<buf>, size_t I<nbytes>);

This is a variant of the POSIX read(2) function. It reads up to I<nbytes> bytes into I<buf> from file descriptor I<fd>. The difference between read(2) and pth_read(3) is that pth_read(3) suspends execution of the current thread until the file descriptor is ready for reading. For more details about the arguments and return code semantics see read(2).

=item ssize_t B<pth_readv>(int I<fd>, const struct iovec *I<iov>, int I<iovcnt>);

This is a variant of the POSIX readv(2) function. It reads data from file descriptor I<fd> into the first I<iovcnt> rows of the I<iov> vector. The difference between readv(2) and pth_readv(3) is that pth_readv(3) suspends execution of the current thread until the file descriptor is ready for reading. For more details about the arguments and return code semantics see readv(2).

=item ssize_t B<pth_write>(int I<fd>, const void *I<buf>, size_t I<nbytes>);

This is a variant of the POSIX write(2) function. It writes I<nbytes> bytes from I<buf> to file descriptor I<fd>. The difference between write(2) and pth_write(3) is that pth_write(3) suspends execution of the current thread until the file descriptor is ready for writing. For more details about the arguments and return code semantics see write(2).

=item ssize_t B<pth_writev>(int I<fd>, const struct iovec *I<iov>, int I<iovcnt>);

This is a variant of the POSIX writev(2) function. It writes data to file descriptor I<fd> from the first I<iovcnt> rows of the I<iov> vector. The difference between writev(2) and pth_writev(3) is that pth_writev(3) suspends execution of the current thread until the file descriptor is ready for writing. For more details about the arguments and return code semantics see writev(2).

=item ssize_t B<pth_pread>(int I<fd>, void *I<buf>, size_t I<nbytes>, off_t I<offset>);

This is a variant of the POSIX pread(3) function. It performs the same action as a regular read(2), except that it reads from a given position in the file without changing the file pointer. The first three arguments are the same as for pth_read(3) with the addition of a fourth argument I<offset> for the desired position inside the file.
=item ssize_t B<pth_pwrite>(int I<fd>, const void *I<buf>, size_t I<nbytes>, off_t I<offset>);

This is a variant of the POSIX pwrite(3) function. It performs the same action as a regular write(2), except that it writes to a given position in the file without changing the file pointer. The first three arguments are the same as for pth_write(3) with the addition of a fourth argument I<offset> for the desired position inside the file.

=item ssize_t B<pth_recv>(int I<fd>, void *I<buf>, size_t I<nbytes>, int I<flags>);

This is a variant of the SUSv2 recv(2) function and equal to ``pth_recvfrom(fd, buf, nbytes, flags, NULL, 0)''.

=item ssize_t B<pth_recvfrom>(int I<fd>, void *I<buf>, size_t I<nbytes>, int I<flags>, struct sockaddr *I<from>, socklen_t *I<fromlen>);

This is a variant of the SUSv2 recvfrom(2) function. It reads up to I<nbytes> bytes into I<buf> from file descriptor I<fd> while using I<flags> and I<from>/I<fromlen>. The difference between recvfrom(2) and pth_recvfrom(3) is that pth_recvfrom(3) suspends execution of the current thread until the file descriptor is ready for reading. For more details about the arguments and return code semantics see recvfrom(2).

=item ssize_t B<pth_send>(int I<fd>, const void *I<buf>, size_t I<nbytes>, int I<flags>);

This is a variant of the SUSv2 send(2) function and equal to ``pth_sendto(fd, buf, nbytes, flags, NULL, 0)''.

=item ssize_t B<pth_sendto>(int I<fd>, const void *I<buf>, size_t I<nbytes>, int I<flags>, const struct sockaddr *I<to>, socklen_t I<tolen>);

This is a variant of the SUSv2 sendto(2) function. It writes I<nbytes> bytes from I<buf> to file descriptor I<fd> while using I<flags> and I<to>/I<tolen>. The difference between sendto(2) and pth_sendto(3) is that pth_sendto(3) suspends execution of the current thread until the file descriptor is ready for writing. For more details about the arguments and return code semantics see sendto(2).

=back

=head1 EXAMPLE

The following example is a useless server which does nothing more than listening on TCP port 12345 and displaying the current time to the socket when a connection was established. For each incoming connection a thread is spawned.
Additionally, to see more multithreading, a useless ticker thread runs simultaneously which outputs the current time to C<stderr> every 5 seconds. The example contains I<no> error checking and is I<only> intended to show you the look and feel of B<Pth>. (The original header list was lost; the includes below are the standard headers this code needs.)

 #include <stdio.h>
 #include <stdlib.h>
 #include <errno.h>
 #include <string.h>
 #include <signal.h>
 #include <unistd.h>
 #include <time.h>
 #include <sys/types.h>
 #include <sys/socket.h>
 #include <netinet/in.h>
 #include <netdb.h>
 #include "pth.h"

 #define PORT 12345

 /* the socket connection handler thread */
 static void *handler(void *_arg)
 {
     int fd = (int)_arg;
     time_t now;
     char *ct;

     now = time(NULL);
     ct = ctime(&now);
     pth_write(fd, ct, strlen(ct));
     close(fd);
     return NULL;
 }

 /* the stderr time ticker thread */
 static void *ticker(void *_arg)
 {
     time_t now;
     char *ct;
     float load;

     for (;;) {
         pth_sleep(5);
         now = time(NULL);
         ct = ctime(&now);
         ct[strlen(ct)-1] = '\0';
         pth_ctrl(PTH_CTRL_GETAVLOAD, &load);
         fprintf(stderr, "ticker: time: %s, average load: %.2f\n", ct, load);
     }
 }

 /* the main thread/procedure */
 int main(int argc, char *argv[])
 {
     pth_attr_t attr;
     struct sockaddr_in sar;
     struct protoent *pe;
     struct sockaddr_in peer_addr;
     int peer_len;
     int sa, sw;

     pth_init();
     signal(SIGPIPE, SIG_IGN);

     attr = pth_attr_new();
     pth_attr_set(attr, PTH_ATTR_NAME, "ticker");
     pth_attr_set(attr, PTH_ATTR_STACK_SIZE, 64*1024);
     pth_attr_set(attr, PTH_ATTR_JOINABLE, FALSE);
     pth_spawn(attr, ticker, NULL);

     pe = getprotobyname("tcp");
     sa = socket(AF_INET, SOCK_STREAM, pe->p_proto);
     sar.sin_family = AF_INET;
     sar.sin_addr.s_addr = INADDR_ANY;
     sar.sin_port = htons(PORT);
     bind(sa, (struct sockaddr *)&sar, sizeof(struct sockaddr_in));
     listen(sa, 10);

     pth_attr_set(attr, PTH_ATTR_NAME, "handler");
     for (;;) {
         peer_len = sizeof(peer_addr);
         sw = pth_accept(sa, (struct sockaddr *)&peer_addr, &peer_len);
         pth_spawn(attr, handler, (void *)sw);
     }
 }

=head1 BUILD ENVIRONMENTS

In this section we will discuss the canonical ways to establish the build environment for a B<Pth>-based program. The possibilities supported by B<Pth> range from very simple environments to rather complex ones.
=head2 Manual Build Environment (Novice)

As a first example, assume we have the above test program in the source file C<foo.c>. Then we can create a very simple build environment by just adding the following C<Makefile>:

 $ vi Makefile
 | CC      = cc
 | CFLAGS  = `pth-config --cflags`
 | LDFLAGS = `pth-config --ldflags`
 | LIBS    = `pth-config --libs`
 |
 | all: foo
 | foo: foo.o
 |     $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
 | foo.o: foo.c
 |     $(CC) $(CFLAGS) -c foo.c
 | clean:
 |     rm -f foo foo.o

This imports the necessary compiler and linker flags on-the-fly from the B<Pth> installation via its C<pth-config> program. This approach is straight-forward and works fine for small projects.

=head2 Autoconf Build Environment (Advanced)

The previous approach is simple but inflexible. First, to speed up building, it would be nice to not expand the compiler and linker flags every time the compiler is started. Second, it would be useful to also be able to build against an uninstalled B<Pth>, that is, against a B<Pth> source tree which was just configured and built, but not installed. Third, it would also be useful to allow checking of the B<Pth> version to make sure it is at least a minimum required version. And finally, it would also be great to make sure B<Pth> works correctly by first performing some sanity compile and run-time checks. All this can be done if we use GNU B<autoconf> and the C<AC_CHECK_PTH> macro provided by B<Pth>. For this, we establish the following three files:

First we again need the C<Makefile>, but this time it contains B<autoconf> placeholders and additional cleanup targets. And we create it under the name C<Makefile.in>, because it is now an input file for B<autoconf>:

 $ vi Makefile.in
 | CC      = @CC@
 | CFLAGS  = @CFLAGS@
 | LDFLAGS = @LDFLAGS@
 | LIBS    = @LIBS@
 |
 | all: foo
 | foo: foo.o
 |     $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
 | foo.o: foo.c
 |     $(CC) $(CFLAGS) -c foo.c
 | clean:
 |     rm -f foo foo.o
 | distclean:
 |     rm -f foo foo.o
 |     rm -f config.log config.status config.cache
 |     rm -f Makefile

Because B<autoconf> generates additional files, we added a canonical C<distclean> target which cleans this up.
Secondly, we write C<configure.ac>, a (minimal) B<autoconf> script specification:

 $ vi configure.ac
 | AC_INIT(Makefile.in)
 | AC_CHECK_PTH(1.3.0)
 | AC_OUTPUT(Makefile)

Then we let B<Pth>'s C<pth-config> program generate for us an C<aclocal.m4> file containing B<Pth>'s C<AC_CHECK_PTH> macro. Then we generate the final C<configure> script out of this C<aclocal.m4> file and the C<configure.ac> file:

 $ aclocal --acdir=`pth-config --acdir`
 $ autoconf

After these steps, the working directory should look similar to this:

 $ ls -l
 -rw-r--r--  1 rse  users    176 Nov  3 11:11 Makefile.in
 -rw-r--r--  1 rse  users  15314 Nov  3 11:16 aclocal.m4
 -rwxr-xr-x  1 rse  users  52045 Nov  3 11:16 configure
 -rw-r--r--  1 rse  users     63 Nov  3 11:11 configure.ac
 -rw-r--r--  1 rse  users   4227 Nov  3 11:11 foo.c

If we now run C<configure> we get a correct C<Makefile> which immediately can be used to build C<foo> (assuming that B<Pth> is already installed somewhere, so that C<pth-config> is in C<$PATH>):

 $ ./configure
 creating cache ./config.cache
 checking for gcc... gcc
 checking whether the C compiler (gcc  ) works... yes
 checking whether the C compiler (gcc  ) is a cross-compiler... no
 checking whether we are using GNU C... yes
 checking whether gcc accepts -g... yes
 checking how to run the C preprocessor... gcc -E
 checking for GNU Pth... version 1.3.0, installed under /usr/local
 updating cache ./config.cache
 creating ./config.status
 creating Makefile
 $ make
 gcc -g -O2 -I/usr/local/include -c foo.c
 gcc -L/usr/local/lib -o foo foo.o -lpth

If B<Pth> is installed in non-standard locations or C<pth-config> is not in C<$PATH>, one just has to drop the C<configure> script a note about the location by running C<configure> with the option C<--with-pth=>I<dir> (where I<dir> is the argument which was used with the C<--prefix> option when B<Pth> was installed).

=head2 Autoconf Build Environment with Local Copy of Pth (Expert)

Finally let us assume the C<foo> program is distributed under an open source license and we want to make it a stand-alone package for easier distribution and installation. That is, we don't want to oblige the end-user to install B<Pth> just to allow our C<foo> package to compile.
For this, it is a convenient practice to include the required libraries (here B<Pth>) in the source tree of the package (here C<foo>). B<Pth> ships with all necessary support to let us easily achieve this. Say we want B<Pth> in a subdirectory named C<pth> and this directory should be seamlessly integrated into the configuration and build process of C<foo>.

First we again start with the C<Makefile>, but this time it is a more advanced version which supports subdirectory movement:

 $ vi Makefile.in
 | CC      = @CC@
 | CFLAGS  = @CFLAGS@
 | LDFLAGS = @LDFLAGS@
 | LIBS    = @LIBS@
 |
 | SUBDIRS = pth
 |
 | all: subdirs_all foo
 |
 | subdirs_all:
 |     @$(MAKE) $(MFLAGS) subdirs TARGET=all
 | subdirs_clean:
 |     @$(MAKE) $(MFLAGS) subdirs TARGET=clean
 | subdirs_distclean:
 |     @$(MAKE) $(MFLAGS) subdirs TARGET=distclean
 | subdirs:
 |     @for subdir in $(SUBDIRS); do \
 |         echo "===> $$subdir ($(TARGET))"; \
 |         (cd $$subdir; $(MAKE) $(MFLAGS) $(TARGET) || exit 1) || exit 1; \
 |         echo "<=== $$subdir"; \
 |     done
 |
 | foo: foo.o
 |     $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
 | foo.o: foo.c
 |     $(CC) $(CFLAGS) -c foo.c
 |
 | clean: subdirs_clean
 |     rm -f foo foo.o
 | distclean: subdirs_distclean
 |     rm -f foo foo.o
 |     rm -f config.log config.status config.cache
 |     rm -f Makefile

Then we create a slightly different B<autoconf> script C<configure.ac>:

 $ vi configure.ac
 | AC_INIT(Makefile.in)
 | AC_CONFIG_AUX_DIR(pth)
 | AC_CHECK_PTH(1.3.0, subdir:pth --disable-tests)
 | AC_CONFIG_SUBDIRS(pth)
 | AC_OUTPUT(Makefile)

Here we provided a default value for C<configure>'s C<--with-pth> option as the second argument to C<AC_CHECK_PTH>, which indicates that B<Pth> can be found in the subdirectory named C<pth>. Additionally we specified that the C<--disable-tests> option of B<Pth> should be passed to the C<pth> subdirectory, because we only need to build the B<Pth> library itself. And we added an C<AC_CONFIG_SUBDIRS(pth)> call which indicates to B<autoconf> that it should configure the C<pth> subdirectory, too. The C<AC_CONFIG_AUX_DIR(pth)> directive was added just to make B<autoconf> happy, because it wants to find an C<install-sh> or C<shtool> script if C<AC_CONFIG_SUBDIRS> is used.
Now we let B<Pth>'s C<pth-config> program again generate for us an C<aclocal.m4> file with the contents of B<Pth>'s C<AC_CHECK_PTH> macro. Finally we generate the C<configure> script out of this C<aclocal.m4> file and the C<configure.ac> file:

 $ aclocal --acdir=`pth-config --acdir`
 $ autoconf

Now we have to create the C<pth> subdirectory itself. For this, we extract the B<Pth> distribution into the C<foo> source tree and just rename it to C<pth>:

 $ gunzip -c pth-X.Y.Z.tar.gz | tar xf -
 $ mv pth-X.Y.Z pth

Optionally, to reduce the size of the C<pth> subdirectory, we can strip down the B<Pth> sources to a minimum with the I<striptease> feature:

 $ cd pth
 $ ./configure
 $ make striptease
 $ cd ..

After this the source tree of C<foo> should look similar to this:

 $ ls -l
 -rw-r--r--  1 rse  users     709 Nov  3 11:51 Makefile.in
 -rw-r--r--  1 rse  users   16431 Nov  3 12:20 aclocal.m4
 -rwxr-xr-x  1 rse  users   57403 Nov  3 12:21 configure
 -rw-r--r--  1 rse  users     129 Nov  3 12:21 configure.ac
 -rw-r--r--  1 rse  users    4227 Nov  3 11:11 foo.c
 drwxr-xr-x  2 rse  users    3584 Nov  3 12:36 pth
 $ ls -l pth/
 -rw-rw-r--  1 rse  users   26344 Nov  1 20:12 COPYING
 -rw-rw-r--  1 rse  users    2042 Nov  3 12:36 Makefile.in
 -rw-rw-r--  1 rse  users    3967 Nov  1 19:48 README
 -rw-rw-r--  1 rse  users     340 Nov  3 12:36 README.1st
 -rw-rw-r--  1 rse  users   28719 Oct 31 17:06 config.guess
 -rw-rw-r--  1 rse  users   24274 Aug 18 13:31 config.sub
 -rwxrwxr-x  1 rse  users  155141 Nov  3 12:36 configure
 -rw-rw-r--  1 rse  users  162021 Nov  3 12:36 pth.c
 -rw-rw-r--  1 rse  users   18687 Nov  2 15:19 pth.h.in
 -rw-rw-r--  1 rse  users    5251 Oct 31 12:46 pth_acdef.h.in
 -rw-rw-r--  1 rse  users    2120 Nov  1 11:27 pth_acmac.h.in
 -rw-rw-r--  1 rse  users    2323 Nov  1 11:27 pth_p.h.in
 -rw-rw-r--  1 rse  users     946 Nov  1 11:27 pth_vers.c
 -rw-rw-r--  1 rse  users   26848 Nov  1 11:27 pthread.c
 -rw-rw-r--  1 rse  users   18772 Nov  1 11:27 pthread.h.in
 -rwxrwxr-x  1 rse  users   26188 Nov  3 12:36 shtool

Now when we configure and build the C<foo> package it looks similar to this:

 $ ./configure
 creating cache ./config.cache
 checking for gcc... gcc
 checking whether the C compiler (gcc  ) works... yes
 checking whether the C compiler (gcc  ) is a cross-compiler... no
 checking whether we are using GNU C... yes
 checking whether gcc accepts -g... yes
 checking how to run the C preprocessor... gcc -E
 checking for GNU Pth... version 1.3.0, local under pth
 updating cache ./config.cache
 creating ./config.status
 creating Makefile
 configuring in pth
 running /bin/sh ./configure --enable-subdir --enable-batch --disable-tests --cache-file=.././config.cache --srcdir=.
 loading cache .././config.cache
 checking for gcc... (cached) gcc
 checking whether the C compiler (gcc  ) works... yes
 checking whether the C compiler (gcc  ) is a cross-compiler... no
 [...]
 $ make
 ===> pth (all)
 ./shtool scpp -o pth_p.h -t pth_p.h.in -Dcpp -Cintern -M '==#==' pth.c pth_vers.c
 gcc -c -I. -O2 -pipe pth.c
 gcc -c -I. -O2 -pipe pth_vers.c
 ar rc libpth.a pth.o pth_vers.o
 ranlib libpth.a
 <=== pth
 gcc -g -O2 -Ipth -c foo.c
 gcc -Lpth -o foo foo.o -lpth

As you can see, B<autoconf> now automatically configures the local (stripped down) copy of B<Pth> in the subdirectory C<pth> and the C<Makefile> automatically builds the subdirectory, too.

=head1 SYSTEM CALL WRAPPER FACILITY

B<Pth> per default uses an explicit API, including the system calls. For instance you have to explicitly use pth_read(3) when you need a thread-aware read(3) and cannot expect that by just calling read(3) only the current thread is blocked. Instead with the standard read(3) call the whole process will be blocked. But for some applications (mainly those consisting of lots of third-party stuff) this can be inconvenient. Here it is required that a call to read(3) `magically' means pth_read(3). The problem here is that such magic B<Pth> cannot provide per default because it is not really portable. Nevertheless B<Pth> provides a two-step approach to solve this problem:

=head2 Soft System Call Mapping

This variant is available on all platforms and can I<always> be enabled by building B<Pth> with C<--enable-syscall-soft>. This then triggers some C<#define>'s in the C<pth.h> header which map for instance read(3) to pth_read(3), etc.
Currently the following functions are mapped: fork(2), nanosleep(3),
usleep(3), sleep(3), sigwait(3), waitpid(2), system(3), select(2),
poll(2), connect(2), accept(2), read(2), write(2), recv(2), send(2),
recvfrom(2), sendto(2).

The drawback of this approach is that really all source files of the
application in which these function calls occur have to include
C<pth.h>. And it also means that existing libraries, including the
vendor's B<libc>, usually will still block the whole process if one of
their I/O functions blocks.

=head2 Hard System Call Mapping

This variant is available only on those platforms where the syscall(2)
function exists, and there it can be enabled by building B<Pth> with
C<--enable-syscall-hard>. This then builds wrapper functions (for
instance read(3)) into the B<Pth> library which internally call the real
B<Pth> replacement functions (pth_read(3)). Currently the following
functions are mapped: fork(2), nanosleep(3), usleep(3), sleep(3),
waitpid(2), system(3), select(2), poll(2), connect(2), accept(2),
read(2), write(2).

The drawback of this approach is that it depends on the syscall(2)
interface, and prototype conflicts can occur while building the wrapper
functions due to different function signatures in the vendor C header
files. But the advantage of this mapping variant is that the source
files of the application in which these function calls occur do not have
to include C<pth.h>, and that existing libraries, including the vendor's
B<libc>, magically become thread-aware (and then block only the current
thread).

=head1 IMPLEMENTATION NOTES

B<Pth> is very portable because it has only one part which perhaps has
to be ported to new platforms (the machine context initialization). But
it is written in a way which works on almost all Unix platforms which
support makecontext(2) or at least sigstack(2) or sigaltstack(2) [see
C<pth_mctx.c> for details]. All other B<Pth> code is based on POSIX and
ANSI C only.
The context switching is done via either SUSv2 makecontext(2) or POSIX
[sig]setjmp(3) and [sig]longjmp(3). Here all CPU registers, the program
counter and the stack pointer are switched. Additionally the B<Pth>
dispatcher also switches the global Unix C<errno> variable [see
C<pth_mctx.c> for details] and the signal mask (either implicitly via
sigsetjmp(3) or in an emulated way via explicit sigprocmask(2) calls).

The B<Pth> event manager is mainly select(2) and gettimeofday(2) based,
i.e., the current time is fetched via gettimeofday(2) once per context
switch for time calculations, and all I/O events are implemented via a
single central select(2) call [see C<pth_sched.c> for details].

The thread control block management is done via virtual priority queues
without any additional data structure overhead. For this, the queue
linkage attributes are part of the thread control blocks and the queues
are actually implemented as rings with a selected element as the entry
point [see C<pth_tcb.h> and C<pth_pqueue.c> for details].

Most time-critical code sections (especially the dispatcher and event
manager) are sped up by inline functions (implemented as ANSI C
pre-processor macros). Additionally any debugging code is I<completely>
removed from the source when not built with C<-DPTH_DEBUG> (see the
Autoconf C<--enable-debug> option), i.e., not even stub functions remain
[see C<pth_debug.c> for details].

=head1 RESTRICTIONS

B<Pth> (intentionally) provides no replacements for non-thread-safe
functions (like strtok(3), which uses a static internal buffer) or
synchronous system functions (like gethostbyname(3), which doesn't
provide an asynchronous mode where it doesn't block). When you want to
use those functions in your server application together with threads,
you have to either link the application against special third-party
libraries (or, for thread-safe/reentrant functions, possibly against an
existing C<libc_r> of the platform vendor). For an asynchronous DNS
resolver library use the GNU B<adns> package from Ian Jackson (see
http://www.gnu.org/software/adns/adns.html).
=head1 HISTORY

The B<Pth> library was designed and implemented between February and
July 1999 by I<Ralf S. Engelschall> after evaluating numerous (mostly
preemptive) thread libraries and after intensive discussions with
I<Peter Simons>, I<Martin Kraemer>, I<Lars Eilebrecht> and I<Ralph
Babel> related to an experimental (matrix based) non-preemptive C++
scheduler class written by I<Peter Simons>.

B<Pth> was then implemented in order to combine the I<non-preemptive>
approach of multithreading (which provides better portability and
performance) with an API similar to the popular one found in B<Pthread>
libraries (which provides easy programming).

So the essential idea of the non-preemptive approach was taken over from
I<Peter Simons>' scheduler. The priority based scheduling algorithm was
suggested by I<Martin Kraemer>. Some code inspiration also came from an
experimental threading library (B<rsthreads>) written by I<Robert S.
Thau> for an ancient internal test version of the Apache webserver. The
concept and API of message ports was borrowed from AmigaOS' B<Exec>
subsystem. The concept and idea for the flexible event mechanism came
from I<Paul Vixie>'s B<eventlib> (which can be found as a part of
B<BIND> v8).

=head1 BUG REPORTS AND SUPPORT

If you think you have found a bug in B<Pth>, you should send a report as
complete as possible to I<bug-pth@gnu.org>. If you can, please try to
fix the problem and include a patch, made with C<diff -u>, in your
report. Always, at least, include a reasonable amount of description in
your report to allow the author to deterministically reproduce the bug.

For further support you additionally can subscribe to the
I<pth-users@gnu.org> mailing list by sending an Email to
I<pth-users-request@gnu.org> with `C<subscribe pth-users>' (or
`C<subscribe pth-users> I<address>' if you want to subscribe from a
particular Email I<address>) in the body. Then you can discuss your
issues with other B<Pth> users by sending messages to
I<pth-users@gnu.org>. Currently (as of August 2000) you can reach about
110 Pth users on this mailing list. Old postings can be found at
I<http://www.mail-archive.com/pth-users@gnu.org/>.

=head1 SEE ALSO

=head2 Related Web Locations

`comp.programming.threads Newsgroup Archive',
http://www.deja.com/topics_if.xp?search=topic&group=comp.programming.threads

`comp.programming.threads Frequently Asked Questions (F.A.Q.)',
http://www.lambdacs.com/newsgroup/FAQ.html

`I<Multithreading - Definitions and Guidelines>',
Numeric Quest Inc 1998;
http://www.numeric-quest.com/lang/multi-frame.html

`I<The Single UNIX Specification, Version 2 - Threads>',
The Open Group 1997;
http://www.opengroup.org/onlinepubs/007908799/xsh/threads.html

SMI Thread Resources,
Sun Microsystems Inc;
http://www.sun.com/workshop/threads/

Bibliography on threads and multithreading,
Torsten Amundsen;
http://liinwww.ira.uka.de/bibliography/Os/threads.html

=head2 Related Books

B. Nichols, D. Buttlar, J.P. Farrel:
`I<Pthreads Programming>',
O'Reilly 1996;
ISBN 1-56592-115-1

B. Lewis, D. J. Berg:
`I<Multithreaded Programming with Pthreads>',
Sun Microsystems Press, Prentice Hall 1998;
ISBN 0-13-680729-1

B. Lewis, D. J. Berg:
`I<Threads Primer - A Guide To Multithreaded Programming>',
Prentice Hall 1996;
ISBN 0-13-443698-9

S. J. Norton, M. D. Dipasquale:
`I<Thread Time - The Multithreaded Programming Guide>',
Prentice Hall 1997;
ISBN 0-13-190067-6

D. R. Butenhof:
`I<Programming with POSIX Threads>',
Addison Wesley 1997;
ISBN 0-201-63392-2

=head2 Related Manpages

pth-config(1), pthread(3).

getcontext(2), setcontext(2), makecontext(2), swapcontext(2),
sigstack(2), sigaltstack(2), sigaction(2), sigemptyset(2), sigaddset(2),
sigprocmask(2), sigsuspend(2), sigsetjmp(3), siglongjmp(3), setjmp(3),
longjmp(3), select(2), gettimeofday(2).

=head1 AUTHOR

 Ralf S. Engelschall
 rse@engelschall.com
 www.engelschall.com

=cut
Engelschall @ 1.165 log @The pth_uctx_save() and pth_uctx_restore() API functions unfortunately were broken by design because they are C _functions_. This leads to one more deadly nesting on the run-time stack which effectively caused the pth_mctx_restore() in pth_uctx_restore() to return to the end of pth_uctx_save() but then the control flow unfortunately returns to the pth_uctx_restore() caller instead of the pth_uctx_save() caller because the call to pth_uctx_restore() had already overwritten the run-time stack position where the original return address for the pth_uctx_save() call was stored. The only workaround would be to #define pth_uctx_save() and pth_uctx_restore() as C _macros_, but this then would require that lots of the GNU Pth internals from pth_mctx.c would have to be exported in the GNU Pth API (which in turn is not acceptable). So, the only consequence is to remove the two functions again from the GNU Pth API. Prompted by hints from: Stefan Brantschen @ text @d3 1 a3 1 ## Copyright (c) 1999-2004 Ralf S. Engelschall @ 1.164 log @Added PTH_CTRL_FAVOURNEW control which allows the user to disable the favouring of new threads on scheduling to get more strict priority based scheduling behavior. Triggered by: Vinu V @ text @a144 2 pth_uctx_save, pth_uctx_restore, d1521 2 a1522 2 have to do this with pth_uctx_make(3) or pth_uctx_set(3). On success, this function returns C, else C. a1538 17 =item int B(pth_uctx_t I); This function saves the current user-space context in I for later restoring by either pth_uctx_restore(3) or pth_uctx_switch(3). This function is somewhat modeled after POSIX getcontext(3). If I is C, C is returned instead of C. This is the only error possible. =item int B(pth_uctx_t I); This function restores the current user-space context from I, which previously had to be set with either pth_uctx_make(3) or pth_uctx_save(3). This function is somewhat modeled after POSIX setcontext(3). 
If I is C or I contains no valid user-space context, C is returned instead of C. These are the only errors possible. d1542 7 a1548 7 later restoring by either pth_uctx_restore(3) or pth_uctx_switch(3) and restores the new user-space context from I, which previously had to be set with either pth_uctx_make(3) or pth_uctx_save(3). This function is somewhat modeled after POSIX swapcontext(3). If I or I are C or if I contains no valid user-space context, C is returned instead of C. These are the only errors possible. d1554 3 a1556 3 was given by the application (see I of pth_uctx_create(3)). If I is C, C is returned instead of C. This is the only error possible. @ 1.163 log @Adjusted all copyright messages for new year 2004. @ text @d682 9 @ 1.162 log @Adjusted all copyright messages for new year 2003. @ text @d3 1 a3 1 ## Copyright (c) 1999-2003 Ralf S. Engelschall @ 1.161 log @Added soft and hard syscall mapping for nanosleep(3) and usleep(3) functions. @ text @d3 1 a3 1 ## Copyright (c) 1999-2002 Ralf S. Engelschall @ 1.160 log @1. The function "int pth_event_occurred(pth_event_t)" was replaced with "pth_status_t pth_event_status(pth_event_t)" where pth_status_t can have values of PTH_STATUS_PENDING (replacing the old FALSE return value of pth_event_occurred), PTH_STATUS_OCCURRED (replacing the old TRUE return value of pth_event_occurred), and PTH_STATUS_FAILED (a new return value indicating an error in processing the event). This was scheduler/event-manager errors can be indicated which happended while processing the event. For backward compatibility reasons, a macro pth_event_occurred() was added. This will be removed soon. 2. Use the new PTH_STATUS_FAILED event status in the scheduler's event-manager for filedescriptor events if the internal select(2) call returned with an error. 
Additionally this PTH_STATUS_FAILED is recognized by the high-level API functions (pth_select, etc) and produce the necessary POSIX conforming return codes (usually -1 and errno == EBADF). Parts submitted by: Thanh Luu @ text @d2270 3 a2272 3 sleep(3), sigwait(3), waitpid(2), system(3), select(2), poll(2), connect(2), accept(2), read(2), write(2), recv(2), send(2), recvfrom(2), sendto(2). d2287 2 a2288 2 are mapped: fork(2), sleep(3), waitpid(2), system(3), select(2), poll(2), connect(2), accept(2), read(2), write(2). @ 1.159 log @Added thread attribute PTH_ATTR_DISPATCHES which (in bounded attribute objects) is incremented every time the context is switched to the associated thread. This can be used for statistical information. @ text @d96 1 a96 1 pth_event_occurred, d973 6 a978 5 on which the current thread wants to wait. The scheduler awakes the thread when one ore more of them occurred after tagging them as occurred. The I argument is a I to an event ring which isn't changed except for the tagging. pth_wait(3) returns the number of occurred events and the application can use pth_event_occurred(3) to test which events occurred. d1240 1 a1240 1 =item int B(pth_event_t I); d1242 7 a1248 4 This checks whether the event I occurred. This is a fast operation because only a tag on I is checked which was either set or still not set by the scheduler. In other words: This doesn't check the event itself, it just checks the last knowledge of the scheduler. @ 1.158 log @Added a stand-alone sub-API for manual user-space context switching. It is somewhat modeled after the POSIX ucontext(3) facility and consists of an opaque data type pth_uctx_t and the management functions pth_uctx_create(), pth_uctx_make(), pth_uctx_save(), pth_uctx_restore(), pth_uctx_switch() and pth_uctx_destroy(). These functions are based on the same underlying machine context switching facility (pth_mctx) the threads in GNU Pth are using. 
This facility can be used to implement co-routines without a full real multithreading environment or even to implement an own multithreading environment. @ text @d717 5 d807 3 a809 2 C := C, C := C, C := 64*1024 and d823 1 d838 1 d871 8 a878 6 starting point at routine I. This entry routine is called as `pth_exit(I(I))' inside the new thread unit, i.e., I's return value is fed to an implicit pth_exit(3). So the thread usually can exit by just returning. Nevertheless the thread can also exit explicitly at any time by calling pth_exit(3). But keep in mind that calling the POSIX function exit(3) still terminates the complete process and not just the current thread. @ 1.157 log @Add a Pth variant of the new POSIX pselect(2) function, including soft and hard syscall mapping support for it. @ text @d141 9 d1473 80 @ 1.156 log @Added pth_nanosleep() function. Obtained from: NetBSD, Nick Hudson @ text @d169 1 d1684 9 @ 1.155 log @Matthew Mondor wrote: > I noticed that pth_msgport_create(), although inspired from the AmigaOS > API, does not support NULL for port identifyer, which would be very > useful for thread-specific private message ports (mmftpd uses those and > unfortunately currently has to generate unique strings to create ports). > AmigaOS had this functionality... So, make him happy and allow NULL from now on, too. @ text @d159 1 d1588 12 @ 1.154 log @remove trailing whitespaces @ text @d1269 4 a1272 3 This returns a pointer to a new message port with name I. The I can be used by other threads via pth_msgport_find(3) to find the message port in case they do not know directly the pointer to the message port. @ 1.153 log @Fixed more ENglish errors. Submitted by: Felix Berger @ text @d555 2 a556 2 the scheduler gets a chance to pick it up for scheduling. That is what the B queue is for. d802 1 a802 1 be used: d808 1 a808 1 PTH_ATTR_STACK_SIZE unsigned int d1647 1 a1647 1 The difference between connect(2) and pth_connect(3) is that @ 1.152 log @Woohhooo! 
Major GNU Pth source tree overhauling: - Removed all generated files from CVS. - Use OSSP devtool stuff to re-generate files on demand. - Switched to Autoconf 2.52 and Libtool 1.4.2 environment. @ text @d492 1 a492 1 third-party libraries can be used without side-effects than its the case d555 1 a555 1 the scheduler gets a chance to pick it up for scheduling. That is for d576 2 a577 2 is discussed in detail. With the knowledge given above, it should be now easy to understand how to program threads with this API. In good d612 1 a612 1 code of C if it is called not from within the main thread. Else d770 1 a770 1 The following API functions exists to handle the attribute objects: d842 2 a843 2 The following functions control the threading itself and form the main API of the B library. d931 1 a931 1 C set to C) if I specified and invalid or still not d1005 1 a1005 1 The following functions are utility functions. d1305 1 a1305 1 The following functions provide per-thread cleanup functions. d1591 1 a1591 1 elapsed. 
The thread is guaranteed to not awakened before this time, but d1601 1 a1601 1 not awakened before this time, but because of the non-preemptive scheduling d1603 1 a1603 1 sleep(3) and pth_sleep(3) is that that pth_sleep(3) suspends only the d1611 1 a1611 1 pth_waitpid(3) is that that pth_waitpid(3) suspends only the execution of the d1620 1 a1620 1 system(3) and pth_system(3) is that that pth_system(3) suspends only d1647 1 a1647 1 The difference between connect(2) and pth_connect(3) is that that d1658 1 a1658 1 difference between accept(2) and pth_accept(3) is that that pth_accept(3) d1682 1 a1682 1 and pth_read(2) is that that pth_read(2) suspends execution of the current d1690 1 a1690 1 difference between readv(2) and pth_readv(2) is that that pth_readv(2) d1699 1 a1699 1 pth_write(2) is that that pth_write(2) suspends execution of the current d1707 1 a1707 1 difference between writev(2) and pth_writev(2) is that that pth_writev(2) d1738 1 a1738 1 pth_recvfrom(2) is that that pth_recvfrom(2) suspends execution of the d1752 1 a1752 1 that that pth_sendto(2) suspends execution of the current thread until d1881 1 a1881 1 The previous approach is simple but unflexible. First, to speed up d1884 1 a1884 1 also be able to build against an uninstalled B, that is, against d2199 1 a2199 1 manager) are speeded up by inlined functions (implemented as ANSI C @ 1.151 log @bump copyright year @ text @d1917 1 a1917 1 C, a (minimal) B script specification: d1919 1 a1919 1 $ vi configure.in d1926 1 a1926 1 C script out of this C file and the C d1938 1 a1938 1 -rw-r--r-- 1 rse users 63 Nov 3 11:11 configure.in d2019 1 a2019 1 Then we create a slightly different B script C: d2021 1 a2021 1 $ vi configure.in d2041 1 a2041 1 file and the C file. d2066 1 a2066 1 -rw-r--r-- 1 rse users 129 Nov 3 12:21 configure.in @ 1.150 log @fix typo (found by Takashi Ishihara ) @ text @d3 1 a3 1 ## Copyright (c) 1999-2001 Ralf S. 
Engelschall @ 1.149 log @*** empty log message *** @ text @d1100 1 a1100 1 C or C have to be OR-ed into @ 1.148 log @*** empty log message *** @ text @d1916 2 a1917 3 C target which cleanups this, too. Second, we write a (minimalistic) B script specification in a file C: d1972 2 a1973 2 easier distribution and installation. That is, we don't want that the end-user first has to install B just to allow our C package to @ 1.147 log @*** empty log message *** @ text @d708 4 a711 3 The thread detachment type, C indicates a joinable thread, C indicates a detached thread. When a the is detached after termination it is immediately kicked out of the system instead of inserted into the dead queue. d982 8 a989 7 This joins the current thread with the thread specified via I. It first suspends the current thread until the I thread has terminated. Then it is awakened and stores the value of I's pth_exit(3) call into *I (if I and not C) and returns to the caller. A thread can be joined only when it was I spawned with C. A thread can only be joined once, i.e., after the pth_join(3) call the thread I is removed from the system. d993 7 a999 6 This terminates the current thread. Whether it's immediately removed from the system or inserted into the dead queue of the scheduler depends on its join type which was specified at spawning time. When it was spawned with C it's immediately removed and I is ignored. Else the thread is inserted into the dead queue and I remembered for a pth_join(3) call by another thread. @ 1.146 log @*** empty log message *** @ text @d2201 1 a2201 1 C for details]. @ 1.145 log @*** empty log message *** @ text @d1442 1 a1442 1 =item int B(pth_barrier_t *I, int I @ 1.143 log @*** empty log message *** @ text @d2140 1 a2140 1 pth_read(3), etc. Currently the following functions are mapped: fork(2), d2142 2 a2143 1 connect(2), accept(2), read(2), write(2). 
@ 1.142 log @*** empty log message *** @ text @d162 1 d1612 9 d2141 2 a2142 2 sleep(3), sigwait(3), waitpid(2), select(2), poll(2), connect(2), accept(2), read(2), write(2). d2156 3 a2158 3 replacement functions (pth_read(3)). Currently the following functions are mapped: fork(2), sleep(3), waitpid(2), select(2), poll(2), connect(2), accept(2), read(2), write(2). @ 1.141 log @*** empty log message *** @ text @d1481 1 a1481 1 usually only uses the I/O event on I to awake. With this function any d1489 1 a1489 1 usually only uses the I/O event on I to awake. With this function any @ 1.140 log @*** empty log message *** @ text @d1587 1 a1587 1 elapsed. The thread is guaranteed to not awakened before this time, but d1589 1 a1589 1 later, of course. The difference between usleep(3) and pth_usleep(3) is that d1591 1 a1591 3 the whole process. The function returns the value C<0> if successful, otherwise the value C<-1> is returned and the global variable C is set to indicate the error. d1595 6 a1600 10 This is a variant of the POSIX sleep(3) function. It suspends the current threads execution until I seconds elapsed. The thread is guaranteed to not awakened before this time, but because of the non-preemptive scheduling nature of B, it can be awakened later, of course. The difference between sleep(3) and pth_sleep(3) is that that pth_sleep(3) suspends only the execution of the current thread and not the whole process. If the function returns because the requested time has elapsed, the value returned will be C<0>. If the function returns due to the delivery of a signal, the value returned will be the unslept amount (the requested time minus the time actually slept) in seconds. @ 1.139 log @*** empty log message *** @ text @d1586 2 a1587 2 threads execution until I microsecond (= I * 1/1000000 sec) elapsed. The thread is guaranteed to not awakened before this time, but d1589 1 a1589 1 later, of course. 
The difference between usleep(3) and pth_usleep(3) is that d1591 3 a1593 1 the whole process. d1601 1 a1601 1 course. The difference between sleep(3) and pth_sleep(3) is that that d1603 4 a1606 1 whole process. @ 1.138 log @*** empty log message *** @ text @d2244 3 a2246 2 I. Currently (as of January 2000) you can reach about 50 Pth users on this mailing list. @ 1.137 log @*** empty log message *** @ text @d293 1 a293 1 =item B B vs. B thread scheduling d538 1 a538 1 this thread yields execution (either explicitly by yielding excution d1163 1 a1163 1 C. Once it returned C the thread will be awakend. The d1865 1 a1865 1 straight-foreward and works fine for small projects. d1963 1 a1963 1 compile. For this, it is a convinient practice to include the required @ 1.136 log @*** empty log message *** @ text @d28 1 a28 1 # read the listings of the object deck. @ 1.135 log @*** empty log message *** @ text @d151 5 a155 1 pth_writev_ev. d173 5 a177 1 pth_pwrite. d1541 32 d1715 28 @ 1.134 log @*** empty log message *** @ text @d915 4 a918 5 particular cooperating thread. If I is not C and points to a I thread, it is guaranteed that this thread receives execution control on the next dispatching step. If I is in a different state (that is, still not I) this has no effect and is equal to calling this function with I specified as C. d921 2 a922 2 C set to C) if I specified and invalid or still not ready thread. @ 1.133 log @*** empty log message *** @ text @d1011 1 a1011 1 =item pth_time_t B(int I, int I); d1017 1 a1017 1 =item pth_time_t B(int I, int I); @ 1.132 log @*** empty log message *** @ text @d711 1 a711 1 The thread stack size in bytes. Use lower values than 32KB with great care! d782 1 a782 1 C, C := 32*1024 and d1750 1 a1750 1 pth_attr_set(attr, PTH_ATTR_STACK_SIZE, 32*1024); @ 1.131 log @*** empty log message *** @ text @d975 5 a979 4 awakened and stores the value of I's pth_exit(3) call into I and returns to the caller. 
A thread can be joined only when it was I spawned with C. A thread can only be joined once, i.e., after the pth_join(3) call the thread I is removed from the system. @ 1.130 log @*** empty log message *** @ text @d683 1 a683 1 threads. The following attribute fields exists in attribute objects: d789 11 a799 3 This sets the attribute field I in I to a value specified as an additional argument on the variable argument list. The following attribute fields exists: d803 20 d825 2 a826 2 This destroys a attribute object I. After this I is no longer a valid attribute object. d2127 1 a2127 1 B (intentionally) provides no replacements for thread-safe @ 1.129 log @*** empty log message *** @ text @d1768 2 a1769 2 B installation via its C program. This approach works fine for small projects. d1773 11 a1783 10 The previous approach is simple but unflexible. First, to speed up building it would be nice to not expand the compiler and linker flags every time the compiler is started. Second, it would be useful to also be able to build against an uninstalled B, that is, against a B source tree which was just configured and built, but not installed. Third, it would be also useful to allow checking of the B version to make sure it is at least a minimum required version. And finally, it would be also great to make sure B works correctly by first performing some sanity compile and run-time checks. All this can be done if we use GNU B and the C macro provided by B. For this, we establish the following three files: d1808 3 a1810 2 C target which cleans up this, too. Second, we write a (minimalistic) B script specification in a file C: d1814 1 a1814 1 | AC_CHECK_PTH(1.2.0) d1834 3 a1836 2 If we now run C we get a correct C which immediately can be used to build C (assuming that B is already installed somewhere): d1846 1 a1846 1 checking for GNU Pth... 
version 1.2.0, installed under /usr/local d1852 8 a1859 2 gcc -g -O2 -I/sw/pkg/pth/include -c foo.c gcc -L/sw/pkg/pth/lib -o foo foo.o -lpth d1917 1 a1917 1 | AC_CHECK_PTH(1.2.0, subdir:pth --disable-tests) d1931 4 a1934 4 Now we let B's C program again generate an C file for us with the contents of B's C macro and finally we generate the C script out of this C file and the C file: d1990 1 a1990 1 checking for GNU Pth... version 1.3a1, local under pth d2014 3 a2016 3 As you can see B now automatically configures the local (stripped down) copy of B in the subdirectory C and the C automatically builds the subdirectory, too. @ 1.128 log @*** empty log message *** @ text @d1652 7 a1658 8 The following example is a useless server which does nothing more than listening on a TCP port (specified numerically on the command line) and displaying the current time to the socket when a connection was established. For each incoming connection a thread is spawned. Additionally, to see more multithreading, a useless ticker thread runs simultaneously which outputs the current time to C every 5 seconds. The example contains I error checking and is I intended to show you the look and feel of B. d1672 2 a1715 5 if (argc != 2) { fprintf(stderr, "Usage: %s \n", argv[0]); exit(1); } a1716 1 port = atoi(argv[1]); d1729 1 a1729 1 sar.sin_port = htons(port); @ 1.127 log @*** empty log message *** @ text @d333 1 a333 1 traditional approach to achieve tread-safety is to wrap a function body d344 1 a344 1 side-effects from within a signal handler context. Usually very less d371 3 a373 4 function after each other controlled by this matrix. The threads are created by more than one jump-trail through this matrix and by switching the individual jump-trails between function calls controlled by corresponding occurred events. d604 1 a604 1 kills the treading system and returns C. d928 1 a928 1 just made pending. 
But when its C the tread is d1714 5 @ 1.126 log @*** empty log message *** @ text @a1118 7 =item C This is a process event. Three additional arguments have to be given which correspond to the arguments of the waitpid(2) function: `C', `C' and `C'. This events waits until the process changed to the specified state. Example: `C'. d1121 11 a1131 7 This is a custom callback function event. Two additional arguments have to be given with the following types: `C' and `C'. The first is a function pointer and the second is an argument which is passed to the function. The scheduler calls this function on a regular basis (on his own scheduler stack, so be careful!) and the thread is kept sleeping while the function returns 0. Once it returned not 0 the thread will be awakend. Example: `C'. @ 1.125 log @*** empty log message *** @ text @d1200 1 a1200 1 similar to the POSIX Pthread API. Use this for thread specific global data. d2108 1 a2108 1 July 1999 by I after evaluating various (mostly d2116 1 a2116 1 performance) with an API similar to the popular one found in Pthread d2119 9 a2127 9 So the essential idea for the non-preemptive approach was taken over from I scheduler. The priority based scheduling algorithm was contributed by I. Some code inspiration also came from an experimental threading library (B) written by I for an ancient internal test version of the Apache webserver. The concept and API of message ports was borrowed from AmigaOS' B subsystem. The concept and idea for the flexible event mechanism came from I's B (which can be found as a part of B v8). @ 1.124 log @*** empty log message *** @ text @d183 1 a183 1 execution (aka "multithreading") inside event-driven applications. All threads d198 1 a198 1 ("Pthreads") which can be used for backward compatibility to existing d207 1 a207 1 machines, we use "multitasking" -- that is, we have the application d334 1 a334 1 with an internal mutual exclusion lock (aka "mutex"). 
As you should d467 1 a467 1 concept of "coroutines". On the other hand, event driven applications d526 1 a526 1 priority of all remaining threads by 1, to prevent them from "starving". d589 1 a589 1 unit of the current process into a thread (the "main" thread). It d600 1 a600 1 ``C'' in the main thread (which waits for all other threads to d602 1 a602 1 ``C'' (which immediately kills the threading system and d622 11 a632 8 C for the number of threads in the new queue (threads created via pth_spawn(3) but still not scheduled once), C for the number of threads in the ready queue (threads who want to do CPU bursts), C for the number of running threads (always just one thread!), C for the number of threads in the waiting queue (threads waiting for events), C for the number of threads in the new queue (terminated threads waiting for a join). d636 1 a636 1 This requires a second argument of type ``C'' (pointer to a floating d647 1 a647 1 This requires a second argument of type ``C'' which identifies a d653 1 a653 1 This requires a second argument of type ``C'' which identifies a d655 1 a655 1 pth_ctrl(3) should be casted to a ``C''. d659 1 a659 1 This requires a second argument of type ``C'' to which a summary d669 1 a669 1 This function returns a hex-value ``0xIIII'' which describes the d781 1 a781 1 C := C, C := "C", d816 1 a816 1 ``pth_exit(I(I))'' inside the new thread unit, i.e., I's d831 1 a831 1 as ``I(I)'' in the system. In other words: Only the first call to d833 2 a834 2 I should be declared as ``C I = C;'' before calling this function. d868 1 a868 1 performed, i.e., ``C'' returns C when thread I d932 1 a932 1 ``C'' at one of his cancellation points. In B d939 1 a939 1 waits to be joined it just joins it (via ``CIC<, NULL)>'') and d942 1 a942 1 ``CIC<)>''. d1038 1 a1038 1 ``C''. d1064 1 a1064 1 ``C''. 
d1078 3 a1080 3 Example: ``C'' where C has to be of type ``C'', C has to be of type ``C'' and C, C and C have to be of type ``C'' (see d1086 2 a1087 2 to a signal set (type ``C'') and a pointer to a signal number variable (type ``C''). This event waits until one of the signals in d1092 2 a1093 2 your notice. Example: ``C''. d1103 1 a1103 1 ``C''. d1109 1 a1109 1 on the specified message port. Example: ``C''. d1117 1 a1117 1 ``C''. d1122 3 a1124 3 correspond to the arguments of the waitpid(2) function: ``C'', ``C'' and ``C''. This events waits until the process changed to the specified state. Example: ``C''. d1129 1 a1129 1 given with the following types: ``C'' and ``C''. The d1134 1 a1134 1 Example: ``C''. d1153 3 a1155 3 To make it clear, when you constructed I via ``C'' you have to extract it via ``C'', etc. For multiple arguments of an event the d1347 3 a1349 3 This dynamically initializes a mutex variable of type ``C''. Alternatively one can also use static initialization via ``C''. d1370 2 a1371 2 ``C''. Alternatively one can also use static initialization via ``C''. d1390 2 a1391 2 ``C''. Alternatively one can also use static initialization via ``C''. d1411 3 a1413 3 This dynamically initializes a barrier variable of type ``C''. Alternatively one can also use static initialization via ``CIC<)>''. d1431 2 a1432 2 API, i.e., they are similar to the functions under "B" but all have an additional event argument which can be used d1656 7 a1662 4 The following example is a useless server which does nothing more than listening on a specified TCP port and displaying the current time to the socket when a connection was established. For each incoming connection a thread is spawned. The example contains I error checking and is I d1677 1 d1691 1 d1708 1 d2021 1 a2021 1 to read(3) ``magically'' means pth_read(3). The problem here is that such d2133 1 a2133 1 fix the problem and include a patch, made with ``C'', in your d2139 2 a2140 2 I with ``C'' (or ``C I
'' if you want to subscribe d2150 1 a2150 1 ``comp.programming.threads Newsgroup Archive'', d2154 1 a2154 1 ``comp.programming.threads Frequently Asked Questions (F.A.Q.)'', d2157 1 a2157 1 ``I'', d2161 1 a2161 1 ``I'', d2176 1 a2176 1 ``I'', d2181 1 a2181 1 ``I'', d2186 1 a2186 1 ``I'', d2191 1 a2191 1 ``I'', d2196 1 a2196 1 ``I'', d2204 1 d2206 1 a2206 1 sigprocmask(2). sigsuspend(2), sigsetjmp(3), siglongjmp(3), setjmp(3), @ 1.123 log @*** empty log message *** @ text @d81 1 a81 1 pth_sfiodisc, d190 1 a190 1 scheduler. The intention is, that this way both better portability and run-time d199 2 a200 1 multithreaded applications. d205 1 a205 1 regular jobs and one-shot requests have to processed in parallel. d215 1 a215 1 of memory). d221 1 a221 1 (one has to use atomic locks, etc). The machine's resources can be d227 3 a229 3 load because of these resource problems. In practice, lot's of tricks are usually used to overcome these problems (ranging from pre-forked sub-process pools to semi-serialized processing, etc). d231 1 a231 1 One the most elegant ways to solve these resource- and data-sharing d237 1 a237 1 processes. Threads are neither the optimal runtime facility for all d257 1 a257 1 descriptors>, I. On every process switch, the kernel d274 1 a274 1 called light-weight processes / LWP). d276 8 a283 7 User-space threads are usually more portable and can perform faster and cheaper context switches (for instance via setjmp(3)/longjmp(3)) than kernel based threads. On the other hand, kernel-space threads can take advantage of multiprocessor machines and don't have any I/O blocking problems. Kernel-space threads are usually scheduled in preemptive way side-by-side with the underlying processes. User-space threads on the other hand use either preemptive or non-preemptive scheduling. 
In preemptive scheduling, the scheduler lets a thread execute until a blocking
situation occurs (usually a function call which would block) or the assigned
timeslice elapses. Then it takes control away from the thread without the
thread having a chance to object. This is usually realized by interrupting the
thread through a software signal (like C<SIGALRM> or C<SIGVTALRM>). In
non-preemptive scheduling, once a thread has received control from the
scheduler it keeps it until either a blocking situation occurs (again a
function call which would block, and which instead switches to the scheduler)
or the thread explicitly yields control back to the scheduler in a cooperative
way.

The responsiveness of a system can be described by the user-visible delay
until the system responds to an external request. When this delay is small
enough that the user does not notice it, the responsiveness of the system is
considered good; when the user notices or is even annoyed by the delay, it is
considered bad.

=item B<o> B<reentrant>, B<thread-safe> and B<asynchronous-safe> functions

A reentrant function is one that can be entered simultaneously by several
threads. Functions that access global state, such as memory or files, of
course need to be carefully designed in order to be reentrant. Two traditional
approaches to this problem are caller-supplied states and thread-specific
data. Thread-safety is the avoidance of I<race conditions>, i.e., situations
in which data is set to either a correct or an incorrect value depending upon
the (unpredictable) order in which multiple threads access and modify it. So a
function is thread-safe when it behaves semantically correctly when executed
by several threads. As you should recognize, reentrance is a slightly stronger
attribute than thread-safety, because it is usually harder to achieve.
Additionally there is a related attribute named I<asynchronous-safe>, which
comes into play in conjunction with signal handlers. An asynchronous-safe
function is one that can be called safely and without side effects from within
a signal handler context. Usually very few functions are of this type, because
an application is severely restricted in what it can perform from within a
signal handler: only a few POSIX functions are officially declared to be, and
guaranteed to be, async-safe.

In the matrix-dispatching approach, the global procedures of the application
are split into small execution units (each runs for no more than a few
milliseconds) and those units are implemented by separate functions. Then a
global matrix is defined which describes the execution (and perhaps even
dependency) order of these functions. The main server procedure then just
dispatches between these units by calling one function after the other,
controlled by this matrix. The threads are created by more than one jump-trail
through this matrix, and by switching the threads between these jump-trails in
response to the corresponding events.

This approach gives the best possible performance, because one can fine-tune
the threads of execution by adjusting the matrix, and the scheduling is done
explicitly by the application itself. It is also very portable, because the
matrix is just an ordinary data structure and the functions are a standard
feature of ANSI C.

The disadvantage of this approach is that it is complicated to write large
applications this way, because such applications quickly accumulate
hundreds(!) of execution units, and the control flow inside the application
becomes very hard to understand (it is interrupted by function borders, and
one always has to keep the global dispatching matrix in mind to follow it).
Additionally, all threads operate on the same execution stack. Although this
saves memory, it is often awkward, because one cannot switch between threads
in the middle of a function; the scheduling borders are the function borders.
In the thread-scheduling approach, one programs the application as with
fork(2)'ed processes: one spawns a thread of execution which runs from
beginning to end without an interrupted control flow. The control flow can
still be interrupted, even in the middle of a function, in a preemptive way
similar to what the kernel does for the heavy-weight processes, i.e., every
few milliseconds the scheduler switches between the threads of execution. But
the thread itself doesn't notice this and usually (except for synchronization
issues) doesn't have to care about it.

The advantage of this approach is that it is usually very easy to program,
because the control flow and context of a thread directly follow a procedure
without forced interrupts through function borders. Additionally, the
programming is very similar to a fork(2)-based approach.

The disadvantage is that although the performance is increased compared to
approaches using heavy-weight processes, it is decreased compared to the
matrix approach, because preemptive thread scheduling usually performs a lot
more context switches. Finally, there is no really portable POSIX/ANSI-C based
way to implement user-space preemptive threads: either the platform already
has threads, or one has to hope that a semi-portable package exists for it.
And even those semi-portable packages have to deal with assembler code and
other nasty internals, and are not easy to port to forthcoming platforms.

So, in short: the matrix-dispatching approach is portable and fast, but nasty
to program; the thread-scheduling approach is easy to program, but suffers
from synchronization and portability problems caused by its preemptive nature.

B<Pth> is a compromise between the two approaches. It uses a nifty and
portable POSIX/ANSI-C approach for thread creation (and this way doesn't
require any platform-dependent assembler hacks) and schedules the threads in a
non-preemptive way (which doesn't require unportable facilities like
C<SIGVTALRM>). On the other hand, this way not all fancy threading features
can be implemented.

The non-preemptive scheduling is also the reason B<Pth> targets event-driven
programs. Number-crunching applications usually require preemptive scheduling
to achieve concurrency because of their long CPU bursts. For them,
non-preemptive scheduling (even together with explicit yielding) provides only
the old concept of "coroutines". Event-driven applications, on the other hand,
benefit greatly from non-preemptive scheduling. They have only short CPU
bursts and lots of events to wait on, and this way they run faster under
non-preemptive scheduling, because none of the unnecessary context switching
of preemptive scheduling occurs. That's why B<Pth> is mainly intended for
server-type applications.

Because of the non-preemptive scheduling, under B<Pth> a function is never
reentered before it has returned. This is a great portability benefit, because
thread-safety can be achieved more easily than reentrance. Especially this
means that under B<Pth> more existing third-party libraries can be used
without side effects than is the case for other threading systems.

Finally, B<Pth> runs on almost all types of Unix kernels, because the kernel
does not even recognize the B<Pth> threads (they are implemented entirely in
user space). On the other hand, B<Pth> cannot benefit from the existence of
multiprocessors, because for this, kernel support would be needed. In
practice, this is no problem, because multiprocessor systems are rare, and
portability is more important than highest concurrency.
To understand the B<Pth> API it helps to first understand the life cycle of a
thread in the B<Pth> threading system.

A freshly spawned thread is handed over to the scheduler. On the next
dispatching, the scheduler picks it up and moves it to the B<READY> queue.
This is a queue containing all threads which want to perform a CPU burst.
There they are queued in priority order. On each dispatching step, the
scheduler always removes only the thread with the highest priority. It then
increases the priority of all remaining threads by 1, to prevent them from
"starving". The thread which was removed from the B<READY> queue is the new
B<RUNNING> thread (there is always just one B<RUNNING> thread, of course).
The B<RUNNING> thread is assigned execution control. After this thread yields
execution (either explicitly, or implicitly by calling a function which would
block) there are three possibilities: either it has terminated, in which case
it is moved to the B<DEAD> queue; or it has events on which it wants to wait,
in which case it is moved into the B<WAITING> queue; or else it is assumed to
want to perform more CPU bursts, and it enters the B<READY> queue again.

Before the next thread is taken out of the B<READY> queue, the B<WAITING>
queue is checked for pending events. If one or more events of a thread have
occurred, the threads that are waiting on them are immediately moved to the
B<READY> queue. The purpose of the B<NEW> queue has to do with the fact that
in B<Pth> a thread never directly switches to another thread. A thread always
yields execution to the scheduler, and the scheduler dispatches to the next
thread. So a freshly spawned thread has to be kept somewhere until the
scheduler gets a chance to pick it up for scheduling. That is what the B<NEW>
queue is for. The purpose of the B<DEAD> queue is to support thread joining.
When a thread is marked as unjoinable, it is directly kicked out of the system
after it has terminated. But when it is joinable, it enters the B<DEAD> queue,
where it remains until another thread joins it.

Finally, there is a special separate queue named B<SUSPENDED>, to which
threads can be moved from the other queues by the application. The purpose of
this special queue is to temporarily take a thread out of scheduling entirely;
a suspended thread is later resumed by the application and moved back to the
queue from where it originally came.

=head1 APPLICATION PROGRAMMERS INTERFACE

In the following, the B<Pth> I<application programming interface> (API) is
discussed in detail. With the knowledge given above, it should now be easy to
understand how to program threads with this API. pth_init() transforms the
single execution unit of the current process into a thread (the "main"
thread). pth_kill() returns immediately with a return code of C<FALSE> if it
is not called from within the main thread; otherwise it kills the threading
system and returns C<TRUE>.
But the advantage of this mapping variant is that the source files of the application where these function calls occur have not to include C and that existing libraries, including the vendor's B, magically become thread-aware. d2015 31 a2045 29 B is very portable because it has only one part which perhaps has to be ported to new platforms (the machine context initialization). But it is written in a way which works on mostly all Unix platforms which support sigstack(2) or sigaltstack(2) [see C for details]. Any other code is straight-forward POSIX and ANSI C based. The context switching is done via POSIX [sig]setjmp(3) and [sig]longjmp(3). Here all CPU registers, the program counter and the stack pointer are switched. Additionally the B dispatcher switches also the global Unix C variable [see C for details] and the signal mask (either implicitly via sigsetjmp(3) or in an emulated way via explicit setprocmask(2) calls). The B event manager is mainly select(2) and gettimeofday(2) based, i.e., the current time is fetched via gettimeofday(2) once per context switch for calculations and both the time and all I/O events are implemented via a single select(2) call [see C for details]. The thread control block management is done via priority queues without any additional data structure overhead. For this the queue linkage attributes are part of the thread control blocks and the queues are actually implemented as rings with a selected element as the entry point [see C and C for details]. Most time critical sections (especially the dispatcher and event manager) are speeded up by inlined functions (implemented as ANSI C pre-processor macros). Additionally any debugging code is I removed from the source when not built with C<-DPTH_DEBUG> (see Autoconf C<--enable-debug> option), i.e., not only stub functions remain [see C for details]. 
d2050 9 a2058 9 functions (like strtok(3) which uses a static internal buffer) or synchronous system functions (like gethostbyname(3) which doesn't provide an asynchronous mode where it doesn't block). When you want to use those functions in your server application together with threads you've to either link the application against special third-party libraries (or for thread-safe/reentrant functions possibly against an existing C of the platform vendor). For an asynchronous DNS resolver library use either the C from B ( see ftp://ftp.isc.org/isc/bind/ ) or the forthcoming GNU B package from Ian Jackson ( see http://www.gnu.org/software/adns/adns.html ). d2062 21 a2082 18 The B library was designed and implemented between February and July 1999 by I after evaluating various (mostly preemptive) thread libraries and intensive discussions with I, I, I and I related to an experimental (matrix based) non-preemptive C++ scheduler class written by I. B was then implemented in order to combine the I approach of multithreading (providing better portability and performance) with an API similar to the one found in POSIX thread libraries (providing easy programming). So the essential idea for the non-preemptive approach was taken over from I scheduler. The priority based scheduling algorithm was contributed by I. Some code inspiration also came from an old threading library (B) written by I for an ancient internal test version of Apache. The concept and API of message ports was borrowed from AmigaOS' B. The concept and idea for the flexible event mechanism came from I's B (part of B). d2087 13 a2099 10 complete as possible to I. If you can, please try to fix the problem and include a patch, made with ``C'', in your report. Always at least include a reasonable amount of description in your report to allow the author to reproduce the bug. For further support you additionally can subscribe yourself to the I mailing list by sending a mail to I with ``C'' in the body. 
Then you can discuss your issues with other B users by sending messages to I. d2134 5 @ 1.119 log @*** empty log message *** @ text @d176 2 a177 2 runs in the same address space of the application process, but each thread has its own individual program-counter, run-time stack, signal mask and errno d189 2 a190 2 Additionally B provides an optional emulation API for POSIX.1c threads ("pthreads") which can be used for backward compatibility to existing d195 37 a231 35 When programming event driven applications, usually servers, lots of regular jobs and one-shot requests have to processed in parallel. To achieve this in an efficient way on uniprocessor machines the idea of multitasking is implemented by the operating system which can be used by the applications to spawn multiple instances of itself. On Unix the kernel classically implements multitasking in a preemptive and priority-based way through heavy-weight processes spawned with fork(2). These processes do usually I share a common address space. Instead they are clearly separated from each other and were created by direct cloning a process address space (although modern kernels use memory segment mapping and copy-on-write semantics to avoid unnecessary copying of memory). The drawbacks are obvious: Sharing data between the processes is complicated and can usually only solved in an efficient way through shared memory (but which itself is not very portable). Synchronization is complicated because of the preemptive nature of the Unix scheduler (one has to use atomic locks, etc). The machine resources can be exhausted very quickly when the server application has to serve too much longer running requests (heavy-weight processes cost memory). Additionally when for each request a sub-process is spawned to handle it, the server performance and responsiveness is horrible (heavy-weight processes cost time to spawn). 
And finally the server application doesn't scale very well with the load because of these resource problems. Lot's of tricks are usually done in practice to overcome these problems (ranging from pre-forked sub-process pools to semi-serialized processing, etc). One the most elegant ways to solve these resource and data sharing problems is to have multiple I threads of execution inside a single (heavy-weight) process, i.e., to use I. Those I usually improve responsiveness and performance of the application, often improve and simplify the internal program structure and especially require less system resources. Threads neither are the optimal runtime facility for all types of applications nor can all applications gain from them. But at least event driven server applications usually benefit greatly from using threads. d235 5 a239 5 Lots of documents exists which describe and define the world of threading. To understand B only the basic knowledge about threading is actually required. The following definitions of thread related terms should at least help you in understanding the programming context of threads in order to allow you to use B. d245 9 a253 8 A process on Unix systems consist of at least the following fundamental ingredients: I, I, I, I, I, I, I, I. On every process switch the kernel saves and restores these ingredients for the individual processes. On the other hand a thread consists only of a private program counter, stack memory, stack pointer and signal table. All other ingredients, especially the virtual memory, it shares with the other threads of the same process. d257 17 a273 16 Threads on a Unix platform classically can be implemented either inside kernel space or user space. When threads are implemented by the kernel, the thread context switches are performed by the kernel without notice by the application. When threads are implemented in user space, the thread context switches are performed by an application library without notice by the kernel. 
Additionally there exist also hybrid threading approaches where typically a user-space library binds one or more user-space threads to one or more kernel-space threads (there usually called light-weight processes / LWP). User space threads are usually more portable and can perform faster and cheaper context switches (for instance via setjmp(3)/longjmp(3)) than kernel based threads. On the other hand, kernel space threads can take advantage of multiprocessor machines and don't have any I/O blocking problems. Kernel-space threads are usually always scheduled in preemptive way side-by-side with the underlaying processes. User-space threads on the other hand use either preemptive or non-preemptive scheduling. d282 1 a282 1 C or C). In non-preemptive scheduling once a thread d288 1 a288 1 =item B B vs. B d290 2 a291 2 Concurrency exists when at least two threads are I at the same time. Parallelism arises when at least two threads are I d293 4 a296 3 machines, of course. But one also usually speaks of parallelism or I in the context of preemptive thread scheduling and of I in the context of non-preemptive thread scheduling. d309 12 a320 11 simultaneously by several threads. Functions that access global state, like memory or files, have inherently reentrant problems, of course. Two classical approaches to solve these problems are caller-supplied states and thread-specific data. Thread-safety is the avoidance of I, i.e., situations in which data is set to either correct or incorrect value depending upon the (unpredictable) order in which multiple threads access and modify the data. So a function is thread-safe when it behaves logically correct when executed by several threads. As you should recognize, reentrant is a stronger attribute than thread-safe. d326 4 a329 4 context. Usually very less functions are of this type. 
The problem is that an application is very restricted in what it can perform from within a signal handler, because only a few POSIX functions are officially declared as and guarrantied to be async-safe. d333 1 a333 1 =head2 User-Land Threads d335 2 a336 2 User-land threads can be implemented in various way. The two classical approaches are: d345 14 a358 14 execution units (each has to run maximal a few milliseconds) and those units are implemented by separate program functions. Then a global matrix is created which describes the execution (and perhaps even dependency) order of these functions. The main server procedure then does just dispatching between these units by calling one function after each other controlled by this matrix. The treads are created by more than one jump-trail through this matrix and by switching between these jump-trails controlled by corresponding occurred events. The advantage of this approach is that the performance is really as maximal as possible (because one can fine-tune the threads of execution by adjusting the matrix and the scheduling is done explicitly by the application itself). Additionally this is very portable, because the matrix is just an ordinary data structure and functions are a standard feature of ANSI C. d361 8 a368 7 applications with this approach, because in those applications one quickly get hundreds(!) of execution units and the control flow inside such an application is very hard to understand (because it is interrupted by function borders and one always has to remember the global dispatching matrix to follow it). Additionally all threads operate on the same execution stack. Although this saves memory it is often nasty because one cannot switch between threads in the middle of a function. The scheduling borders are function borders. 
d372 1 a372 1 B d374 15 a388 13 Here the idea is that one programs the application as with fork(2)'ed processes, i.e., one spawns a thread of execution and this runs from the begin to the end without an interrupted control flow. But the control flow can be still interrupted - even in the middle of a function. Actually in a preemptive way similar to what the kernel does for the heavy-weight processes, i.e., every few milliseconds the scheduler switches between the threads of execution. But the thread itself doesn't recognize this and usually (except for synchronization issues) doesn't have to care about this. The advantage of this approach is that it's usually very easy to program, because the control flow and context of a thread directly follows a procedure without forced interrupts through function borders. Additionally the programming is very similar to a fork(2)'ed approach. d393 1 a393 1 scheduling does usually a lot more context switches (every user-land context d397 4 a400 4 user-space preemptive threads. Either the platform already has threads or one has to hope that some semi-portable package exists for it. And even those semi-portable packages have to deal with assembler code and other nasty internals and are not easy to port to forthcoming platforms. d404 4 a407 3 So, in short: The matrix-dispatching approach is portable and fast, but nasty to program. The thread scheduling approach is easy to program, but suffers from synchronization and portability problems caused by its preemptive nature. d411 9 a419 10 But why not combine the good aspects of both discussed approaches while trying to avoid their bad aspects? That's the general intention and goal of B. In detail this means that B implements the easy to program threads of execution but in a way which doesn't have the portability side-effects of preemptive scheduling. This means that instead a non-preemptive scheduling is used. This sounds and is an interesting approach. 
Nevertheless one has to keep the implications of non-preemptive thread scheduling in mind when working with B. The following list summarizes a few essential points: d427 4 a430 4 The reasons are mainly because it uses a nifty and portable POSIX/ANSI-C approach for thread creation (and this way doesn't require any platform dependent assembler hacks) and schedules the threads in non-preemptive way (which doesn't require unportable facilities like C). On the other d437 2 a438 2 B. d440 1 a440 1 The reason is the non-preemptive scheduling. Number crunching applications d467 7 a473 6 This means that B runs on mostly all types of Unix kernels, because the kernel does not even recognize the B threads (because they are implemented entirely in user-space). On the other hand, it cannot benefit from the existance of multiprocessors, because for this kernel support would be needed. Practice this is no problem because multiprocessor systems are rare and portability is more important than highest concurrency. d479 3 a481 3 To better understand the B API it is reasonable to first understand the life cycle of a thread in the B threading system. It can be illustrated with the following graph: d486 1 a486 1 +---> READY----+ d495 1 a495 1 scheduler. On the next dispatching the scheduler picks it up from there and d497 4 a500 4 want to perform a CPU burst. There they are queued in priority order. Per dispatching step, the scheduler always removes the thread with the highest priority only. The assigned queue priority for all remaining threads every time is increased by 1 to prevent thread starvation. d511 15 a525 11 Before the next thread is taken out of the B queue, the B queue is checked for pending events. When one or more events of a thread occured, it is immediately moved to the B queue, too. The purpose of the B queue has to do with the fact that a thread never directly switches to another thread. 
A thread always yields execution to the scheduler and the scheduler dispatches to the next thread. The purpose of the B queue is to support thread joining. When a thread is marked to be unjoinable, it is directly kicked out of the system after it terminated. But when it is joinable it enters the B queue. There is remains until another thread joins it. d530 1 a530 1 disscussed in detail. With the knowledge given above it should be now easy to d535 1 a535 1 The following functions act on a global library basis. They are used to d542 5 a546 5 This initializes the B library. It has to be really the first B API function call in an application and is mandatory. It's usually done at the begin of the main() function of the application. This implicitly spawns the internal scheduler thread and transforms the single execution unit of the current process into a thread (the "main" thread). d552 1 a552 1 the main function of the application. At least it has to be called from within d554 1 a554 1 calling thread into the single execution unit of the underlaying process. The d563 1 a563 1 =item long B(unsigned lont I, ...); d798 1 a798 1 signal to a thread and its guarranties that only this thread gets the signal d822 1 a822 1 I thread, it is guarrantied that this thread receives execution control d835 1 a835 1 a resolution of one microsecond. In pratice you should neither rely on this d837 1 a837 1 only guarranties that the thread will sleep at least I. But because d847 1 a847 1 the various pth_event_xxx() functions). It's modelled like select(2), i.e., one d850 1 a850 1 when one ore more of them occurred after tagging them as occured. The I d852 2 a853 2 tagging. pth_wait(3) returns the number of occured events and the application can use pth_event_occurred(3) to test which events occured. d861 1 a861 1 performed. When its C again the calcellation request is d863 1 a863 1 immediately cancelled before pth_cancel(3) returns. 
The effect of a thread d871 1 a871 1 This is the crual way to cancel a thread I. When it's already dead and d881 1 a881 1 awakend and stores the value of I's pth_exit(3) call into I and d905 1 a905 1 This switches the non-blocking mode flag on filedescriptor I. The d911 1 a911 1 longer a requirement to manually switch a filedescriptor into non-blocking d913 1 a913 1 Instead when you now switch a filedescriptor explicitly into non-blocking d931 1 a931 1 This functions is always available, but only reasonably useable when B d993 1 a993 1 This is a filedescriptor event. One or more of C, d995 2 a996 2 I to specify on which state of the filedescriptor you want to wait. The filedescriptor itself has to be given as an additional argument. Example: d1001 1 a1001 1 This is a multiple filedescriptor event modeled directly after the select(2) d1003 4 a1006 4 convinient way to wait for a large set of filedescriptors at once and at each filedescriptor for a different type of state. Additionally as a nice side-effect one receives the number of filedescriptors which causes the event to be occurred (using BSD semantics, i.e., when a filedescriptor occurred in d1014 1 a1014 1 select(2)). The number of occurred filedescriptors are stored in C. d1079 1 a1079 1 When pth_event(3) is threated like sprintf(3), then this function is d1133 1 a1133 1 similar to the POSIX pthread API. Use this for thread specific global data. d1289 1 a1289 1 C). Recursive locking is explicity supported, i.e., a thread is allowed d1448 1 a1448 1 the whole process in case the filedescriptors will block. d1456 1 a1456 1 elapsed. The thread is guarrantied to not awakened before this time, but d1466 1 a1466 1 thread is guarrantied to not awakened before this time, but because of the d1494 1 a1494 1 signal handler. Instead it's catched by the scheduler only in order to awake d1538 1 a1538 1 bytes into I from filedescriptor I. 
The difference between read(2) d1540 1 a1540 1 thread until the filedescriptor is ready for reading. For more details about d1546 1 a1546 1 filedescriptor I into the first I rows of the I vector. The d1548 1 a1548 1 suspends execution of the current thread until the filedescriptor is ready for d1555 1 a1555 1 from I to filedescriptor I. The difference between write(2) and d1557 1 a1557 1 thread until the filedescriptor is ready for writing. For more details about d1563 1 a1563 1 filedescriptor I from the first I rows of the I vector. The d1565 1 a1565 1 suspends execution of the current thread until the filedescriptor is ready for d1947 1 a1947 1 third-party stuff) this can be inconvinient. Here it's required that a call @ 1.118 log @*** empty log message *** @ text @d1652 1 @ 1.117 log @*** empty log message *** @ text @d596 6 @ 1.116 log @*** empty log message *** @ text @d3 1 a3 1 ## Copyright (c) 1999 Ralf S. Engelschall @ 1.115 log @*** empty log message *** @ text @d534 1 a534 1 =item void B(void); d538 10 a547 3 the main function of the application. It implicitly kills all threads and transforms back the calling thread into the single execution unit of the underlaying process. @ 1.114 log @*** empty log message *** @ text @d5 1 a5 1 ## This file is part of GNU Pth, a non-preemptive thread scheduling d26 2 a27 2 # ``Real programmers don't document. # Documentation is for wimps who can't d116 1 a116 1 pth_cleanup_push, d219 1 a219 1 processing, etc). d307 1 a307 1 thread-specific data. d323 1 a323 1 guarrantied to be async-safe. d336 1 a336 1 B d346 1 a346 1 events. d365 1 a365 1 B d406 1 a406 1 used. d414 1 a414 1 =item B d416 1 a416 1 B. d426 1 a426 1 =item B d429 1 a429 1 application, but NOT the concurrency of number crunching applications>. d441 1 a441 1 =item B d443 1 a443 1 B. d453 1 a453 1 =item B d456 1 a456 1 benefit from multiprocessor machines>. d499 1 a499 1 CPU bursts and enters the B queue again. 
d532 1 a532 1 current process into a thread (the "main" thread). d787 1 a787 1 bursts into smaller units with this call. d956 1 a956 1 =item pth_event_t B(unsigned long I, ...); d982 1 a982 1 timeouts already can be handled via C events). d1241 1 a1241 1 mutex to protect it, of course. d1247 1 a1247 1 the event mechanism. d1433 1 a1433 1 the whole process. d1443 1 a1443 1 whole process. d1548 1 a1548 1 desired position inside the file. d1556 1 a1556 1 desired position inside the file. d1579 1 a1579 1 d1585 1 a1585 1 d1592 1 a1592 1 d1598 1 a1598 1 d1608 2 a1609 2 int main(int argc, char *argv[]) d1618 1 a1618 1 d1622 1 a1622 1 d1628 1 a1628 1 d1636 1 a1636 1 d1661 1 a1661 1 | d1696 1 a1696 1 | d1931 1 a1931 1 accept(2), read(2), write(2). d1946 1 a1946 1 accept(2), read(2), write(2). d1961 1 a1961 1 is straightforward POSIX and ANSI C based. d2006 1 a2006 1 based) non-preemptive C++ scheduler class written by I. d2011 1 a2011 1 programming). d2058 1 a2058 1 Bibliography on threads and multithreading, d2065 2 a2066 2 ``I'', O'Reilly 1996; d2070 2 a2071 2 ``I'', Prentice Hall 1996; @ 1.113 log @*** empty log message *** @ text @d1644 268 @ 1.112 log @*** empty log message *** @ text @d2 1 a2 2 ## pth.pod -- Pth manual page ## d22 2 @ 1.112.2.1 log @*** empty log message *** @ text @d2 2 a3 1 ## GNU Pth - The GNU Portable Threads a22 2 ## ## pth.pod: Pth manual page @ 1.112.2.2 log @*** empty log message *** @ text @d538 3 a540 10 the main function of the application. At least it has to be called from within the main thread. It implicitly kills all threads and transforms back the calling thread into the single execution unit of the underlying process. The usual way to terminate a B application is either a simple ``C'' in the main thread (which waits for all other threads to terminate, kills the threading system and then terminates the process) or a ``C'' (which immediately kills the threading system and terminates the process).
The pth_kill() call returns immediately with a return code of C if it is not called from within the main thread. Otherwise it kills the threading system and returns C. @ 1.112.2.3 log @*** empty log message *** @ text @a1645 1 peer_len = sizeof(peer_addr); @ 1.111 log @*** empty log message *** @ text @d174 3 a176 3 execution ("multithreading") inside event-driven applications. All threads run in the same address space of the application process, but each thread has its own individual program-counter, run-time stack, signal mask and errno d179 1 a179 1 The thread scheduling itself is done in a cooperative way, i.e. the threads d181 1 a181 1 scheduler. The intention is that this way both better portability and run-time d222 1 a222 1 (heavy-weight) process, i.e. to use I. Those I d308 1 a308 1 Thread-safety is the avoidance of I, i.e. situations in which data d367 1 a367 1 processes, i.e. one spawns a thread of execution and this runs from the begin d370 1 a370 1 way similar to what the kernel does for the heavy-weight processes, i.e. every d553 1 a553 1 particular state, i.e. the C query is equal to the d585 1 a585 1 thread. It returns the name of the given thread, i.e. the return value of d629 1 a629 1 The thread cancellation state, i.e. a combination of C or d669 1 a669 1 The scheduling state of the thread, i.e. either C, d739 1 a739 1 ``pth_exit(I(I))'' inside the new thread unit, i.e. I's d773 1 a773 1 performed, i.e. ``C'' returns C when thread I d785 1 a785 1 times the threads should be cooperative, i.e. when they should split their CPU d795 1 a795 1 on the next dispatching step. If I is in a different state (i.e. still d819 1 a819 1 the various pth_event_xxx() functions). It's modelled like select(2), i.e. one d855 1 a855 1 with C. A thread can only be joined once, i.e. after the d935 1 a935 1 C for allowing asynchronous cancellations, i.e. d978 1 a978 1 to be occurred (using BSD semantics, i.e. when a filedescriptor occurred in d1052 1 a1052 1 sscanf(3), i.e.
it is the inverse operation of pth_event(3). This means that d1211 2 a1212 2 calls to pth_atfork_push(3), i.e. FIFO. The I fork handlers are called in the opposite order, i.e. LIFO. d1223 1 a1223 1 is forked into a separate process, i.e. in the parent process nothing changes d1261 1 a1261 1 C). Recursive locking is explicitly supported, i.e. a thread is allowed d1336 1 a1336 1 API, i.e. they are similar to the functions under "B event manager is mainly select(2) and gettimeofday(2) based, i.e. d1715 1 a1715 1 not built with C<-DPTH_DEBUG> (see Autoconf C<--enable-debug> option), i.e. @ 1.110 log @*** empty log message *** @ text @d12 1 a12 1 ## version 2 of the License, or (at your option) any later version. @ 1.109 log @*** empty log message *** @ text @d175 1 a175 1 in the same address space of the application process, but each thread has its @ 1.108 log @*** empty log message *** @ text @d514 1 a514 1 In the following the B I (API) is @ 1.107 log @*** empty log message *** @ text @d197 1 a197 1 implemented by the operating system which can be used by the applications to d1752 1 a1752 1 =head1 BUG REPORTS d1759 6 @ 1.106 log @*** empty log message *** @ text @d616 1 a616 1 =item C (read-write) [B] d736 8 a743 7 C for default attributes) with the starting point at routine I. This entry routine is called as ``pth_exit(I(I))'' inside the new thread unit, i.e. I's return value is fed to an implicit pth_exit(3). So the thread usually can exit by just returning. Nevertheless the thread can also exit explicitly at any time by calling pth_exit(3). But keep in mind that calling the POSIX function exit(3) still terminates the complete process and not just the current thread.
@ 1.105 log @*** empty log message *** @ text @d1349 1 a1349 1 =item int B(int I, const struct sockaddr *I, int I, pth_event_t I); d1357 1 a1357 1 =item int B(int I, struct sockaddr *I, int *I, pth_event_t I); d1471 1 a1471 1 =item int B(int I, const struct sockaddr *I, int I); d1480 1 a1480 1 =item int B(int I, struct sockaddr *I, int *I); @ 1.104 log @*** empty log message *** @ text @d10 1 a10 1 ## modify it under the terms of the GNU Library General Public d17 1 a17 1 ## Library General Public License for more details. d19 1 a19 1 ## You should have received a copy of the GNU Library General Public @ 1.104.2.1 log @*** empty log message *** @ text @d616 1 a616 1 =item C (read-write) [C] d736 7 a742 8 C for default attributes - which means that thread priority, joinability and cancel state are inherited from the current thread) with the starting point at routine I. This entry routine is called as ``pth_exit(I(I))'' inside the new thread unit, i.e. I's return value is fed to an implicit pth_exit(3). So the thread usually can exit by just returning. Nevertheless the thread can also exit explicitly at any time by calling pth_exit(3). But keep in mind that calling the POSIX function exit(3) still terminates the complete process and not just the current thread. @ 1.103 log @*** empty log message *** @ text @d775 1 a775 1 =item void B(void); d777 24 a800 9 This explicitly yields back the execution to the scheduler thread. Usually the execution is transferred back to the scheduler when a thread waits for an event. But when a thread has to do larger CPU bursts, it can be reasonable to interrupt it explicitly by doing a few pth_yield() calls to give other threads a chance to execute, too. This obviously is the cooperating part of B. A thread I to yield execution, of course. But when you want to program a server application with good response times the threads should be cooperative, i.e. when they should split their CPU bursts into smaller units with this call. 
@ 1.102 log @*** empty log message *** @ text @d1767 4 @ 1.101 log @*** empty log message *** @ text @d25 4 @ 1.100 log @*** empty log message *** @ text @d892 2 a893 1 structure when it is no longer needed. @ 1.99 log @*** empty log message *** @ text @d1575 1 d1582 2 a1583 1 printf("ticker: time: %s, average load: %.2f\n", ct, pth_load()); @ 1.98 log @*** empty log message *** @ text @d1244 1 a1244 1 returns C with C set to C. d1265 1 a1265 1 execution. Instead it returns C with C set to C. @ 1.97 log @*** empty log message *** @ text @d73 2 a74 1 pth_timeout. d880 13 @ 1.96 log @*** empty log message *** @ text @d861 1 a861 1 C on error. Keep in mind that since Pth 1.1 there is no d863 1 a863 1 mode in order to use it. This is automatically done temporarily inside Pth. @ 1.95 log @*** empty log message *** @ text @d129 3 a131 1 pth_cond_notify. d1200 6 a1205 5 locks (mutex), read-write locks (rwlock) and condition variables (cond). Keep in mind that in a non-preemptive threading system like B this might sound unnecessary at the first look, because a thread isn't interrupted by the system. Actually when you have a critical code section which doesn't contain any pth_xxx() functions, you don't need any mutex to protect it, of course. d1211 1 a1211 1 the event mechanism. d1278 17 @ 1.94 log @*** empty log message *** @ text @d71 1 a71 1 pth_nonblocking, d852 1 a852 1 =item int B(int I); d854 10 a863 2 This switches filedescriptor I into non-blocking mode which is a prerequisite to use it together with the B library. a1529 1 pth_nonblocking(fd); a1572 1 pth_nonblocking(sa); @ 1.94.2.1 log @*** empty log message *** @ text @d1219 1 a1219 1 returns C with C set to C. d1240 1 a1240 1 execution. Instead it returns C with C set to C. 
@ 1.93 log @*** empty log message *** @ text @d1559 1 a1559 1 pth_attr_set(attr, PTH_ATTR_NAME, "ticker") d1573 1 a1573 1 pth_attr_set(attr, PTH_ATTR_NAME, "handler") @ 1.92 log @*** empty log message *** @ text @d729 1 a729 1 C for no attributes) with the starting point at routine @ 1.91 log @*** empty log message *** @ text @d793 1 a793 1 =item int B(pth_event_t *I); @ 1.90 log @*** empty log message *** @ text @d726 1 a726 1 =item pth_t B(pth_attr_t *I, void *(*I)(void *), void *I); @ 1.89 log @*** empty log message *** @ text @d737 4 @ 1.88 log @*** empty log message *** @ text @d1685 8 @ 1.87 log @*** empty log message *** @ text @d1202 1 a1202 1 This dynamically initializes a mutex variable of type ``C''. d1204 1 a1204 1 *mutex = PTH_MUTEX_INITIALIZER>''. d1225 2 a1226 2 ``C''. Alternatively one can also use static initialization via ``C''. d1245 2 a1246 2 ``C''. Alternatively one can also use static initialization via ``C''. @ 1.86 log @*** empty log message *** @ text @d856 9 a864 1 function to avoid temporary structure values. @ 1.85 log @*** empty log message *** @ text @d139 3 a141 1 pth_write_ev. d155 5 a159 1 pth_write. d1315 8 d1331 8 d1440 9 d1456 25 @ 1.84 log @*** empty log message *** @ text @d139 1 a139 1 pth_write_ev, d153 1 a153 1 pth_write, d1602 1 a1602 1 The B library was designed and implemented between February and June 1999 @ 1.83 log @*** empty log message *** @ text @d1530 1 a1530 1 usleep(3), sleep(3), sigwait(3), waitpid(2), select(2), poll(2), connect(2), @ 1.82 log @*** empty log message *** @ text @d160 5 a164 4 provides non-preemptive scheduling for multiple threads of execution ("multithreading") inside even driven applications. All threads runs in the same address space of the application process, but each thread has it's own individual program-counter, run-time stack, signal mask and errno variable. @ 1.81 log @*** empty log message *** @ text @a138 1 pth_readline_ev. a152 1 pth_readline. 
a1307 8 =item ssize_t B(int I, void *I, size_t I, pth_event_t I); This is equal to pth_readline(3) (see below), but has an additional event argument I. When pth_readline(3) suspends the current thread's execution it usually only uses the I/O event on I to awake. With this function any number of extra events can be used to awake the current thread (remember that I actually is an event I). a1415 10 =item ssize_t B(int I, void *I, size_t I); This is a convenience function which is based on pth_read(3). It reads bytes from filedescriptor I into I until a newline (``C<\n>'') is found, EOF occurs or I is reached. It internally uses thread-local buffering to be able to read larger chunks of data. Do either use pth_read(3) I pth_readline(3) because pth_read(3) currently isn't aware of the buffering of pth_readline(3). When you need generalized I/O with buffering then use a real I/O library and let it use pth_read(3)/pth_write(3). @ 1.80 log @*** empty log message *** @ text @d1680 5 @ 1.79 log @*** empty log message *** @ text @d137 1 d152 1 d1294 8 d1410 8 @ 1.78 log @*** empty log message *** @ text @d1531 2 a1532 2 usleep(3), sleep(3), sigwait(3), waitpid(2), select(2), connect(2), accept(2), read(2), write(2). d1546 2 a1547 2 mapped: fork(2), sleep(3), waitpid(2), select(2), connect(2), accept(2), read(2), write(2). @ 1.77 log @*** empty log message *** @ text @d513 1 a513 1 This initializes the B library. It has to be the first B API d1513 43 d1590 1 a1590 7 B uses an explicit API (i.e. for instance you've to use pth_read(3) and cannot just use read(3)) which might be nasty for some users. The reason is that this way B doesn't require any system call wrappers which usually cannot be provided in a portable way. And portability is one of B' major goals.
Additionally B (intentionally) provides no replacements for thread-safe @ 1.76 log @*** empty log message *** @ text @d136 1 a136 1 pth_write_ev, d139 1 d150 1 a150 1 pth_write, d153 1 d907 17 d1284 1 a1284 1 =item ssize_t B(int I, const void *I, size_t I, pth_event_t I); d1286 5 a1290 5 This is equal to pth_write(3) (see below), but has an additional event argument I. When pth_write(3) suspends the current thread's execution it usually only uses the I/O event on I to awake. With this function any number of extra events can be used to awake the current thread (remember that I actually is an event I). d1308 8 d1393 1 a1393 1 =item ssize_t B(int I, const void *I, size_t I); d1395 5 a1399 5 This is a variant of the POSIX write(2) function. It writes I bytes from I to filedescriptor I. The difference between write(2) and pth_write(2) is that pth_write(2) suspends execution of the current thread until the filedescriptor is ready for writing. For more details about the arguments and return code semantics see write(2). d1418 8 @ 1.75 log @*** empty log message *** @ text @d64 1 d804 8 @ 1.74 log @*** empty log message *** @ text @d29 1 a29 1 B - GNU Portable Threads @ 1.73 log @*** empty log message *** @ text @d576 2 a577 2 current PTH library version. I is the version, I the revisions, I the level and I the type of the level (alphalevel=0, betalevel=1, @ 1.72 log @*** empty log message *** @ text @d7 1 a7 1 ## library which can be found at http://www.gnu.org/software/pth/. @ 1.71 log @*** empty log message *** @ text @d29 1 a29 1 B - GNU Portable Threads d33 1 a33 1 Pth PTH_VERSION_STR @ 1.70 log @*** empty log message *** @ text @d6 2 a7 2 ## This file is part of GNU Pth, a non-preemptive thread scheduling ## library which can be found at http://www.gnu.org/software/pth/.
@ 1.69 log @*** empty log message *** @ text @d2 1 a2 1 ## pth.pod -- PTH manual page d6 2 a7 2 ## This file is part of PTH, a non-preemptive thread scheduling library ## which can be found at http://www.gnu.org/software/pth/. d29 1 a29 1 B - GNU Portable Threads d33 1 a33 1 GNU pth PTH_VERSION_STR d156 1 a156 1 B is a very portable POSIX/ANSI-C based library for Unix platforms which d171 1 a171 1 Additionally PTH provides an optional emulation API for POSIX.1c threads d216 1 a216 1 understand B only the basic knowledge about threading is actually d219 1 a219 1 you to use B. d381 1 a381 1 =head2 The Compromise of PTH d384 2 a385 2 to avoid their bad aspects? That's the general intention and goal of B. In detail this means that B implements the easy-to-program threads of d392 1 a392 1 B. The following list summarizes a few essential points: d398 1 a398 1 B. d410 1 a410 1 B is d425 1 a425 1 B. d431 1 a431 1 Especially this means that under B more existing third-party libraries d437 1 a437 1 B runs on mostly all types of Unix kernels, because the kernel does not even recognize the B threads (because they are d451 2 a452 2 To better understand the B API it is reasonable to first understand the life cycle of a thread in the B threading system. It can be illustrated d497 1 a497 1 In the following the B I (API) is d510 1 a510 1 This initializes the B library. It has to be the first B API d518 1 a518 1 This kills the B library. It should be the last B API function call d526 1 a526 1 This is a generalized query/control function for the B library. The d578 1 a578 1 patchlevel=2, etc). For instance PTH version 1.0b1 is encoded as 0x100101. d587 1 a587 1 Attribute objects are used in B for two things: First stand-alone/unbound d712 1 a712 1 the B library. d760 1 a760 1 a chance to execute, too. This obviously is the cooperating part of B.
d773 1 a773 1 of the non-preemptive nature of B it can last longer (when another thread d800 1 a800 1 ``C'' at one of its cancellation points. In PTH d833 1 a833 1 prerequisite to use it together with the B library. d844 1 a844 1 B supports POSIX style thread cancellation via pth_cancel(3) and the d874 1 a874 1 B has a very flexible event facility which is linked into the scheduler d902 1 a902 1 the second additional argument. Keep in mind that the B scheduler doesn't d1144 1 a1144 1 in mind that in a non-preemptive threading system like B this might sound d1295 1 a1295 1 because of the non-preemptive scheduling nature of B, it can be awakened d1305 1 a1305 1 non-preemptive scheduling nature of B, it can be awakened later, of d1321 1 a1321 1 This is the PTH thread-related equivalent of POSIX sigprocmask(2) respectively d1323 1 a1323 1 to sigprocmask(2), because B internally just uses sigprocmask(2) here. So d1391 1 a1391 1 intended to show you the look and feel of B. d1471 1 a1471 1 B is very portable because it has only one part which perhaps has to be d1479 1 a1479 1 switched. Additionally the B dispatcher also switches the global Unix d1484 1 a1484 1 The B event manager is mainly select(2) and gettimeofday(2) based, i.e. d1503 1 a1503 1 B uses an explicit API (i.e. for instance you've to use pth_read(3) and d1505 2 a1506 2 because this way B doesn't require any system call wrappers which usually cannot be provided in a portable way. And portability is one of B' major d1509 1 a1509 13 Additionally B currently doesn't provide the standardized POSIX Threading API ("I"), although the B API is very close to it. The reason for this is that B' API is intentionally more flexible. For instance there is no explicit event mechanism in POSIX threads, etc. But it is clear that for portability reasons and easy upgrading of applications a pthread(3) compatible wrapper API for B is required sooner or later.
Development for such a wrapper library has already started, but it will take until B version 1.1 before this additional API can finally be released. Actually this library just maps pth_xxx() functions to pthread_xxx() functions and tries to emulate the POSIX return value semantics. Finally B (intentionally) provides no replacements for thread-safe d1522 1 a1522 1 The B library was designed and implemented between February and June 1999 d1528 1 a1528 1 B was then implemented in order to combine the I approach @ 1.68 log @*** empty log message *** @ text @d107 11 a140 1 pth_fork, d1080 60 a1288 9 =item pid_t B(void) This is a variant of fork(2) with the difference that only the current thread is forked into a separate process, i.e. in the parent process nothing changes while in the child process all threads are gone except for the scheduler and the calling thread. When you really want to duplicate all threads in the current process you should use fork(2) directly. But this is usually not reasonable. @ 1.67 log @*** empty log message *** @ text @d82 2 d940 23 @ 1.66 log @*** empty log message *** @ text @d29 1 a29 1 B - GNU Portable Threads d33 1 a33 1 PTH PTH_VERSION_STR @ 1.65 log @*** empty log message *** @ text @d144 1 a144 1 B is a portable POSIX/ANSI-C based library for Unix platforms which d158 4 @ 1.64 log @*** empty log message *** @ text @d7 1 a7 1 ## which can be found at http://www.engelschall.com/sw/pth/. @ 1.63 log @*** empty log message *** @ text @d29 1 a29 1 B - Bon-B

reemptive Thread Bcheduling Library @ 1.62 log @*** empty log message *** @ text @d494 1 a494 1 This initializes the B library. It has to be the first B API @ 1.61 log @*** empty log message *** @ text @d1467 7 a1476 3 ``comp.programming.threads Frequently Asked Questions (F.A.Q.)'', http://www.lambdacs.com/newsgroup/FAQ.html @ 1.60 log @*** empty log message *** @ text @d557 1 a557 1 =item int B(void); @ 1.59 log @*** empty log message *** @ text @d1 1 a1 1 @ 1.58 log @*** empty log message *** @ text @d1357 5 a1361 2 attr = pth_attr("ticker", 0, 0, 32*1024, FALSE); pth_spawn(&attr, ticker, NULL); d1372 1 a1372 1 attr = pth_attr("handler", 0, PTH_FLAG_NOJOIN, 32*1024, NULL); d1375 1 a1375 1 pth_spawn(&attr, handler, (void *)sw); d1464 36 @ 1.57 log @*** empty log message *** @ text @d43 2 a44 1 pth_ctrl. d556 10 @ 1.56 log @*** empty log message *** @ text @d586 1 a586 1 C and C and d829 1 a829 1 C and C. @ 1.55 log @*** empty log message *** @ text @d718 8 a725 6 This function raises a signal to thread I. Currently this functionality of sending a signal to just a particular thread is still not implemented, so usually this function always returns C with errno set to C. But when I is 0 POSIX style thread checking is at least possible, i.e. ``C'' returns C when thread I exists in the B system. @ 1.54 log @*** empty log message *** @ text @a58 2 pth_sigmask, pth_sigraise, d63 1 d131 1 d716 1 a716 9 =item int B(int I, const sigset_t *I, sigset_t *I) This is the PTH thread-related equivalent of POSIX sigprocmask(2). The arguments I, I and I directly relate to sigprocmask(2), because B internally just uses sigprocmask(2) here. So alternatively you can also directly call sigprocmask(2), but for consistency reasons you should use this function pth_sigmask(3). =item int B(pth_t I, int I) d722 1 a722 1 ``C'' returns C when thread I exists in the d1215 8 @ 1.53 log @*** empty log message *** @ text @d625 2 a626 2 The scheduling state of the thread, i.e.
either C, C, C, or C @ 1.52 log @*** empty log message *** @ text @d45 9 a55 1 pth_attr, a58 1 pth_priority, a64 1 pth_detach, d71 2 a72 1 pth_time. d558 124 a688 17 =item pth_attr_t B(char *I, int I, unsigned int I, unsigned int I, void *I); This is a constructor for C structures which can be used for the first argument of pth_spawn(3) when it's not C. I is a string assigned to the thread which is mainly intended for debugging. I is the priority of the thread ranging from C to C; the default is C. I can be either C (no flags) or C (indicates that the thread cannot be joined, i.e. after termination it's immediately kicked out of the system instead of inserted into the dead queue). I is the size in bytes of the thread's stack. Use lower values than 32768 (32KB) with care. Finally I can be a dynamically pre-allocated chunk of memory (minimum I in length!) which should be used for the stack (when the thread terminates a free(3) is done). When I is C (the usual case) then the stack is allocated automatically by the B library. a715 6 =item int B(pth_t I, int I); This overrides the priority of the thread I with I. The current priority of a thread can be obtained via ``C''. a781 7 =item int B(pth_t I); This function is used to indicate to the implementation that storage for the thread I can be reclaimed when that thread terminates, i.e. it just detaches the thread by marking it unjoinable (see pth_attr(3) and C). @ 1.51 log @*** empty log message *** @ text @a50 1 pth_equal, a600 6 =item int B(pth_t I, pth_t I); This compares two thread handles and returns C when they are equal, i.e. when they describe the same thread. For portability reasons do not compare C variables directly via ``C<==>''.
@ 1.50 log @*** empty log message *** @ text @d54 1 d622 9 @ 1.49 log @*** empty log message *** @ text @d977 1 a977 1 =item int B(pth_rwlock_t *I, int I, pth_event_t I); d984 2 a985 1 the locking timeout, etc, @ 1.48 log @*** empty log message *** @ text @d955 1 a955 1 =item int B(pth_mutex_t *I, pth_event_t I); d963 2 @ 1.47 log @*** empty log message *** @ text @d854 1 a854 1 =item void B(pth_event_t I, int I); @ 1.46 log @*** empty log message *** @ text @d586 1 a586 1 =item void B(pth_once_t *I, void (*I)(void *), void *I); d611 2 a612 1 priority of a thread can be obtained via ``C''. d634 1 a634 1 =item void B(pth_time_t I); @ 1.45 log @*** empty log message *** @ text @d53 1 d612 8 @ 1.44 log @*** empty log message *** @ text @d258 1 a258 1 =item B B and B functions d273 9 d1113 4 a1116 1 the pth_sigwait() call. @ 1.43 log @*** empty log message *** @ text @d913 1 a913 1 =head2 B @ 1.42 log @*** empty log message *** @ text @d1274 12 @ 1.41 log @*** empty log message *** @ text @d97 1 a97 1 =item B d100 8 a107 3 pth_mutex_lock, pth_mutex_unlock, pth_mutex_holder. d913 1 a913 1 =head2 B d916 5 a920 5 locks. Keep in mind that in a non-preemptive threading system like B this might sound unnecessary at first glance, because a thread isn't interrupted by the system. Actually when you have a critical code section which doesn't contain any pth_xxx() functions, you don't need any mutex to protect it, of course. d932 1 a932 1 This dynamically initializes a mutex variable of type ``C''. d936 1 a936 1 =item int B(pth_mutex_t *I, int I, pth_event_t I); d938 6 a943 8 This acquires a mutex I. When I is C an acquisition of the mutex is only attempted. When it is already locked C is returned immediately. When I is C and the mutex is already locked the thread's execution is suspended until the mutex is unlocked again or additionally the extra events in I occurred (when I is not C). Recursive locking is supported, i.e.
a thread is allowed to acquire a mutex more than once before it's released. But it then also has to be released the same number of times until the mutex is again lockable by others. d947 1 a947 1 This decrements the recursion count on I and when it is zero it d950 1 a950 1 =item pth_t B(pth_mutex_t *I); d952 38 a989 2 This returns the thread id of the holder of mutex I. C is returned when I is currently not acquired by a thread. @ 1.40 log @*** empty log message *** @ text @d101 2 a102 1 pth_mutex_unlock. d946 5 @ 1.39 log @*** empty log message *** @ text @d907 1 a907 1 =head2 B @ 1.38 log @*** empty log message *** @ text @d97 6 d904 41 @ 1.37 log @*** empty log message *** @ text @d57 1 d640 7 @ 1.36 log @*** empty log message *** @ text @d1 1 a1 1 ## d704 1 a704 1 ``C again the cancellation request is d692 4 a695 4 C and C. C is the default state where cancellation is possible but only at cancellation points. Use C to completely disable cancellation for a thread and @ 1.34 log @*** empty log message *** @ text @d56 1 d65 5 d626 14 d675 30 @ 1.33 log @*** empty log message *** @ text @a1134 8 =head1 SEE ALSO pth-config(1), pthread(3). sigstack(2), sigaltstack(2), sigaction(2), sigemptyset(2), sigaddset(2), sigprocmask(2). sigsuspend(2), sigsetjmp(3), siglongjmp(3), setjmp(3), longjmp(3), select(2), gettimeofday(2). d1155 8 @ 1.32 log @*** empty log message *** @ text @d118 2 a119 2 ("multi-threading") inside server applications. All threads run in the same address space of the server application, but each thread has its own d123 31 a153 27 are managed by a priority- and event-based non-preemptive scheduler. The intention is that this way one can achieve better portability and run-time performance than with preemptive scheduling.
The event facility allows threads to wait until various types of events occur, including pending I/O on filedescriptors, asynchronous signals, elapsed timers, pending I/O on message ports, thread and process termination, and even customized callback functions. =head2 Background When programming server type applications, lots of regular jobs and one-shot requests have to processed in parallel. To achieve this in an efficient way on uniprocessor machines the idea of multitasking is implemented by the operation system which can be used by the applications to spawn multiple instances of itself. On Unix the kernel implements multitasking in a preemptive and priority-based way through heavy-weight processes spawned with fork(2). These processes do usually I share a common address space. Instead they are clearly separated from each other and were created by direct cloning a process address space. The drawbacks are obvious: Sharing data is complicated and can usually only solved in an efficient way through shared memory (which itself is not very portable). Synchronization is complicated because of the preemptive nature of the Unix scheduler. The machine resources can be exhausted very quickly when the server application has to serve too much longer running requests occur (heavy-weight processes cost memory). Additionally when for each request a sub-process is spawned to handle it, the server performance is horrible (heavy-weight processes cost time to spawn). And finally the server d155 100 a254 13 problems. Lot's of tricks are done in practice to overcome these problems (ranging from pre-forked sub-process pools to semi-serialized processing, etc). Nevertheless one the most elegant ways to solve the resource and data sharing problems is to have multiple I threads of execution inside a single (heavy-weight) process, i.e. to use multithreading. But those light-weight processes are not supported by all Unix kernels. 
And even where kernel threads exists, the thread context switching is usually still too expensive. So the usual way to solve this is to implement user-land threads where the process is split into multiple threads of execution by the application itself and without the knowledge of the kernel and where context switches can be done faster. d258 1 a258 1 User-land threads can be implemented in in various way. The two classical d263 3 a265 1 =item B<1. Matrix-based explicit dispatching between small units of execution:> d267 1 a267 1 Here the global procedures of the server application are split into small d272 2 a273 2 units by calling one function after each other controlled by this matrix. The treads are created by more than one jump-trail through this matrix and by d275 1 a275 2 events. Examples of this is the I B/B based B web server or the B web proxy server. d279 9 a287 9 matrix and the scheduling is done explicitly by the application itself) and that it's very portable (because the matrix is just an ordinary data structure and functions are a standard feature of ANSI C). The disadvantage of this approach is that it's complicated to write large server applications with this approach, because in large applications one quickly get hundreds of execution units and the control flow inside such an application is very hard to understand (because it's interrupted by function borders and one always has to use the global matrix to follow it). d289 2 a290 2 saves memory it's often nasty because one cannot switch between threads in the middle of a function. The scheduling borders are function borders. d292 3 a294 1 =item B<2. Queue-based based implicit scheduling between threads of execution:> d303 1 a303 2 synchronization things) doesn't have to care about this. Examples of this approach are the various POSIX thread ("pthread") based server applications. 
d306 3 a308 3 because the control flow of a thread directly follows a procedure without forced interrupts through function borders. Additionally the programming is very similar to the fork(2) approach. d310 2 a311 2 The disadvantage is that although the general performance is increased compared to using approaches with heavy-weight processes, it's decreased d314 1 a314 1 switch costs some overhead even when it's a lot cheaper than a kernel-level d316 3 a318 3 Finally there is no really portable ANSI C & POSIX based way to implement preemptive threads yourself. Either the platform already has threads or one has to hope that some semi-portable package exists for it. And even those d328 1 a328 1 =head2 The Compromise d331 64 a394 4 to avoid their bad aspects? That's the general intention of B. In detail this means that B implements the easy to program threads of execution but in a way which doesn't have the portability side-effects of preemptive scheduling. This means that instead a non-preemptive scheduling is used. d398 3 a400 3 To better understand the B API its reasonable to first understand the life cycle of a thread in the B system. It can be illustrated with the following graph: d405 3 a407 3 +--> READY---+ | ^ | | | V d409 3 a411 3 | V DEAD d413 1 a413 1 When a new thread is created it is moved into the B queue of the d416 1 a416 1 want to perform a CPU burst. They are queued in priority order. Per d418 2 a419 2 priority only. The assigned queue priority for all remaining threads every time is increased by one to prevent thread starvation. d422 7 a428 6 thread (there is always just one B thread, of course). After this thread yields execution (either explicitly or implicitly by calling a function which would block) there are three possibilities: Either it has terminated, then it's moved to the B queue, or it has events on which it wants to wait, then its moved into the B queue. Else it is assumed it wants to perform more CPU bursts and enters the B queue again. 
d432 1 a432 1 occurred, it is immediately moved to the B queue, too. d436 5 a440 4 scheduler and the scheduler invokes a thread. The purpose of the B queue is to support thread joining. When a thread is marked as unjoinable, it is directly kicked out of the system after it has terminated. But when it is joinable, it enters the B queue. There it remains until another thread joins it. d444 3 a446 2 In the following, the B Application Programmer's Interface (API) is discussed in detail. d582 1 a582 1 priority of a thread can be obtained via ``pth_ctrl(PTH_CTRL_GETPRIO, tid)''. a1002 32 =head1 IMPLEMENTATION NOTES B is very portable because it has only one part which perhaps has to be ported to new platforms (the machine context initialization). But it is written in a way which works on almost all Unix platforms which support sigstack(2) or sigaltstack(2) [see C for details]. Any other code is plain POSIX and ANSI C based. The context switching is done via POSIX [sig]setjmp(3) and [sig]longjmp(3). Here all CPU registers, the program counter and the stack pointer are switched. Additionally, the B dispatcher also switches the global Unix C variable [see C for details] and the signal mask (either implicitly via sigsetjmp(3) or in an emulated way via explicit sigprocmask(2) calls). The B event manager is mainly select(2) and gettimeofday(2) based, i.e. the current time is fetched via gettimeofday(2) once per context switch for calculations, and both the time and all I/O events are implemented via a single select(2) call [see C for details]. The thread control block management is done via queues without any additional data structure overhead. For this, the queue linkage attributes are part of the thread control blocks, and the queues are actually implemented as rings with a selected element as the entry point [see C and C for details]. Most time-critical sections (especially the dispatcher and the event manager) are sped up by inlined functions (implemented as ANSI C macros).
Additionally, any debugging code is I removed from the source when not built with C<-DPTH_DEBUG> (see C<--enable-debug>), i.e. not only stub functions remain [see C for details]. d1084 7 a1090 1 =head1 BUGS d1092 23 a1114 1 No real bugs currently known. d1118 2 a1119 2 B uses an explicit API (i.e. for instance you have to use pth_read() and cannot just use read()) which might be nasty for some users. The reason is d1124 10 a1133 10 Finally, B (intentionally) provides no replacements for non-reentrant functions (e.g. strtok(3), which uses a static internal buffer) or synchronous system functions (e.g. gethostbyname(3), which doesn't provide an asynchronous mode where it doesn't block). When you want to use those functions in your server application together with threads, you have to either link the application against special third-party libraries (or, for reentrant functions, possibly against an existing C of the platform vendor). For an asynchronous DNS resolver library use either the new C from B ( see ftp://ftp.isc.org/isc/bind/ ) or the forthcoming GNU B package from Ian Jackson ( see http://www.gnu.org/software/adns/adns.html ). d1160 3 a1162 3 test version of Apache. The concept and API of message ports were borrowed from AmigaOS' B. The concept and idea for the flexible event mechanism came from I's B (part of B). @ 1.31 log @*** empty log message *** @ text @d92 1 d105 1 d696 8 d785 1 a785 1 This is a variant of the POSIX waitpid(2) function. It suspends the d791 8 @ 1.30 log @*** empty log message *** @ text @d123 4 a126 4 performance than with preemptive scheduling. The event facility allows threads to wait until various types of events occur, including pending I/O on filedescriptors, elapsed timers, pending I/O on message ports, thread and process termination, and even customized callback functions. @ 1.29 log @*** empty log message *** @ text @d118 1 a118 1 individual run-time stack and program-counter.
@ 1.28 log @*** empty log message *** @ text @d524 1 a524 1 =item C d526 9 a534 6 This is a signal event. The additional argument has to be a signal number (``CI''). This event waits until the signal is pending. Keep in mind that the B scheduler doesn't block signals itself. So when you want to wait for a signal with this event, you have to block it via sigprocmask(2) or it will be delivered without your notice. Example: ``C''. @ 1.27 log @*** empty log message *** @ text @d593 3 a595 1 returns this newly reached event. @ 1.26 log @*** empty log message *** @ text @d66 1 a66 1 pth_event d114 2 a115 2 B is a maximally portable POSIX/ANSI-C based library for Unix platforms which provides non-preemptive scheduling for multiple threads of execution d382 5 a386 1 with care. d423 1 a423 1 This overrides the priority of the thread I with I. The current d454 7 a460 7 the various pth_event_xxx() functions). It's modeled like select(2), i.e. one gives this function one or more events (in the event ring specified by I) on which the current thread wants to wait. The scheduler wakes the thread when one or more of them occur, after tagging them as occurred. The I argument is a I to an event ring which isn't changed except for the tagging. pth_wait(3) returns the number of occurred events and the application can use pth_event_occurred(3) to test which events occurred. d495 1 a495 1 This is a constructor for a pth_time_t structure which is a convenient d510 65 a574 2 This creates a new event ring consisting of a single event. Its type is specified by I. ???MORE DETAILS??? d579 3 a581 1 and returns I. The end of the argument list has to be a C argument. d585 3 a587 2 This isolates the event I from possibly appended events in the event ring. d591 3 a593 2 This walks to the next or previous event in the event ring I and returns this event. d597 10 a606 5 This checks whether the event I occurred. =item void B(pth_event_t I, int I); This deallocates the event I or all events appended to it.
d827 11 a837 9 ported to new platforms (the machine context initialization). But its written in a way which works on mostly all Unix platforms which support sigstack(2) or sigaltstack(2) [see C for details]. Any other code is plain POSIX and ANSI C based. The context switching is done via POSIX setjmp(3) and longjmp(3). Here all CPU registers, the program counter and the stack pointer are switched. Additionally the B dispatcher switches also the global Unix C variable [see C for details]. d840 1 a840 1 the current time is fetched via gettimeofday(2) on every context switch for d845 1 a845 1 data structure overhead. For this the queue linkage variables are part of the d930 1 a930 1 attr = pth_attr("handler", 0, 0, 32*1024, FALSE); a948 2 Additionally B still lacks support for per-thread signal handling. d964 3 a966 2 sigstack(2), sigaltstack(2), sigaction(2), sigemptyset(2), sigaddset(2), sigprocmask(2). setjmp(3), longjmp(3), select(2), gettimeofday(2). d970 1 a970 1 The B library was designed and implemented between February and May 1999 d973 2 a974 2 I and I related to an experimental (matrix based) non-preemptive C++ scheduler class written by I. @ 1.25 log @*** empty log message *** @ text @d20 1 a20 1 ## License along with this library; if not, write to the Free @ 1.24 log @*** empty log message *** @ text @d371 1 a371 1 =item pth_attr_t B(char *I, int I, unsigned int I, unsigned int I); d447 1 a447 1 =item int B(pth_event_t *I, pth_event_t *I); d450 1 a450 1 the various pth_event_xxx() functions). Its modeled like select(2), i.e. one d452 5 a456 10 I) on which the current thread wants to wait. The scheduler awakes the thread when one ore more of them occurred after moving them from I to I (the second event ring). Both arguments are I to event rings. I is usually just a pointer to a C variable (which needs not to be initialized but can be). When I is specified as C, the scheduler doesn't move the occurred events out of I, i.e. 
the application then has to use pth_event_occurred(3) explicitly on all events in I to find out which one has occurred, but this way it can reuse the I event ring. @ 1.23 log @*** empty log message *** @ text @d92 2 d105 1 d517 1 a517 1 and returns I. d618 16 d705 9 @ 1.22 log @*** empty log message *** @ text @d857 3 a859 3 DNS resolver library use either the new C from B (see ftp://ftp.isc.org/isc/bind/) or the forthcoming GNU B package from Ian Jackson (see http://www.gnu.org/software/adns/adns.html). @ 1.21 log @*** empty log message *** @ text @d873 1 a873 1 I and I related to an experimental (matrix based) d884 1 a884 1 threading library (B) written by I for an ancient @ 1.20 log @*** empty log message *** @ text @d850 1 a850 1 Finally N (intentionally) provides no replacements for non-reentrant @ 1.19 log @*** empty log message *** @ text @d255 7 a261 7 When a new thread is created it is moved into the NEW queue of the scheduler. On the next dispatching the scheduler picks it up from there and moves it to the READY queue. This is a queue containing all threads which want to perform a CPU burst. They are queued in priority order. Per dispatching step, the scheduler always removes the thread with the highest priority only. The assigned queue priority for all remaining threads every time is increased by one to prevent thread starvation. d263 7 a269 7 The thread which was removed from the READY queue is the new RUNNING thread (there is always just one RUNNING thread, of course). After this thread yields execution (either explicitly or implicitly by calling a function which would block) there are three possibilities: Either it has terminated, then it's moved to the DEAD queue, or it has events on which it wants to wait, then its moved into the WAITING queue. Else it is assumed it wants to perform more CPU bursts and enters the READY queue again. d271 3 a273 3 Before the next thread is taken out of the READY queue, the WAITING queue is checked for pending events. 
When one or more events of a thread occured, its immediately moved to the READY queue, too. d275 1 a275 1 The purpose of the NEW queue has to do with the fact that a thread never d277 2 a278 2 scheduler and the scheduler invokes a thread. The purpose of the DEAD queue is to support thread joining. When a thread is marked to be unjoinable, it is d280 1 a280 1 it enter the DEAD queue. There is remains until another thread joins it. @ 1.18 log @*** empty log message *** @ text @d238 44 @ 1.17 log @*** empty log message *** @ text @d806 11 @ 1.16 log @*** empty log message *** @ text @d686 1 a686 1 sigaltstack(2) [see C(pth_event_t *I, pth_event_t *I); d407 2 a408 2 I to I (the second event ring). Both arguments are I to event rings. I is usually just a pointer to a d411 1 a411 1 When I is specified as C, the scheduler doesn't move the d482 1 a482 1 =item int B(pth_event_t I); @ 1.13 log @*** empty log message *** @ text @d70 1 a70 1 pth_event_occured, d153 1 a153 1 leight-weight processes are not supported by all Unix kernels. And even where d176 1 a176 1 switching between these jump-trails controlled by corresponding occured d188 1 a188 1 quickly get hundrets of execution units and the control flow inside such an d254 1 a254 1 begin of the main() function of the application. This implicity spawns the d346 1 a346 1 complete processo and not just the current thread. d350 1 a350 1 This is a convienience function which uses a control variable of type d392 1 a392 1 nor that the threas is awakened exactly after I has elapsed. It's d394 1 a394 1 of the non-preemtive nature of B it can last longer (when another thread d406 1 a406 1 awakes the thread when one ore more of them occured after moving them from d412 3 a414 3 occured events out of I, i.e. the application then has to use pth_event_occured(3) explicitly on all events in I to find out which one has occured, but this way it can reuse the I event ring. 
d449 1 a449 1 This is a constructor for a pth_time_t structure which is a convinient d484 1 a484 1 This checks whether the event I occured. d502 1 a502 1 can specifiy a destructor function which is called on the current threads d573 1 a573 1 This is equal to pth_write(3) (see below), but has an addtional event argument d581 1 a581 1 This is equal to pth_read(3) (see below), but has an addtional event argument d589 1 a589 1 This is equal to pth_readline(3) (see below), but has an addtional event d671 1 a671 1 This is a convienience function which is based on pth_read(3). It reads bytes d673 1 a673 1 occured or I is reached. It internally uses thread-local buffering to @ 1.12 log @*** empty log message *** @ text @d252 5 a256 2 This initialized the B library. It has to be the first pth_xxx() call made by an application. d260 6 d268 44 a311 1 ... d313 1 a313 5 This returns a floating point value describing the exponential averaged load of the scheduler. The load is a function from the number of threads in the ready queue of the schedulers dispatching unit. So a load around 1.0 means there is only one ready thread. A higher load means there a more threads ready who want to do CPU bursts. The average load value is adjusted once per second. d319 2 a320 1 The following functions control the threading itself. d324 12 a335 1 =item pth_attr_t B(char *I, int I, unsigned int I, unsigned int I, int I); d339 24 a362 4 This spawns a new thread with the attributes given in I with the start point at routine I. This entry routine is called as I(I) inside the new thread. When I returns an implicit pth_exit(NULL) is done. 
d364 1 a364 1 =item void B(pth_once_t *I, void (*I)(void *), void *I); d366 3 a368 1 =item pth_t B(void); d370 1 a370 1 =item int B(pth_t I, pth_t I); d372 2 a373 1 =item void B(pth_t I, int I); d377 10 d389 35 a423 1 =item int B(pth_event_t *I, pth_event_t *I); d425 1 a425 1 =item int B(pth_t I, void **I); d427 6 a432 1 =item void B(void *I); d438 2 d444 3 d449 3 d456 4 d464 3 d469 3 d474 3 d479 3 d484 2 d488 1 d494 3 d501 4 d507 2 d511 2 d515 2 d521 3 d528 4 d534 3 d539 3 d544 2 d548 2 d552 4 d558 2 d564 5 d573 6 d581 6 d589 6 d599 4 d655 5 a659 5 This is a variant of the 4.2BSD accept(2) function. It accepts a connection on a socket by extracting the first connection request on the queue of pending connections, creating a new socket with the same properties of I and allocates a new file descriptor for the socket (which is returned). For more details about the arguments and return code semantics see accept(2). d663 6 d671 8 a679 1 @ 1.11 log @*** empty log message *** @ text @d394 7 a400 4 This is a variant of 4.3BSD's usleep(3) function. It suspends the current threads execution until I microsecond (= I * 1/1000000s) elapsed. The thread is guarrantied to not awakened before this time, but because of the non-preemptive scheduling nature of B, it can be awakened a lot later. d404 8 d414 7 d423 8 d432 6 @ 1.10 log @*** empty log message *** @ text @d6 1 a6 1 ## This file is part of PTH, a non-preemtive thread scheduling library d112 1 a112 1 which provides non-preemtive scheduling for multiple threads of execution d118 1 a118 1 are managed by a priority- and event-based non-preemtive scheduler. The d120 1 a120 1 performance than with preemtive scheduling. The event facility allows threads d131 1 a131 1 itself. On Unix the kernel implements multitasking in a preemtive and d139 1 a139 1 portable). Synchronization is complicated because of the preemtive nature of d200 1 a200 1 still interrupted - even in the middle of a function. 
Actually in a preemtive d214 1 a214 1 compared to the matrix-approach above. Because the implicit preemtive d217 1 a217 1 context switch) than the explicit cooperative/non-preemtive scheduling. d219 1 a219 1 preemtive threads yourself. Either the platform already has threads or one has d228 1 a228 1 from synchronization and portability problems caused by its preemtive nature. d235 2 a236 2 in a way which doesn't have the portability side-effects of preemtive scheduling. This means that instead a non-preemtive scheduling is used. d383 9 d394 5 d519 1 a519 1 by I after evaluating various (mostly preemtive) thread d522 1 a522 1 non-preemtive C++ scheduler class written by I. d524 1 a524 1 B was then implemented in order to combine the I approach d529 1 a529 1 So the essential idea for the non-preemtive approach was taken over from @ 1.9 log @*** empty log message *** @ text @d98 1 @ 1.8 log @*** empty log message *** @ text @d43 1 a43 2 pth_stat, pth_load. d244 3 d251 3 d256 3 a258 1 =item unsigned int B(int I); d260 5 a264 1 =item float B(void); d467 1 a467 1 sar.sin_family = AF_INET; d469 1 a469 1 sar.sin_port = htons(port); d480 14 d496 4 a499 1 pth-config(1), pthread(3), fork(2). d503 4 a506 4 The B library was designed and implemented between February and May 1999 by I after evaluating various (mostly preemtive) thread libraries and intensive discussions with I, I and I related to an experimental (matrix based) d512 9 a520 11 programming). Some code inspiration also came from an old (never publically released) threading library written by I. Additionally the concept and API of message ports was borrowed from AmigaOS' I after, while the idea for the event mechanism itself came from I's I. So, the essential idea for the non-preemtive approach was take over from I scheduler. The priority based scheduling algorithm was contributed by I. initial. Some hard-core for the machine context switching was borrowed from I's threading library. 
The implementation of B itself was done by I. @ 1.7 log @*** empty log message *** @ text @d43 2 a44 1 pth_stat. d111 5 a115 5 B is a POSIX/ANSI-C based library for Unix platforms which provides non-preemtive scheduling for multiple threads of execution ("multi-threading") inside server applications. All threads runs in the same address space of the server application, but each thread has it's own individual run-time stack and program-counter. d204 2 a205 1 synchronization things) doesn't have to care about this. d235 2 a236 2 in a way which doesn't have the side-effects of preemtive scheduling. This means that instead a non-preemtive scheduling is used. d245 11 a255 1 ... d263 2 d272 18 d294 8 d304 17 d323 12 d337 18 d357 10 d369 18 d390 78 a467 1 ... @ 1.6 log @*** empty log message *** @ text @d112 3 a114 3 inside high-performance server applications. All threads runs in the same address space of the server application, but each thread has it's own individual run-time stack and program-counter. @ 1.5 log @*** empty log message *** @ text @a0 2 ## ==================================================================== ## Copyright (c) 1999 Ralf S. Engelschall. All rights reserved. d2 6 a7 36 ## Redistribution and use in source and binary forms, with or without ## modification, are permitted provided that the following conditions ## are met: ## ## 1. Redistributions of source code must retain the above copyright ## notice, this list of conditions and the following disclaimer. ## ## 2. Redistributions in binary form must reproduce the above copyright ## notice, this list of conditions and the following disclaimer in ## the documentation and/or other materials provided with the ## distribution. ## ## 3. All advertising materials mentioning features or use of this ## software must display the following acknowledgment: ## "This product includes software developed by ## Ralf S. Engelschall ." ## ## 4. 
Redistributions of any form whatsoever must retain the following ## acknowledgment: ## "This product includes software developed by ## Ralf S. Engelschall ." ## ## THIS SOFTWARE IS PROVIDED BY RALF S. ENGELSCHALL ``AS IS'' AND ANY ## EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE ## IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR ## PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL RALF S. ENGELSCHALL OR ## ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, ## SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT ## NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; ## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ## HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, ## STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ## ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED ## OF THE POSSIBILITY OF SUCH DAMAGE. ## ==================================================================== d9 14 a22 1 ## pth.pod -- PTH manpage d29 1 a29 1 B - Bon-B

reemtive Bcheduling Library @ 1.4 log @*** empty log message *** @ text @d48 1 a48 1 B - Non-Preemtive Scheduling Library d56 70 a125 15 B is a POSIX/ANSI-C based library for Unix platforms which provides non-preemtive scheduling for multiple threads of execution ("I") inside high-performance server applications. All threads runs in the same address space of the server application, but each thread has it's own individual run-time stack and program-counter. The API is very similar to the POSIX threads API ("I"), i.e. one can spawn and join threads. But the thread scheduling itself is done in a cooperative instead of the usual preemtive way. The threads are managed by a priority- and event-based non-preemtive scheduler. The intention is to achieve this way better portability and run-time performance. The event facility allows threads to wait until various types of events occur, including filedescriptor I/O, elapsed timers, raised signals, message port I/O, thread and process termination, etc. d129 14 d152 2 a153 1 clearly separated from each other. d158 2 a159 2 the Unix scheduler. The machine resources can be exhausted very quickly when the server application has to serve too much one-shot requests at once d165 2 a166 1 (ranging from pre-forked sub-process pools to semi-serialized processing). d168 9 a176 7 Nevertheless the most elegant way to solve the resource and data sharing problems would be to have multiple I threads of execution inside a (heavy-weight) process, i.e. to use multithreading. But those leight-weight processes are not supported by all Unix kernels. So the usual way to solve this is to implement user-land threads where the process is split into multiple threads of execution by the application itself and without the knowledge of the kernel. 
d181 1 a181 1 approaches exists: d185 1 a185 1 =item B<1.> d187 3 a189 4 B Here the global procedures of the server application are split into small execution units (each has to run maximal a few milliseconds) and those units are implemented by separate program functions. Then a global matrix is created d193 4 a196 2 Examples of this is the iMatix Libero/SMT based xitami server or the Squid web proxy server. d199 4 a202 4 possible (because one can fine-tune the threads of execution because the scheduling is done explicitly by the application itself) and that it's very portable (because the matrix is just an ordinary data structure and functions are a standard feature of ANSI C). d209 14 a222 12 =item B<2.> B Here the idea is that one programs as with fork(2)'ed processes, i.e. one spawns a thread of execution and this runs from the begin to the end without an interrupted control flow. But the execution control is interrupted, of course. Actually in a preemtive way similar to what the kernel does for the heavy-weight processes, i.e. every few milliseconds the scheduler switches between the threads of execution. But the thread itself doesn't recognize this and usually (except for synchronization things) doesn't have to care about this. d226 2 a227 2 forced interrupts. Additionally the programming is very similar to the fork(2) approach. d235 1 a235 3 Additionally one more side-effect of this preemtive approach is that one large procedures via implicit preemtion (e.g. POSIX threads). And finally there is no really portable ANSI C & POSIX based way to implement d247 1 a247 1 =head1 The Compromise d250 1 a250 1 to avoid their bad aspects? That's The general intention of B. In detail d252 4 a255 2 in a way which doesn't have the side-effects. This means that instead of preemtive scheduling a non-preemtive scheduling is used. 
d257 4 a260 1 =head1 FUNCTIONS d264 28 d302 5 a306 5 The B library was designed between February 1999 and May 1999 by I after evaluating various (mostly preemtive) thread libraries and intensive discussions with I, I and I related to an experimental (matrix based) non-preemtive C++ scheduler class written by I. @ 1.3 log @*** empty log message *** @ text @d204 5 a208 2 programming). Additional code inspiration came from an old (never publically released) threading library written by I. d213 1 a213 1 context switching was borrowed from I's threading library. The @ 1.2 log @*** empty log message *** @ text @d195 5 a199 7 The B library was written between February 1999 and May 1999 by Ralf S. Engelschall. It was inspired by an experimental (matrix based) non-preemtive C++ scheduler class written by Peter Simons and a thread-package by Robert S. Tau. Ralf S. Engelschall combined the non-preemtive approach with the popular idea of threads of executions one can found in POSIX thread libraries after receiving excellent hints from Peter Simons, Martin Kraemer and Lars Eilebrecht. d201 11 a211 5 The non-preemtive nature was takeb over from Peter Simons. The priority based scheduling algorithm was contributed by Martin Kraemer. So the original intention of B was to combine the speed and simplicity of matrix based dispatching libraries with the programming idea of multiple threads of execution from the preemtive POSIX threading libraries. @ 1.1 log @Initial revision @ text @d48 1 a48 1 B - Non-Preemtive Scheduler d56 15 a70 1 ... @ 1.1.1.1 log @Import of PTH into CVS @ text @@