[Linux] Read vDSO range from /proc/PID/task/PID/maps instead of /proc/PID/maps
[deliverable/binutils-gdb.git] / gdb / linux-nat.c

/* GNU/Linux native-dependent code common to multiple platforms.

   Copyright (C) 2001-2016 Free Software Foundation, Inc.

   This file is part of GDB.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */

#include "defs.h"
#include "inferior.h"
#include "infrun.h"
#include "target.h"
#include "nat/linux-nat.h"
#include "nat/linux-waitpid.h"
#include "gdb_wait.h"
#include <unistd.h>
#include <sys/syscall.h>
#include "nat/gdb_ptrace.h"
#include "linux-nat.h"
#include "nat/linux-ptrace.h"
#include "nat/linux-procfs.h"
#include "nat/linux-personality.h"
#include "linux-fork.h"
#include "gdbthread.h"
#include "gdbcmd.h"
#include "regcache.h"
#include "regset.h"
#include "inf-child.h"
#include "inf-ptrace.h"
#include "auxv.h"
#include <sys/procfs.h>		/* for elf_gregset etc.  */
#include "elf-bfd.h"		/* for elfcore_write_* */
#include "gregset.h"		/* for gregset */
#include "gdbcore.h"		/* for get_exec_file */
#include <ctype.h>		/* for isdigit */
#include <sys/stat.h>		/* for struct stat */
#include <fcntl.h>		/* for O_RDONLY */
#include "inf-loop.h"
#include "event-loop.h"
#include "event-top.h"
#include <pwd.h>
#include <sys/types.h>
#include <dirent.h>
#include "xml-support.h"
#include <sys/vfs.h>
#include "solib.h"
#include "nat/linux-osdata.h"
#include "linux-tdep.h"
#include "symfile.h"
#include "agent.h"
#include "tracepoint.h"
#include "buffer.h"
#include "target-descriptions.h"
#include "filestuff.h"
#include "objfiles.h"
#include "nat/linux-namespaces.h"
#include "fileio.h"

#ifndef SPUFS_MAGIC
#define SPUFS_MAGIC 0x23c9b64e
#endif

/* This comment documents high-level logic of this file.

Waiting for events in sync mode
===============================

When waiting for an event in a specific thread, we just use waitpid,
passing the specific pid, and not passing WNOHANG.

When waiting for an event in all threads, waitpid is not quite good:

- If the thread group leader exits while other threads in the thread
  group still exist, waitpid(TGID, ...) hangs.  That waitpid won't
  return an exit status until the other threads in the group are
  reaped.

- When a non-leader thread execs, that thread just vanishes without
  reporting an exit (so we'd hang if we waited for it explicitly in
  that case).  The exec event is instead reported to the TGID pid.

The solution is to always use -1 and WNOHANG, together with
sigsuspend.

First, we use non-blocking waitpid to check for events.  If nothing is
found, we use sigsuspend to wait for SIGCHLD.  When SIGCHLD arrives,
it means something happened to a child process.  As soon as we know
there's an event, we get back to calling nonblocking waitpid.

Note that SIGCHLD should be blocked between waitpid and sigsuspend
calls, so that we don't miss a signal.  If SIGCHLD arrives in between,
when it's blocked, the signal becomes pending and sigsuspend
immediately notices it and returns.  (An illustrative sketch of this
pattern follows this comment.)

Waiting for events in async mode (TARGET_WNOHANG)
=================================================

In async mode, GDB should always be ready to handle both user input
and target events, so neither blocking waitpid nor sigsuspend are
viable options.  Instead, we should asynchronously notify the GDB main
event loop whenever there's an unprocessed event from the target.  We
detect asynchronous target events by handling SIGCHLD signals.  To
notify the event loop about target events, the self-pipe trick is used
--- a pipe is registered as waitable event source in the event loop,
the event loop select/poll's on the read end of this pipe (as well on
other event sources, e.g., stdin), and the SIGCHLD handler writes a
byte to this pipe.  This is more portable than relying on
pselect/ppoll, since on kernels that lack those syscalls, libc
emulates them with select/poll+sigprocmask, and that is racy
(a.k.a. plain broken).

Obviously, if we fail to notify the event loop if there's a target
event, it's bad.  OTOH, if we notify the event loop when there's no
event from the target, linux_nat_wait will detect that there's no real
event to report, and return event of type TARGET_WAITKIND_IGNORE.
This is mostly harmless, but it will waste time and is better avoided.

The main design point is that every time GDB is outside linux-nat.c,
we have a SIGCHLD handler installed that is called when something
happens to the target and notifies the GDB event loop.  Whenever GDB
core decides to handle the event, and calls into linux-nat.c, we
process things as in sync mode, except that we never block in
sigsuspend.

While processing an event, we may end up momentarily blocked in
waitpid calls.  Those waitpid calls, while blocking, are guaranteed to
return quickly.  E.g., in all-stop mode, before reporting to the core
that an LWP hit a breakpoint, all LWPs are stopped by sending them
SIGSTOP, and synchronously waiting for the SIGSTOP to be reported.
Note that this is different from blocking indefinitely waiting for the
next event --- here, we're already handling an event.

Use of signals
==============

We stop threads by sending a SIGSTOP.  The use of SIGSTOP instead of another
signal is not entirely significant; we just need for a signal to be delivered,
so that we can intercept it.  SIGSTOP's advantage is that it can not be
blocked.  A disadvantage is that it is not a real-time signal, so it can only
be queued once; we do not keep track of other sources of SIGSTOP.

Two other signals that can't be blocked are SIGCONT and SIGKILL.  But we can't
use them, because they have special behavior when the signal is generated -
not when it is delivered.  SIGCONT resumes the entire thread group and SIGKILL
kills the entire thread group.

A delivered SIGSTOP would stop the entire thread group, not just the thread we
tkill'd.  But we never let the SIGSTOP be delivered; we always intercept and
cancel it (by PTRACE_CONT without passing SIGSTOP).

We could use a real-time signal instead.  This would solve those problems; we
could use PTRACE_GETSIGINFO to locate the specific stop signals sent by GDB.
But we would still have to have some support for SIGSTOP, since PTRACE_ATTACH
generates it, and there are races with trying to find a signal that is not
blocked.

Exec events
===========

The case of a thread group (process) with 3 or more threads, and a
thread other than the leader execs is worth detailing:

On an exec, the Linux kernel destroys all threads except the execing
one in the thread group, and resets the execing thread's tid to the
tgid.  No exit notification is sent for the execing thread -- from the
ptracer's perspective, it appears as though the execing thread just
vanishes.  Until we reap all other threads except the leader and the
execing thread, the leader will be zombie, and the execing thread will
be in `D (disc sleep)' state.  As soon as all other threads are
reaped, the execing thread changes its tid to the tgid, and the
previous (zombie) leader vanishes, giving place to the "new"
leader.  */
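
/* Illustrative sketch (not part of GDB; kept under "#if 0" and never
   compiled): the sync-mode pattern described above, written with only
   standard calls.  SIGCHLD is blocked around the waitpid/sigsuspend
   pair, so a SIGCHLD that arrives in between is left pending and
   sigsuspend returns immediately.  The function name and the
   HANDLE_EVENT callback are hypothetical.  */

#if 0
static void
example_sync_wait_loop (void (*handle_event) (int pid, int status))
{
  sigset_t chld_mask, prev_mask, suspend_mask_example;
  int pid, status;

  /* Block SIGCHLD, saving the previous mask.  The mask used inside
     sigsuspend is the previous mask with SIGCHLD removed.  */
  sigemptyset (&chld_mask);
  sigaddset (&chld_mask, SIGCHLD);
  sigprocmask (SIG_BLOCK, &chld_mask, &prev_mask);
  suspend_mask_example = prev_mask;
  sigdelset (&suspend_mask_example, SIGCHLD);

  while (1)
    {
      /* Non-blocking check across all children, including clones.  */
      pid = waitpid (-1, &status, __WALL | WNOHANG);

      if (pid > 0)
	handle_event (pid, status);
      else if (pid == 0)
	{
	  /* Nothing ready.  Atomically unblock SIGCHLD and sleep; a
	     pending SIGCHLD makes sigsuspend return at once.  */
	  sigsuspend (&suspend_mask_example);
	}
      else if (errno == ECHILD)
	break;			/* No children left to wait for.  */
      else if (errno != EINTR)
	break;			/* Unexpected waitpid error.  */
    }

  /* Restore the original signal mask.  */
  sigprocmask (SIG_SETMASK, &prev_mask, NULL);
}
#endif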

#ifndef O_LARGEFILE
#define O_LARGEFILE 0
#endif

/* Does the current host support PTRACE_GETREGSET?  */
enum tribool have_ptrace_getregset = TRIBOOL_UNKNOWN;

/* The single-threaded native GNU/Linux target_ops.  We save a pointer for
   the use of the multi-threaded target.  */
static struct target_ops *linux_ops;
static struct target_ops linux_ops_saved;

/* The method to call, if any, when a new thread is attached.  */
static void (*linux_nat_new_thread) (struct lwp_info *);

/* The method to call, if any, when a new fork is attached.  */
static linux_nat_new_fork_ftype *linux_nat_new_fork;

/* The method to call, if any, when a process is no longer
   attached.  */
static linux_nat_forget_process_ftype *linux_nat_forget_process_hook;

/* Hook to call prior to resuming a thread.  */
static void (*linux_nat_prepare_to_resume) (struct lwp_info *);

/* The method to call, if any, when the siginfo object needs to be
   converted between the layout returned by ptrace, and the layout in
   the architecture of the inferior.  */
static int (*linux_nat_siginfo_fixup) (siginfo_t *,
                                       gdb_byte *,
                                       int);

/* The saved to_xfer_partial method, inherited from inf-ptrace.c.
   Called by our to_xfer_partial.  */
static target_xfer_partial_ftype *super_xfer_partial;

/* The saved to_close method, inherited from inf-ptrace.c.
   Called by our to_close.  */
static void (*super_close) (struct target_ops *);

static unsigned int debug_linux_nat;
static void
show_debug_linux_nat (struct ui_file *file, int from_tty,
                      struct cmd_list_element *c, const char *value)
{
  fprintf_filtered (file, _("Debugging of GNU/Linux lwp module is %s.\n"),
                    value);
}

struct simple_pid_list
{
  int pid;
  int status;
  struct simple_pid_list *next;
};
struct simple_pid_list *stopped_pids;

/* Whether target_thread_events is in effect.  */
static int report_thread_events;

/* Async mode support.  */

/* The read/write ends of the pipe registered as waitable file in the
   event loop.  */
static int linux_nat_event_pipe[2] = { -1, -1 };

/* True if we're currently in async mode.  */
#define linux_is_async_p() (linux_nat_event_pipe[0] != -1)

/* Flush the event pipe.  */

static void
async_file_flush (void)
{
  int ret;
  char buf;

  do
    {
      ret = read (linux_nat_event_pipe[0], &buf, 1);
    }
  while (ret >= 0 || (ret == -1 && errno == EINTR));
}

/* Put something (anything, doesn't matter what, or how much) in event
   pipe, so that the select/poll in the event-loop realizes we have
   something to process.  */

static void
async_file_mark (void)
{
  int ret;

  /* It doesn't really matter what the pipe contains, as long we end
     up with something in it.  Might as well flush the previous
     left-overs.  */
  async_file_flush ();

  do
    {
      ret = write (linux_nat_event_pipe[1], "+", 1);
    }
  while (ret == -1 && errno == EINTR);

  /* Ignore EAGAIN.  If the pipe is full, the event loop will already
     be awakened anyway.  */
}
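
/* Illustrative sketch (not GDB's actual SIGCHLD handler, which lives
   further down in this file; kept under "#if 0" and never compiled):
   the self-pipe notification described in the comment at the top of
   the file.  All the handler needs to do is mark the event pipe;
   async_file_mark boils down to a write() on the pipe, which is
   async-signal-safe, and the event loop's select/poll on the read end
   then wakes up.  The handler name is hypothetical.  */

#if 0
static void
example_sigchld_handler (int signo)
{
  /* Only async-signal-safe work belongs in a signal handler.  */
  if (signo == SIGCHLD && linux_is_async_p ())
    async_file_mark ();
}
#endif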

static int kill_lwp (int lwpid, int signo);

static int stop_callback (struct lwp_info *lp, void *data);
static int resume_stopped_resumed_lwps (struct lwp_info *lp, void *data);

static void block_child_signals (sigset_t *prev_mask);
static void restore_child_signals_mask (sigset_t *prev_mask);

struct lwp_info;
static struct lwp_info *add_lwp (ptid_t ptid);
static void purge_lwp_list (int pid);
static void delete_lwp (ptid_t ptid);
static struct lwp_info *find_lwp_pid (ptid_t ptid);

static int lwp_status_pending_p (struct lwp_info *lp);

static int sigtrap_is_event (int status);
static int (*linux_nat_status_is_event) (int status) = sigtrap_is_event;

static void save_stop_reason (struct lwp_info *lp);


/* LWP accessors.  */

/* See nat/linux-nat.h.  */

ptid_t
ptid_of_lwp (struct lwp_info *lwp)
{
  return lwp->ptid;
}

/* See nat/linux-nat.h.  */

void
lwp_set_arch_private_info (struct lwp_info *lwp,
                           struct arch_lwp_info *info)
{
  lwp->arch_private = info;
}

/* See nat/linux-nat.h.  */

struct arch_lwp_info *
lwp_arch_private_info (struct lwp_info *lwp)
{
  return lwp->arch_private;
}

/* See nat/linux-nat.h.  */

int
lwp_is_stopped (struct lwp_info *lwp)
{
  return lwp->stopped;
}

/* See nat/linux-nat.h.  */

enum target_stop_reason
lwp_stop_reason (struct lwp_info *lwp)
{
  return lwp->stop_reason;
}


/* Trivial list manipulation functions to keep track of a list of
   new stopped processes.  */
static void
add_to_pid_list (struct simple_pid_list **listp, int pid, int status)
{
  struct simple_pid_list *new_pid = XNEW (struct simple_pid_list);

  new_pid->pid = pid;
  new_pid->status = status;
  new_pid->next = *listp;
  *listp = new_pid;
}

static int
pull_pid_from_list (struct simple_pid_list **listp, int pid, int *statusp)
{
  struct simple_pid_list **p;

  for (p = listp; *p != NULL; p = &(*p)->next)
    if ((*p)->pid == pid)
      {
        struct simple_pid_list *next = (*p)->next;

        *statusp = (*p)->status;
        xfree (*p);
        *p = next;
        return 1;
      }
  return 0;
}

/* Return the ptrace options that we want to try to enable.  */

static int
linux_nat_ptrace_options (int attached)
{
  int options = 0;

  if (!attached)
    options |= PTRACE_O_EXITKILL;

  options |= (PTRACE_O_TRACESYSGOOD
              | PTRACE_O_TRACEVFORKDONE
              | PTRACE_O_TRACEVFORK
              | PTRACE_O_TRACEFORK
              | PTRACE_O_TRACEEXEC);

  return options;
}

/* Initialize ptrace warnings and check for supported ptrace
   features given PID.

   ATTACHED should be nonzero iff we attached to the inferior.  */

static void
linux_init_ptrace (pid_t pid, int attached)
{
  int options = linux_nat_ptrace_options (attached);

  linux_enable_event_reporting (pid, options);
  linux_ptrace_init_warnings ();
}

static void
linux_child_post_attach (struct target_ops *self, int pid)
{
  linux_init_ptrace (pid, 1);
}

static void
linux_child_post_startup_inferior (struct target_ops *self, ptid_t ptid)
{
  linux_init_ptrace (ptid_get_pid (ptid), 0);
}

/* Return the number of known LWPs in the tgid given by PID.  */

static int
num_lwps (int pid)
{
  int count = 0;
  struct lwp_info *lp;

  for (lp = lwp_list; lp; lp = lp->next)
    if (ptid_get_pid (lp->ptid) == pid)
      count++;

  return count;
}

/* Call delete_lwp with prototype compatible for make_cleanup.  */

static void
delete_lwp_cleanup (void *lp_voidp)
{
  struct lwp_info *lp = (struct lwp_info *) lp_voidp;

  delete_lwp (lp->ptid);
}

/* Target hook for follow_fork.  On entry inferior_ptid must be the
   ptid of the followed inferior.  At return, inferior_ptid will be
   unchanged.  */

static int
linux_child_follow_fork (struct target_ops *ops, int follow_child,
                         int detach_fork)
{
  if (!follow_child)
    {
      struct lwp_info *child_lp = NULL;
      int status = W_STOPCODE (0);
      struct cleanup *old_chain;
      int has_vforked;
      ptid_t parent_ptid, child_ptid;
      int parent_pid, child_pid;

      has_vforked = (inferior_thread ()->pending_follow.kind
                     == TARGET_WAITKIND_VFORKED);
      parent_ptid = inferior_ptid;
      child_ptid = inferior_thread ()->pending_follow.value.related_pid;
      parent_pid = ptid_get_lwp (parent_ptid);
      child_pid = ptid_get_lwp (child_ptid);

      /* We're already attached to the parent, by default.  */
      old_chain = save_inferior_ptid ();
      inferior_ptid = child_ptid;
      child_lp = add_lwp (inferior_ptid);
      child_lp->stopped = 1;
      child_lp->last_resume_kind = resume_stop;

      /* Detach new forked process?  */
      if (detach_fork)
        {
          make_cleanup (delete_lwp_cleanup, child_lp);

          if (linux_nat_prepare_to_resume != NULL)
            linux_nat_prepare_to_resume (child_lp);

          /* When debugging an inferior in an architecture that supports
             hardware single stepping on a kernel without commit
             6580807da14c423f0d0a708108e6df6ebc8bc83d, the vfork child
             process starts with the TIF_SINGLESTEP/X86_EFLAGS_TF bits
             set if the parent process had them set.
             To work around this, single step the child process
             once before detaching to clear the flags.  */

          if (!gdbarch_software_single_step_p (target_thread_architecture
                                               (child_lp->ptid)))
            {
              linux_disable_event_reporting (child_pid);
              if (ptrace (PTRACE_SINGLESTEP, child_pid, 0, 0) < 0)
                perror_with_name (_("Couldn't do single step"));
              if (my_waitpid (child_pid, &status, 0) < 0)
                perror_with_name (_("Couldn't wait vfork process"));
            }

          if (WIFSTOPPED (status))
            {
              int signo;

              signo = WSTOPSIG (status);
              if (signo != 0
                  && !signal_pass_state (gdb_signal_from_host (signo)))
                signo = 0;
              ptrace (PTRACE_DETACH, child_pid, 0, signo);
            }

          /* Resets value of inferior_ptid to parent ptid.  */
          do_cleanups (old_chain);
        }
      else
        {
          /* Let the thread_db layer learn about this new process.  */
          check_for_thread_db ();
        }

      do_cleanups (old_chain);

      if (has_vforked)
        {
          struct lwp_info *parent_lp;

          parent_lp = find_lwp_pid (parent_ptid);
          gdb_assert (linux_supports_tracefork () >= 0);

          if (linux_supports_tracevforkdone ())
            {
              if (debug_linux_nat)
                fprintf_unfiltered (gdb_stdlog,
                                    "LCFF: waiting for VFORK_DONE on %d\n",
                                    parent_pid);
              parent_lp->stopped = 1;

              /* We'll handle the VFORK_DONE event like any other
                 event, in target_wait.  */
            }
          else
            {
              /* We can't insert breakpoints until the child has
                 finished with the shared memory region.  We need to
                 wait until that happens.  Ideal would be to just
                 call:
                 - ptrace (PTRACE_SYSCALL, parent_pid, 0, 0);
                 - waitpid (parent_pid, &status, __WALL);
                 However, most architectures can't handle a syscall
                 being traced on the way out if it wasn't traced on
                 the way in.

                 We might also think to loop, continuing the child
                 until it exits or gets a SIGTRAP.  One problem is
                 that the child might call ptrace with PTRACE_TRACEME.

                 There's no simple and reliable way to figure out when
                 the vforked child will be done with its copy of the
                 shared memory.  We could step it out of the syscall,
                 two instructions, let it go, and then single-step the
                 parent once.  When we have hardware single-step, this
                 would work; with software single-step it could still
                 be made to work but we'd have to be able to insert
                 single-step breakpoints in the child, and we'd have
                 to insert -just- the single-step breakpoint in the
                 parent.  Very awkward.

                 In the end, the best we can do is to make sure it
                 runs for a little while.  Hopefully it will be out of
                 range of any breakpoints we reinsert.  Usually this
                 is only the single-step breakpoint at vfork's return
                 point.  */

              if (debug_linux_nat)
                fprintf_unfiltered (gdb_stdlog,
                                    "LCFF: no VFORK_DONE "
                                    "support, sleeping a bit\n");

              usleep (10000);

              /* Pretend we've seen a PTRACE_EVENT_VFORK_DONE event,
                 and leave it pending.  The next linux_nat_resume call
                 will notice a pending event, and bypasses actually
                 resuming the inferior.  */
              parent_lp->status = 0;
              parent_lp->waitstatus.kind = TARGET_WAITKIND_VFORK_DONE;
              parent_lp->stopped = 1;

              /* If we're in async mode, need to tell the event loop
                 there's something here to process.  */
              if (target_is_async_p ())
                async_file_mark ();
            }
        }
    }
  else
    {
      struct lwp_info *child_lp;

      child_lp = add_lwp (inferior_ptid);
      child_lp->stopped = 1;
      child_lp->last_resume_kind = resume_stop;

      /* Let the thread_db layer learn about this new process.  */
      check_for_thread_db ();
    }

  return 0;
}


static int
linux_child_insert_fork_catchpoint (struct target_ops *self, int pid)
{
  return !linux_supports_tracefork ();
}

static int
linux_child_remove_fork_catchpoint (struct target_ops *self, int pid)
{
  return 0;
}

static int
linux_child_insert_vfork_catchpoint (struct target_ops *self, int pid)
{
  return !linux_supports_tracefork ();
}

static int
linux_child_remove_vfork_catchpoint (struct target_ops *self, int pid)
{
  return 0;
}

static int
linux_child_insert_exec_catchpoint (struct target_ops *self, int pid)
{
  return !linux_supports_tracefork ();
}

static int
linux_child_remove_exec_catchpoint (struct target_ops *self, int pid)
{
  return 0;
}

static int
linux_child_set_syscall_catchpoint (struct target_ops *self,
                                    int pid, int needed, int any_count,
                                    int table_size, int *table)
{
  if (!linux_supports_tracesysgood ())
    return 1;

  /* On GNU/Linux, we ignore the arguments.  It means that we only
     enable the syscall catchpoints, but do not disable them.

     Also, we do not use the `table' information because we do not
     filter system calls here.  We let GDB do the logic for us.  */
  return 0;
}

/* List of known LWPs.  */
struct lwp_info *lwp_list;


/* Original signal mask.  */
static sigset_t normal_mask;

/* Signal mask for use with sigsuspend in linux_nat_wait, initialized in
   _initialize_linux_nat.  */
static sigset_t suspend_mask;

/* Signals to block to make that sigsuspend work.  */
static sigset_t blocked_mask;

/* SIGCHLD action.  */
struct sigaction sigchld_action;

/* Block child signals (SIGCHLD and linux threads signals), and store
   the previous mask in PREV_MASK.  */

static void
block_child_signals (sigset_t *prev_mask)
{
  /* Make sure SIGCHLD is blocked.  */
  if (!sigismember (&blocked_mask, SIGCHLD))
    sigaddset (&blocked_mask, SIGCHLD);

  sigprocmask (SIG_BLOCK, &blocked_mask, prev_mask);
}

/* Restore child signals mask, previously returned by
   block_child_signals.  */

static void
restore_child_signals_mask (sigset_t *prev_mask)
{
  sigprocmask (SIG_SETMASK, prev_mask, NULL);
}

/* Mask of signals to pass directly to the inferior.  */
static sigset_t pass_mask;

/* Update signals to pass to the inferior.  */
static void
linux_nat_pass_signals (struct target_ops *self,
                        int numsigs, unsigned char *pass_signals)
{
  int signo;

  sigemptyset (&pass_mask);

  for (signo = 1; signo < NSIG; signo++)
    {
      int target_signo = gdb_signal_from_host (signo);
      if (target_signo < numsigs && pass_signals[target_signo])
        sigaddset (&pass_mask, signo);
    }
}


/* Prototypes for local functions.  */
static int stop_wait_callback (struct lwp_info *lp, void *data);
static char *linux_child_pid_to_exec_file (struct target_ops *self, int pid);
static int resume_stopped_resumed_lwps (struct lwp_info *lp, void *data);


/* Destroy and free LP.  */

static void
lwp_free (struct lwp_info *lp)
{
  xfree (lp->arch_private);
  xfree (lp);
}

/* Remove all LWPs belonging to PID from the lwp list.  */

static void
purge_lwp_list (int pid)
{
  struct lwp_info *lp, *lpprev, *lpnext;

  lpprev = NULL;

  for (lp = lwp_list; lp; lp = lpnext)
    {
      lpnext = lp->next;

      if (ptid_get_pid (lp->ptid) == pid)
        {
          if (lp == lwp_list)
            lwp_list = lp->next;
          else
            lpprev->next = lp->next;

          lwp_free (lp);
        }
      else
        lpprev = lp;
    }
}

/* Add the LWP specified by PTID to the list.  PTID is the first LWP
   in the process.  Return a pointer to the structure describing the
   new LWP.

   This differs from add_lwp in that we don't let the arch specific
   bits know about this new thread.  Current clients of this callback
   take the opportunity to install watchpoints in the new thread, and
   we shouldn't do that for the first thread.  If we're spawning a
   child ("run"), the thread executes the shell wrapper first, and we
   shouldn't touch it until it execs the program we want to debug.
   For "attach", it'd be okay to call the callback, but it's not
   necessary, because watchpoints can't yet have been inserted into
   the inferior.  */

static struct lwp_info *
add_initial_lwp (ptid_t ptid)
{
  struct lwp_info *lp;

  gdb_assert (ptid_lwp_p (ptid));

  lp = XNEW (struct lwp_info);

  memset (lp, 0, sizeof (struct lwp_info));

  lp->last_resume_kind = resume_continue;
  lp->waitstatus.kind = TARGET_WAITKIND_IGNORE;

  lp->ptid = ptid;
  lp->core = -1;

  lp->next = lwp_list;
  lwp_list = lp;

  return lp;
}

/* Add the LWP specified by PID to the list.  Return a pointer to the
   structure describing the new LWP.  The LWP should already be
   stopped.  */

static struct lwp_info *
add_lwp (ptid_t ptid)
{
  struct lwp_info *lp;

  lp = add_initial_lwp (ptid);

  /* Let the arch specific bits know about this new thread.  Current
     clients of this callback take the opportunity to install
     watchpoints in the new thread.  We don't do this for the first
     thread though.  See add_initial_lwp.  */
  if (linux_nat_new_thread != NULL)
    linux_nat_new_thread (lp);

  return lp;
}

/* Remove the LWP specified by PID from the list.  */

static void
delete_lwp (ptid_t ptid)
{
  struct lwp_info *lp, *lpprev;

  lpprev = NULL;

  for (lp = lwp_list; lp; lpprev = lp, lp = lp->next)
    if (ptid_equal (lp->ptid, ptid))
      break;

  if (!lp)
    return;

  if (lpprev)
    lpprev->next = lp->next;
  else
    lwp_list = lp->next;

  lwp_free (lp);
}

/* Return a pointer to the structure describing the LWP corresponding
   to PID.  If no corresponding LWP could be found, return NULL.  */

static struct lwp_info *
find_lwp_pid (ptid_t ptid)
{
  struct lwp_info *lp;
  int lwp;

  if (ptid_lwp_p (ptid))
    lwp = ptid_get_lwp (ptid);
  else
    lwp = ptid_get_pid (ptid);

  for (lp = lwp_list; lp; lp = lp->next)
    if (lwp == ptid_get_lwp (lp->ptid))
      return lp;

  return NULL;
}

/* See nat/linux-nat.h.  */

struct lwp_info *
iterate_over_lwps (ptid_t filter,
                   iterate_over_lwps_ftype callback,
                   void *data)
{
  struct lwp_info *lp, *lpnext;

  for (lp = lwp_list; lp; lp = lpnext)
    {
      lpnext = lp->next;

      if (ptid_match (lp->ptid, filter))
        {
          if ((*callback) (lp, data) != 0)
            return lp;
        }
    }

  return NULL;
}

/* Update our internal state when changing from one checkpoint to
   another indicated by NEW_PTID.  We can only switch single-threaded
   applications, so we only create one new LWP, and the previous list
   is discarded.  */

void
linux_nat_switch_fork (ptid_t new_ptid)
{
  struct lwp_info *lp;

  purge_lwp_list (ptid_get_pid (inferior_ptid));

  lp = add_lwp (new_ptid);
  lp->stopped = 1;

  /* This changes the thread's ptid while preserving the gdb thread
     num.  Also changes the inferior pid, while preserving the
     inferior num.  */
  thread_change_ptid (inferior_ptid, new_ptid);

  /* We've just told GDB core that the thread changed target id, but,
     in fact, it really is a different thread, with different register
     contents.  */
  registers_changed ();
}

/* Handle the exit of a single thread LP.  */

static void
exit_lwp (struct lwp_info *lp)
{
  struct thread_info *th = find_thread_ptid (lp->ptid);

  if (th)
    {
      if (print_thread_events)
        printf_unfiltered (_("[%s exited]\n"), target_pid_to_str (lp->ptid));

      delete_thread (lp->ptid);
    }

  delete_lwp (lp->ptid);
}

/* Wait for the LWP specified by LP, which we have just attached to.
   Returns a wait status for that LWP, to cache.  */

static int
linux_nat_post_attach_wait (ptid_t ptid, int first, int *signalled)
{
  pid_t new_pid, pid = ptid_get_lwp (ptid);
  int status;

  if (linux_proc_pid_is_stopped (pid))
    {
      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog,
                            "LNPAW: Attaching to a stopped process\n");

      /* The process is definitely stopped.  It is in a job control
         stop, unless the kernel predates the TASK_STOPPED /
         TASK_TRACED distinction, in which case it might be in a
         ptrace stop.  Make sure it is in a ptrace stop; from there we
         can kill it, signal it, et cetera.

         First make sure there is a pending SIGSTOP.  Since we are
         already attached, the process can not transition from stopped
         to running without a PTRACE_CONT; so we know this signal will
         go into the queue.  The SIGSTOP generated by PTRACE_ATTACH is
         probably already in the queue (unless this kernel is old
         enough to use TASK_STOPPED for ptrace stops); but since SIGSTOP
         is not an RT signal, it can only be queued once.  */
      kill_lwp (pid, SIGSTOP);

      /* Finally, resume the stopped process.  This will deliver the SIGSTOP
         (or a higher priority signal, just like normal PTRACE_ATTACH).  */
      ptrace (PTRACE_CONT, pid, 0, 0);
    }

  /* Make sure the initial process is stopped.  The user-level threads
     layer might want to poke around in the inferior, and that won't
     work if things haven't stabilized yet.  */
  new_pid = my_waitpid (pid, &status, __WALL);
  gdb_assert (pid == new_pid);

  if (!WIFSTOPPED (status))
    {
      /* The pid we tried to attach has apparently just exited.  */
      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog, "LNPAW: Failed to stop %d: %s",
                            pid, status_to_str (status));
      return status;
    }

  if (WSTOPSIG (status) != SIGSTOP)
    {
      *signalled = 1;
      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog,
                            "LNPAW: Received %s after attaching\n",
                            status_to_str (status));
    }

  return status;
}

static void
linux_nat_create_inferior (struct target_ops *ops,
                           char *exec_file, char *allargs, char **env,
                           int from_tty)
{
  struct cleanup *restore_personality
    = maybe_disable_address_space_randomization (disable_randomization);

  /* The fork_child mechanism is synchronous and calls target_wait, so
     we have to mask the async mode.  */

  /* Make sure we report all signals during startup.  */
  linux_nat_pass_signals (ops, 0, NULL);

  linux_ops->to_create_inferior (ops, exec_file, allargs, env, from_tty);

  do_cleanups (restore_personality);
}

/* Callback for linux_proc_attach_tgid_threads.  Attach to PTID if not
   already attached.  Returns true if a new LWP is found, false
   otherwise.  */

static int
attach_proc_task_lwp_callback (ptid_t ptid)
{
  struct lwp_info *lp;

  /* Ignore LWPs we're already attached to.  */
  lp = find_lwp_pid (ptid);
  if (lp == NULL)
    {
      int lwpid = ptid_get_lwp (ptid);

      if (ptrace (PTRACE_ATTACH, lwpid, 0, 0) < 0)
        {
          int err = errno;

          /* Be quiet if we simply raced with the thread exiting.
             EPERM is returned if the thread's task still exists, and
             is marked as exited or zombie, as well as other
             conditions, so in that case, confirm the status in
             /proc/PID/status.  */
          if (err == ESRCH
              || (err == EPERM && linux_proc_pid_is_gone (lwpid)))
            {
              if (debug_linux_nat)
                {
                  fprintf_unfiltered (gdb_stdlog,
                                      "Cannot attach to lwp %d: "
                                      "thread is gone (%d: %s)\n",
                                      lwpid, err, safe_strerror (err));
                }
            }
          else
            {
              warning (_("Cannot attach to lwp %d: %s"),
                       lwpid,
                       linux_ptrace_attach_fail_reason_string (ptid,
                                                               err));
            }
        }
      else
        {
          if (debug_linux_nat)
            fprintf_unfiltered (gdb_stdlog,
                                "PTRACE_ATTACH %s, 0, 0 (OK)\n",
                                target_pid_to_str (ptid));

          lp = add_lwp (ptid);

          /* The next time we wait for this LWP we'll see a SIGSTOP as
             PTRACE_ATTACH brings it to a halt.  */
          lp->signalled = 1;

          /* We need to wait for a stop before being able to make the
             next ptrace call on this LWP.  */
          lp->must_set_ptrace_flags = 1;
        }

      return 1;
    }
  return 0;
}

static void
linux_nat_attach (struct target_ops *ops, const char *args, int from_tty)
{
  struct lwp_info *lp;
  int status;
  ptid_t ptid;

  /* Make sure we report all signals during attach.  */
  linux_nat_pass_signals (ops, 0, NULL);

  TRY
    {
      linux_ops->to_attach (ops, args, from_tty);
    }
  CATCH (ex, RETURN_MASK_ERROR)
    {
      pid_t pid = parse_pid_to_attach (args);
      struct buffer buffer;
      char *message, *buffer_s;

      message = xstrdup (ex.message);
      make_cleanup (xfree, message);

      buffer_init (&buffer);
      linux_ptrace_attach_fail_reason (pid, &buffer);

      buffer_grow_str0 (&buffer, "");
      buffer_s = buffer_finish (&buffer);
      make_cleanup (xfree, buffer_s);

      if (*buffer_s != '\0')
        throw_error (ex.error, "warning: %s\n%s", buffer_s, message);
      else
        throw_error (ex.error, "%s", message);
    }
  END_CATCH

  /* The ptrace base target adds the main thread with (pid,0,0)
     format.  Decorate it with lwp info.  */
  ptid = ptid_build (ptid_get_pid (inferior_ptid),
                     ptid_get_pid (inferior_ptid),
                     0);
  thread_change_ptid (inferior_ptid, ptid);

  /* Add the initial process as the first LWP to the list.  */
  lp = add_initial_lwp (ptid);

  status = linux_nat_post_attach_wait (lp->ptid, 1, &lp->signalled);
  if (!WIFSTOPPED (status))
    {
      if (WIFEXITED (status))
        {
          int exit_code = WEXITSTATUS (status);

          target_terminal_ours ();
          target_mourn_inferior ();
          if (exit_code == 0)
            error (_("Unable to attach: program exited normally."));
          else
            error (_("Unable to attach: program exited with code %d."),
                   exit_code);
        }
      else if (WIFSIGNALED (status))
        {
          enum gdb_signal signo;

          target_terminal_ours ();
          target_mourn_inferior ();

          signo = gdb_signal_from_host (WTERMSIG (status));
          error (_("Unable to attach: program terminated with signal "
                   "%s, %s."),
                 gdb_signal_to_name (signo),
                 gdb_signal_to_string (signo));
        }

      internal_error (__FILE__, __LINE__,
                      _("unexpected status %d for PID %ld"),
                      status, (long) ptid_get_lwp (ptid));
    }

  lp->stopped = 1;

  /* Save the wait status to report later.  */
  lp->resumed = 1;
  if (debug_linux_nat)
    fprintf_unfiltered (gdb_stdlog,
                        "LNA: waitpid %ld, saving status %s\n",
                        (long) ptid_get_pid (lp->ptid), status_to_str (status));

  lp->status = status;

  /* We must attach to every LWP.  If /proc is mounted, use that to
     find them now.  The inferior may be using raw clone instead of
     using pthreads.  But even if it is using pthreads, thread_db
     walks structures in the inferior's address space to find the list
     of threads/LWPs, and those structures may well be corrupted.
     Note that once thread_db is loaded, we'll still use it to list
     threads and associate pthread info with each LWP.  */
  linux_proc_attach_tgid_threads (ptid_get_pid (lp->ptid),
                                  attach_proc_task_lwp_callback);

  if (target_can_async_p ())
    target_async (1);
}

/* Get pending status of LP.  */
static int
get_pending_status (struct lwp_info *lp, int *status)
{
  enum gdb_signal signo = GDB_SIGNAL_0;

  /* If we paused threads momentarily, we may have stored pending
     events in lp->status or lp->waitstatus (see stop_wait_callback),
     and GDB core hasn't seen any signal for those threads.
     Otherwise, the last signal reported to the core is found in the
     thread object's stop_signal.

     There's a corner case that isn't handled here at present.  Only
     if the thread stopped with a TARGET_WAITKIND_STOPPED does
     stop_signal make sense as a real signal to pass to the inferior.
     Some catchpoint related events, like
     TARGET_WAITKIND_(V)FORK|EXEC|SYSCALL, have their stop_signal set
     to GDB_SIGNAL_SIGTRAP when the catchpoint triggers.  But,
     those traps are debug API (ptrace in our case) related and
     induced; the inferior wouldn't see them if it wasn't being
     traced.  Hence, we should never pass them to the inferior, even
     when set to pass state.  Since this corner case isn't handled by
     infrun.c when proceeding with a signal, for consistency, neither
     do we handle it here (or elsewhere in the file we check for
     signal pass state).  Normally SIGTRAP isn't set to pass state, so
     this is really a corner case.  */

  if (lp->waitstatus.kind != TARGET_WAITKIND_IGNORE)
    signo = GDB_SIGNAL_0;  /* a pending ptrace event, not a real signal.  */
  else if (lp->status)
    signo = gdb_signal_from_host (WSTOPSIG (lp->status));
  else if (target_is_non_stop_p () && !is_executing (lp->ptid))
    {
      struct thread_info *tp = find_thread_ptid (lp->ptid);

      signo = tp->suspend.stop_signal;
    }
  else if (!target_is_non_stop_p ())
    {
      struct target_waitstatus last;
      ptid_t last_ptid;

      get_last_target_status (&last_ptid, &last);

      if (ptid_get_lwp (lp->ptid) == ptid_get_lwp (last_ptid))
        {
          struct thread_info *tp = find_thread_ptid (lp->ptid);

          signo = tp->suspend.stop_signal;
        }
    }

  *status = 0;

  if (signo == GDB_SIGNAL_0)
    {
      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog,
                            "GPT: lwp %s has no pending signal\n",
                            target_pid_to_str (lp->ptid));
    }
  else if (!signal_pass_state (signo))
    {
      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog,
                            "GPT: lwp %s had signal %s, "
                            "but it is in no pass state\n",
                            target_pid_to_str (lp->ptid),
                            gdb_signal_to_string (signo));
    }
  else
    {
      *status = W_STOPCODE (gdb_signal_to_host (signo));

      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog,
                            "GPT: lwp %s has pending signal %s\n",
                            target_pid_to_str (lp->ptid),
                            gdb_signal_to_string (signo));
    }

  return 0;
}

static int
detach_callback (struct lwp_info *lp, void *data)
{
  gdb_assert (lp->status == 0 || WIFSTOPPED (lp->status));

  if (debug_linux_nat && lp->status)
    fprintf_unfiltered (gdb_stdlog, "DC: Pending %s for %s on detach.\n",
                        strsignal (WSTOPSIG (lp->status)),
                        target_pid_to_str (lp->ptid));

  /* If there is a pending SIGSTOP, get rid of it.  */
  if (lp->signalled)
    {
      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog,
                            "DC: Sending SIGCONT to %s\n",
                            target_pid_to_str (lp->ptid));

      kill_lwp (ptid_get_lwp (lp->ptid), SIGCONT);
      lp->signalled = 0;
    }

  /* We don't actually detach from the LWP that has an id equal to the
     overall process id just yet.  */
  if (ptid_get_lwp (lp->ptid) != ptid_get_pid (lp->ptid))
    {
      int status = 0;

      /* Pass on any pending signal for this LWP.  */
      get_pending_status (lp, &status);

      if (linux_nat_prepare_to_resume != NULL)
        linux_nat_prepare_to_resume (lp);
      errno = 0;
      if (ptrace (PTRACE_DETACH, ptid_get_lwp (lp->ptid), 0,
                  WSTOPSIG (status)) < 0)
        error (_("Can't detach %s: %s"), target_pid_to_str (lp->ptid),
               safe_strerror (errno));

      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog,
                            "PTRACE_DETACH (%s, %s, 0) (OK)\n",
                            target_pid_to_str (lp->ptid),
                            strsignal (WSTOPSIG (status)));

      delete_lwp (lp->ptid);
    }

  return 0;
}

static void
linux_nat_detach (struct target_ops *ops, const char *args, int from_tty)
{
  int pid;
  int status;
  struct lwp_info *main_lwp;

  pid = ptid_get_pid (inferior_ptid);

  /* Don't unregister from the event loop, as there may be other
     inferiors running.  */

  /* Stop all threads before detaching.  ptrace requires that the
     thread is stopped to successfully detach.  */
  iterate_over_lwps (pid_to_ptid (pid), stop_callback, NULL);
  /* ... and wait until all of them have reported back that
     they're no longer running.  */
  iterate_over_lwps (pid_to_ptid (pid), stop_wait_callback, NULL);

  iterate_over_lwps (pid_to_ptid (pid), detach_callback, NULL);

  /* Only the initial process should be left right now.  */
  gdb_assert (num_lwps (ptid_get_pid (inferior_ptid)) == 1);

  main_lwp = find_lwp_pid (pid_to_ptid (pid));

  /* Pass on any pending signal for the last LWP.  */
  if ((args == NULL || *args == '\0')
      && get_pending_status (main_lwp, &status) != -1
      && WIFSTOPPED (status))
    {
      char *tem;

      /* Put the signal number in ARGS so that inf_ptrace_detach will
         pass it along with PTRACE_DETACH.  */
      tem = (char *) alloca (8);
      xsnprintf (tem, 8, "%d", (int) WSTOPSIG (status));
      args = tem;
      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog,
                            "LND: Sending signal %s to %s\n",
                            args,
                            target_pid_to_str (main_lwp->ptid));
    }

  if (linux_nat_prepare_to_resume != NULL)
    linux_nat_prepare_to_resume (main_lwp);
  delete_lwp (main_lwp->ptid);

  if (forks_exist_p ())
    {
      /* Multi-fork case.  The current inferior_ptid is being detached
         from, but there are other viable forks to debug.  Detach from
         the current fork, and context-switch to the first
         available.  */
      linux_fork_detach (args, from_tty);
    }
  else
    linux_ops->to_detach (ops, args, from_tty);
}

/* Resume execution of the inferior process.  If STEP is nonzero,
   single-step it.  If SIGNAL is nonzero, give it that signal.  */

static void
linux_resume_one_lwp_throw (struct lwp_info *lp, int step,
                            enum gdb_signal signo)
{
  lp->step = step;

  /* stop_pc doubles as the PC the LWP had when it was last resumed.
     We only presently need that if the LWP is stepped though (to
     handle the case of stepping a breakpoint instruction).  */
  if (step)
    {
      struct regcache *regcache = get_thread_regcache (lp->ptid);

      lp->stop_pc = regcache_read_pc (regcache);
    }
  else
    lp->stop_pc = 0;

  if (linux_nat_prepare_to_resume != NULL)
    linux_nat_prepare_to_resume (lp);
  linux_ops->to_resume (linux_ops, lp->ptid, step, signo);

  /* Successfully resumed.  Clear state that no longer makes sense,
     and mark the LWP as running.  Must not do this before resuming
     otherwise if that fails other code will be confused.  E.g., we'd
     later try to stop the LWP and hang forever waiting for a stop
     status.  Note that we must not throw after this is cleared,
     otherwise handle_zombie_lwp_error would get confused.  */
  lp->stopped = 0;
  lp->stop_reason = TARGET_STOPPED_BY_NO_REASON;
  registers_changed_ptid (lp->ptid);
}

/* Called when we try to resume a stopped LWP and that errors out.  If
   the LWP is no longer in ptrace-stopped state (meaning it's zombie,
   or about to become), discard the error, clear any pending status
   the LWP may have, and return true (we'll collect the exit status
   soon enough).  Otherwise, return false.  */

static int
check_ptrace_stopped_lwp_gone (struct lwp_info *lp)
{
  /* If we get an error after resuming the LWP successfully, we'd
     confuse !T state for the LWP being gone.  */
  gdb_assert (lp->stopped);

  /* We can't just check whether the LWP is in 'Z (Zombie)' state,
     because even if ptrace failed with ESRCH, the tracee may be "not
     yet fully dead", but already refusing ptrace requests.  In that
     case the tracee has 'R (Running)' state for a little bit
     (observed in Linux 3.18).  See also the note on ESRCH in the
     ptrace(2) man page.  Instead, check whether the LWP has any state
     other than ptrace-stopped.  */

  /* Don't assume anything if /proc/PID/status can't be read.  */
  if (linux_proc_pid_is_trace_stopped_nowarn (ptid_get_lwp (lp->ptid)) == 0)
    {
      lp->stop_reason = TARGET_STOPPED_BY_NO_REASON;
      lp->status = 0;
      lp->waitstatus.kind = TARGET_WAITKIND_IGNORE;
      return 1;
    }
  return 0;
}

/* Like linux_resume_one_lwp_throw, but no error is thrown if the LWP
   disappears while we try to resume it.  */

static void
linux_resume_one_lwp (struct lwp_info *lp, int step, enum gdb_signal signo)
{
  TRY
    {
      linux_resume_one_lwp_throw (lp, step, signo);
    }
  CATCH (ex, RETURN_MASK_ERROR)
    {
      if (!check_ptrace_stopped_lwp_gone (lp))
        throw_exception (ex);
    }
  END_CATCH
}

/* Resume LP.  */

static void
resume_lwp (struct lwp_info *lp, int step, enum gdb_signal signo)
{
  if (lp->stopped)
    {
      struct inferior *inf = find_inferior_ptid (lp->ptid);

      if (inf->vfork_child != NULL)
        {
          if (debug_linux_nat)
            fprintf_unfiltered (gdb_stdlog,
                                "RC: Not resuming %s (vfork parent)\n",
                                target_pid_to_str (lp->ptid));
        }
      else if (!lwp_status_pending_p (lp))
        {
          if (debug_linux_nat)
            fprintf_unfiltered (gdb_stdlog,
                                "RC: Resuming sibling %s, %s, %s\n",
                                target_pid_to_str (lp->ptid),
                                (signo != GDB_SIGNAL_0
                                 ? strsignal (gdb_signal_to_host (signo))
                                 : "0"),
                                step ? "step" : "resume");

          linux_resume_one_lwp (lp, step, signo);
        }
      else
        {
          if (debug_linux_nat)
            fprintf_unfiltered (gdb_stdlog,
                                "RC: Not resuming sibling %s (has pending)\n",
                                target_pid_to_str (lp->ptid));
        }
    }
  else
    {
      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog,
                            "RC: Not resuming sibling %s (not stopped)\n",
                            target_pid_to_str (lp->ptid));
    }
}

/* Callback for iterate_over_lwps.  If LWP is EXCEPT, do nothing.
   Resume LWP with the last stop signal, if it is in pass state.  */

static int
linux_nat_resume_callback (struct lwp_info *lp, void *except)
{
  enum gdb_signal signo = GDB_SIGNAL_0;

  if (lp == except)
    return 0;

  if (lp->stopped)
    {
      struct thread_info *thread;

      thread = find_thread_ptid (lp->ptid);
      if (thread != NULL)
        {
          signo = thread->suspend.stop_signal;
          thread->suspend.stop_signal = GDB_SIGNAL_0;
        }
    }

  resume_lwp (lp, 0, signo);
  return 0;
}

static int
resume_clear_callback (struct lwp_info *lp, void *data)
{
  lp->resumed = 0;
  lp->last_resume_kind = resume_stop;
  return 0;
}

static int
resume_set_callback (struct lwp_info *lp, void *data)
{
  lp->resumed = 1;
  lp->last_resume_kind = resume_continue;
  return 0;
}

static void
linux_nat_resume (struct target_ops *ops,
                  ptid_t ptid, int step, enum gdb_signal signo)
{
  struct lwp_info *lp;
  int resume_many;

  if (debug_linux_nat)
    fprintf_unfiltered (gdb_stdlog,
                        "LLR: Preparing to %s %s, %s, inferior_ptid %s\n",
                        step ? "step" : "resume",
                        target_pid_to_str (ptid),
                        (signo != GDB_SIGNAL_0
                         ? strsignal (gdb_signal_to_host (signo)) : "0"),
                        target_pid_to_str (inferior_ptid));

  /* A specific PTID means `step only this process id'.  */
  resume_many = (ptid_equal (minus_one_ptid, ptid)
                 || ptid_is_pid (ptid));

  /* Mark the lwps we're resuming as resumed.  */
  iterate_over_lwps (ptid, resume_set_callback, NULL);

  /* See if it's the current inferior that should be handled
     specially.  */
  if (resume_many)
    lp = find_lwp_pid (inferior_ptid);
  else
    lp = find_lwp_pid (ptid);
  gdb_assert (lp != NULL);

  /* Remember if we're stepping.  */
  lp->last_resume_kind = step ? resume_step : resume_continue;

  /* If we have a pending wait status for this thread, there is no
     point in resuming the process.  But first make sure that
     linux_nat_wait won't preemptively handle the event - we
     should never take this short-circuit if we are going to
     leave LP running, since we have skipped resuming all the
     other threads.  This bit of code needs to be synchronized
     with linux_nat_wait.  */

  if (lp->status && WIFSTOPPED (lp->status))
    {
      if (!lp->step
          && WSTOPSIG (lp->status)
          && sigismember (&pass_mask, WSTOPSIG (lp->status)))
        {
          if (debug_linux_nat)
            fprintf_unfiltered (gdb_stdlog,
                                "LLR: Not short circuiting for ignored "
                                "status 0x%x\n", lp->status);

          /* FIXME: What should we do if we are supposed to continue
             this thread with a signal?  */
          gdb_assert (signo == GDB_SIGNAL_0);
          signo = gdb_signal_from_host (WSTOPSIG (lp->status));
          lp->status = 0;
        }
    }

  if (lwp_status_pending_p (lp))
    {
      /* FIXME: What should we do if we are supposed to continue
         this thread with a signal?  */
      gdb_assert (signo == GDB_SIGNAL_0);

      if (debug_linux_nat)
        fprintf_unfiltered (gdb_stdlog,
                            "LLR: Short circuiting for status 0x%x\n",
                            lp->status);

      if (target_can_async_p ())
        {
          target_async (1);
          /* Tell the event loop we have something to process.  */
          async_file_mark ();
        }
      return;
    }

  if (resume_many)
    iterate_over_lwps (ptid, linux_nat_resume_callback, lp);

  if (debug_linux_nat)
    fprintf_unfiltered (gdb_stdlog,
                        "LLR: %s %s, %s (resume event thread)\n",
                        step ? "PTRACE_SINGLESTEP" : "PTRACE_CONT",
                        target_pid_to_str (lp->ptid),
                        (signo != GDB_SIGNAL_0
                         ? strsignal (gdb_signal_to_host (signo)) : "0"));

  linux_resume_one_lwp (lp, step, signo);

  if (target_can_async_p ())
    target_async (1);
}

/* Send a signal to an LWP.  */

static int
kill_lwp (int lwpid, int signo)
{
  int ret;

  errno = 0;
  ret = syscall (__NR_tkill, lwpid, signo);
  if (errno == ENOSYS)
    {
      /* If tkill fails, then we are not using nptl threads, a
         configuration we no longer support.  */
      perror_with_name (("tkill"));
    }
  return ret;
}
1693
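/* Illustrative standalone sketch (not part of linux-nat.c): the point
   of using the tkill syscall above is that kill(2) addresses a whole
   thread group, while tkill targets one specific kernel thread (LWP),
   which is what we need when stopping individual threads.  Extract
   into its own file to build; the demo_ names are made up for
   illustration.  */
#if 0
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

static int
demo_kill_lwp (pid_t lwpid, int signo)
{
  errno = 0;
  return syscall (SYS_tkill, lwpid, signo);
}

int
main (void)
{
  /* Signal our own thread; a debugger would pass the LWP id of a
     traced thread instead.  Signal 0 only checks that the LWP exists.  */
  pid_t tid = syscall (SYS_gettid);

  if (demo_kill_lwp (tid, 0) != 0)
    perror ("tkill");
  else
    printf ("tkill (%ld, 0) succeeded\n", (long) tid);
  return 0;
}
#endif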
ca2163eb
PA
1694/* Handle a GNU/Linux syscall trap wait response. If we see a syscall
1695 event, check if the core is interested in it: if not, ignore the
1696 event, and keep waiting; otherwise, we need to toggle the LWP's
1697 syscall entry/exit status, since the ptrace event itself doesn't
1698 indicate it, and report the trap to higher layers. */
1699
1700static int
1701linux_handle_syscall_trap (struct lwp_info *lp, int stopping)
1702{
1703 struct target_waitstatus *ourstatus = &lp->waitstatus;
1704 struct gdbarch *gdbarch = target_thread_architecture (lp->ptid);
1705 int syscall_number = (int) gdbarch_get_syscall_number (gdbarch, lp->ptid);
1706
1707 if (stopping)
1708 {
1709 /* If we're stopping threads, there's a SIGSTOP pending, which
1710 makes it so that the LWP reports an immediate syscall return,
1711 followed by the SIGSTOP. Skip seeing that "return" using
1712 PTRACE_CONT directly, and let stop_wait_callback collect the
1713 SIGSTOP. Later when the thread is resumed, a new syscall
1714 entry event will be reported. If we didn't do this (and returned 0), we'd
1715 leave a syscall entry pending, and our caller, by using
1716 PTRACE_CONT to collect the SIGSTOP, skips the syscall return
1717 itself. Later, when the user re-resumes this LWP, we'd see
1718 another syscall entry event and we'd mistake it for a return.
1719
1720 If stop_wait_callback didn't force the SIGSTOP out of the LWP
1721 (leaving immediately with LWP->signalled set, without issuing
1722 a PTRACE_CONT), it would still be problematic to leave this
1723 syscall enter pending, as later when the thread is resumed,
1724 it would then see the same syscall exit mentioned above,
1725 followed by the delayed SIGSTOP, while the syscall didn't
1726 actually get to execute. It seems it would be even more
1727 confusing to the user. */
1728
1729 if (debug_linux_nat)
1730 fprintf_unfiltered (gdb_stdlog,
1731 "LHST: ignoring syscall %d "
1732 "for LWP %ld (stopping threads), "
1733 "resuming with PTRACE_CONT for SIGSTOP\n",
1734 syscall_number,
dfd4cc63 1735 ptid_get_lwp (lp->ptid));
ca2163eb
PA
1736
1737 lp->syscall_state = TARGET_WAITKIND_IGNORE;
dfd4cc63 1738 ptrace (PTRACE_CONT, ptid_get_lwp (lp->ptid), 0, 0);
8817a6f2 1739 lp->stopped = 0;
ca2163eb
PA
1740 return 1;
1741 }
1742
bfd09d20
JS
1743 /* Always update the entry/return state, even if this particular
1744 syscall isn't interesting to the core now. In async mode,
1745 the user could install a new catchpoint for this syscall
1746 between syscall enter/return, and we'll need to know to
1747 report a syscall return if that happens. */
1748 lp->syscall_state = (lp->syscall_state == TARGET_WAITKIND_SYSCALL_ENTRY
1749 ? TARGET_WAITKIND_SYSCALL_RETURN
1750 : TARGET_WAITKIND_SYSCALL_ENTRY);
1751
ca2163eb
PA
1752 if (catch_syscall_enabled ())
1753 {
ca2163eb
PA
1754 if (catching_syscall_number (syscall_number))
1755 {
1756 /* Alright, an event to report. */
1757 ourstatus->kind = lp->syscall_state;
1758 ourstatus->value.syscall_number = syscall_number;
1759
1760 if (debug_linux_nat)
1761 fprintf_unfiltered (gdb_stdlog,
1762 "LHST: stopping for %s of syscall %d"
1763 " for LWP %ld\n",
3e43a32a
MS
1764 lp->syscall_state
1765 == TARGET_WAITKIND_SYSCALL_ENTRY
ca2163eb
PA
1766 ? "entry" : "return",
1767 syscall_number,
dfd4cc63 1768 ptid_get_lwp (lp->ptid));
ca2163eb
PA
1769 return 0;
1770 }
1771
1772 if (debug_linux_nat)
1773 fprintf_unfiltered (gdb_stdlog,
1774 "LHST: ignoring %s of syscall %d "
1775 "for LWP %ld\n",
1776 lp->syscall_state == TARGET_WAITKIND_SYSCALL_ENTRY
1777 ? "entry" : "return",
1778 syscall_number,
dfd4cc63 1779 ptid_get_lwp (lp->ptid));
ca2163eb
PA
1780 }
1781 else
1782 {
1783 /* If we had been syscall tracing, and hence used PT_SYSCALL
1784 before on this LWP, it could happen that the user removes all
1785 syscall catchpoints before we get to process this event.
1786 There are two noteworthy issues here:
1787
1788 - When stopped at a syscall entry event, resuming with
1789 PT_STEP still resumes executing the syscall and reports a
1790 syscall return.
1791
1792 - Only PT_SYSCALL catches syscall enters. If we last
1793 single-stepped this thread, then this event can't be a
1794 syscall enter; having just single-stepped it, the only
1795 syscall event we can see here is a syscall exit.
1796
1797 The points above mean that the next resume, be it PT_STEP or
1798 PT_CONTINUE, can not trigger a syscall trace event. */
1799 if (debug_linux_nat)
1800 fprintf_unfiltered (gdb_stdlog,
3e43a32a
MS
1801 "LHST: caught syscall event "
1802 "with no syscall catchpoints."
ca2163eb
PA
1803 " %d for LWP %ld, ignoring\n",
1804 syscall_number,
dfd4cc63 1805 ptid_get_lwp (lp->ptid));
ca2163eb
PA
1806 lp->syscall_state = TARGET_WAITKIND_IGNORE;
1807 }
1808
1809 /* The core isn't interested in this event. For efficiency, avoid
1810 stopping all threads only to have the core resume them all again.
1811 Since we're not stopping threads, if we're still syscall tracing
1812 and not stepping, we can't use PTRACE_CONT here, as we'd miss any
1813 subsequent syscall. Simply resume using the inf-ptrace layer,
1814 which knows when to use PT_SYSCALL or PT_CONTINUE. */
1815
8a99810d 1816 linux_resume_one_lwp (lp, lp->step, GDB_SIGNAL_0);
ca2163eb
PA
1817 return 1;
1818}
1819
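/* Illustrative standalone sketch (not part of linux-nat.c): why the
   code above must track syscall entry/exit itself.  With
   PTRACE_SYSCALL the kernel stops the tracee at both syscall entry and
   exit but does not say which one a given stop is, so a minimal tracer
   keeps its own toggle, just like lp->syscall_state.  Signal-delivery
   stops are ignored here for brevity (they would perturb the toggle,
   which is exactly the kind of case the code above handles).  Extract
   into its own file to build.  */
#if 0
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int
main (void)
{
  pid_t child = fork ();

  if (child == 0)
    {
      ptrace (PTRACE_TRACEME, 0, 0, 0);
      execlp ("true", "true", (char *) 0);
      _exit (127);
    }

  int status;
  int in_syscall = 0;		/* Our own entry/exit toggle.  */

  waitpid (child, &status, 0);	/* Initial exec stop.  */
  for (;;)
    {
      ptrace (PTRACE_SYSCALL, child, 0, 0);
      waitpid (child, &status, 0);
      if (WIFEXITED (status))
	break;
      in_syscall = !in_syscall;
      printf ("syscall %s\n", in_syscall ? "entry" : "exit");
    }
  return 0;
}
#endif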
3d799a95
DJ
1820/* Handle a GNU/Linux extended wait response. If we see a clone
1821 event, we need to add the new LWP to our list (and not report the
1822 trap to higher layers). This function returns non-zero if the
1823 event should be ignored and we should wait again. If STOPPING is
1824 true, the new LWP remains stopped, otherwise it is continued. */
d6b0e80f
AC
1825
1826static int
4dd63d48 1827linux_handle_extended_wait (struct lwp_info *lp, int status)
d6b0e80f 1828{
dfd4cc63 1829 int pid = ptid_get_lwp (lp->ptid);
3d799a95 1830 struct target_waitstatus *ourstatus = &lp->waitstatus;
89a5711c 1831 int event = linux_ptrace_get_extended_event (status);
d6b0e80f 1832
bfd09d20
JS
1833 /* All extended events we currently use are mid-syscall. Only
1834 PTRACE_EVENT_STOP is delivered more like a signal-stop, but
1835 you have to be using PTRACE_SEIZE to get that. */
1836 lp->syscall_state = TARGET_WAITKIND_SYSCALL_ENTRY;
1837
3d799a95
DJ
1838 if (event == PTRACE_EVENT_FORK || event == PTRACE_EVENT_VFORK
1839 || event == PTRACE_EVENT_CLONE)
d6b0e80f 1840 {
3d799a95
DJ
1841 unsigned long new_pid;
1842 int ret;
1843
1844 ptrace (PTRACE_GETEVENTMSG, pid, 0, &new_pid);
6fc19103 1845
3d799a95
DJ
1846 /* If we haven't already seen the new PID stop, wait for it now. */
1847 if (! pull_pid_from_list (&stopped_pids, new_pid, &status))
1848 {
1849 /* The new child has a pending SIGSTOP. We can't affect it until it
1850 hits the SIGSTOP, but we're already attached. */
4a6ed09b 1851 ret = my_waitpid (new_pid, &status, __WALL);
3d799a95
DJ
1852 if (ret == -1)
1853 perror_with_name (_("waiting for new child"));
1854 else if (ret != new_pid)
1855 internal_error (__FILE__, __LINE__,
1856 _("wait returned unexpected PID %d"), ret);
1857 else if (!WIFSTOPPED (status))
1858 internal_error (__FILE__, __LINE__,
1859 _("wait returned unexpected status 0x%x"), status);
1860 }
1861
3a3e9ee3 1862 ourstatus->value.related_pid = ptid_build (new_pid, new_pid, 0);
3d799a95 1863
26cb8b7c
PA
1864 if (event == PTRACE_EVENT_FORK || event == PTRACE_EVENT_VFORK)
1865 {
1866 /* The arch-specific native code may need to know about new
1867 forks even if those end up never mapped to an
1868 inferior. */
1869 if (linux_nat_new_fork != NULL)
1870 linux_nat_new_fork (lp, new_pid);
1871 }
1872
2277426b 1873 if (event == PTRACE_EVENT_FORK
dfd4cc63 1874 && linux_fork_checkpointing_p (ptid_get_pid (lp->ptid)))
2277426b 1875 {
2277426b
PA
1876 /* Handle checkpointing by linux-fork.c here as a special
1877 case. We don't want the follow-fork-mode or 'catch fork'
1878 to interfere with this. */
1879
1880 /* This won't actually modify the breakpoint list, but will
1881 physically remove the breakpoints from the child. */
d80ee84f 1882 detach_breakpoints (ptid_build (new_pid, new_pid, 0));
2277426b
PA
1883
1884 /* Retain child fork in ptrace (stopped) state. */
14571dad
MS
1885 if (!find_fork_pid (new_pid))
1886 add_fork (new_pid);
2277426b
PA
1887
1888 /* Report as spurious, so that infrun doesn't want to follow
1889 this fork. We're actually doing an infcall in
1890 linux-fork.c. */
1891 ourstatus->kind = TARGET_WAITKIND_SPURIOUS;
2277426b
PA
1892
1893 /* Report the stop to the core. */
1894 return 0;
1895 }
1896
3d799a95
DJ
1897 if (event == PTRACE_EVENT_FORK)
1898 ourstatus->kind = TARGET_WAITKIND_FORKED;
1899 else if (event == PTRACE_EVENT_VFORK)
1900 ourstatus->kind = TARGET_WAITKIND_VFORKED;
4dd63d48 1901 else if (event == PTRACE_EVENT_CLONE)
3d799a95 1902 {
78768c4a
JK
1903 struct lwp_info *new_lp;
1904
3d799a95 1905 ourstatus->kind = TARGET_WAITKIND_IGNORE;
78768c4a 1906
3c4d7e12
PA
1907 if (debug_linux_nat)
1908 fprintf_unfiltered (gdb_stdlog,
1909 "LHEW: Got clone event "
1910 "from LWP %d, new child is LWP %ld\n",
1911 pid, new_pid);
1912
dfd4cc63 1913 new_lp = add_lwp (ptid_build (ptid_get_pid (lp->ptid), new_pid, 0));
4c28f408 1914 new_lp->stopped = 1;
4dd63d48 1915 new_lp->resumed = 1;
d6b0e80f 1916
2db9a427
PA
1917 /* If the thread_db layer is active, let it record the user
1918 level thread id and status, and add the thread to GDB's
1919 list. */
1920 if (!thread_db_notice_clone (lp->ptid, new_lp->ptid))
3d799a95 1921 {
2db9a427
PA
1922 /* The process is not using thread_db. Add the LWP to
1923 GDB's list. */
1924 target_post_attach (ptid_get_lwp (new_lp->ptid));
1925 add_thread (new_lp->ptid);
1926 }
4c28f408 1927
2ee52aa4 1928 /* Even if we're stopping the thread for some reason
4dd63d48
PA
1929 internal to this module, from the perspective of infrun
1930 and the user/frontend, this new thread is running until
1931 it next reports a stop. */
2ee52aa4 1932 set_running (new_lp->ptid, 1);
4dd63d48 1933 set_executing (new_lp->ptid, 1);
4c28f408 1934
4dd63d48 1935 if (WSTOPSIG (status) != SIGSTOP)
79395f92 1936 {
4dd63d48
PA
1937 /* This can happen if someone starts sending signals with a
1938 lower number than SIGSTOP (e.g. SIGUSR1) to the new thread
1939 before it gets a chance to run. This is an unlikely case,
1940 and harder to handle for fork / vfork than for clone, so we
1941 do not try there - but we handle it for clone events
1942 here. */
1943
1944 new_lp->signalled = 1;
1945
79395f92
PA
1946 /* We created NEW_LP so it cannot yet contain STATUS. */
1947 gdb_assert (new_lp->status == 0);
1948
1949 /* Save the wait status to report later. */
1950 if (debug_linux_nat)
1951 fprintf_unfiltered (gdb_stdlog,
1952 "LHEW: waitpid of new LWP %ld, "
1953 "saving status %s\n",
dfd4cc63 1954 (long) ptid_get_lwp (new_lp->ptid),
79395f92
PA
1955 status_to_str (status));
1956 new_lp->status = status;
1957 }
aa01bd36
PA
1958 else if (report_thread_events)
1959 {
1960 new_lp->waitstatus.kind = TARGET_WAITKIND_THREAD_CREATED;
1961 new_lp->status = status;
1962 }
79395f92 1963
3d799a95
DJ
1964 return 1;
1965 }
1966
1967 return 0;
d6b0e80f
AC
1968 }
1969
3d799a95
DJ
1970 if (event == PTRACE_EVENT_EXEC)
1971 {
a75724bc
PA
1972 if (debug_linux_nat)
1973 fprintf_unfiltered (gdb_stdlog,
1974 "LHEW: Got exec event from LWP %ld\n",
dfd4cc63 1975 ptid_get_lwp (lp->ptid));
a75724bc 1976
3d799a95
DJ
1977 ourstatus->kind = TARGET_WAITKIND_EXECD;
1978 ourstatus->value.execd_pathname
8dd27370 1979 = xstrdup (linux_child_pid_to_exec_file (NULL, pid));
3d799a95 1980
8af756ef
PA
1981 /* The thread that execed must have been resumed, but, when a
1982 thread execs, it changes its tid to the tgid, and the old
1983 tgid thread might not have been resumed. */
1984 lp->resumed = 1;
6c95b8df
PA
1985 return 0;
1986 }
1987
1988 if (event == PTRACE_EVENT_VFORK_DONE)
1989 {
1990 if (current_inferior ()->waiting_for_vfork_done)
3d799a95 1991 {
6c95b8df 1992 if (debug_linux_nat)
3e43a32a
MS
1993 fprintf_unfiltered (gdb_stdlog,
1994 "LHEW: Got expected PTRACE_EVENT_"
1995 "VFORK_DONE from LWP %ld: stopping\n",
dfd4cc63 1996 ptid_get_lwp (lp->ptid));
3d799a95 1997
6c95b8df
PA
1998 ourstatus->kind = TARGET_WAITKIND_VFORK_DONE;
1999 return 0;
3d799a95
DJ
2000 }
2001
6c95b8df 2002 if (debug_linux_nat)
3e43a32a
MS
2003 fprintf_unfiltered (gdb_stdlog,
2004 "LHEW: Got PTRACE_EVENT_VFORK_DONE "
20ba1ce6 2005 "from LWP %ld: ignoring\n",
dfd4cc63 2006 ptid_get_lwp (lp->ptid));
6c95b8df 2007 return 1;
3d799a95
DJ
2008 }
2009
2010 internal_error (__FILE__, __LINE__,
2011 _("unknown ptrace event %d"), event);
d6b0e80f
AC
2012}
2013
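/* Illustrative standalone sketch (not part of linux-nat.c): the
   extended-event mechanism the function above relies on.  With
   PTRACE_O_TRACEFORK (or TRACEVFORK/TRACECLONE) set, the tracee stops
   with SIGTRAP | (PTRACE_EVENT_FORK << 8) in the wait status and
   PTRACE_GETEVENTMSG yields the new child's id; the new child starts
   out traced and stopped too.  Fork is used here for brevity; the
   clone case above works the same way.  Extract into its own file to
   build.  */
#if 0
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int
main (void)
{
  pid_t child = fork ();

  if (child == 0)
    {
      ptrace (PTRACE_TRACEME, 0, 0, 0);
      raise (SIGSTOP);		/* Let the parent set options first.  */
      if (fork () == 0)
	_exit (0);		/* Grandchild.  */
      _exit (0);
    }

  int status;

  waitpid (child, &status, 0);	/* The SIGSTOP raised above.  */
  ptrace (PTRACE_SETOPTIONS, child, 0, (void *) (long) PTRACE_O_TRACEFORK);
  ptrace (PTRACE_CONT, child, 0, 0);
  waitpid (child, &status, 0);

  if (WIFSTOPPED (status)
      && status >> 8 == (SIGTRAP | (PTRACE_EVENT_FORK << 8)))
    {
      unsigned long new_pid;

      ptrace (PTRACE_GETEVENTMSG, child, 0, &new_pid);
      printf ("fork event: new child is LWP %lu\n", new_pid);

      /* The new child is stopped and traced by us; collect its stop
	 and detach so it can run to _exit.  */
      waitpid ((pid_t) new_pid, &status, 0);
      ptrace (PTRACE_DETACH, (pid_t) new_pid, 0, 0);
    }

  ptrace (PTRACE_CONT, child, 0, 0);
  waitpid (child, &status, 0);
  return 0;
}
#endif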
2014/* Wait for LP to stop. Returns the wait status, or 0 if the LWP has
2015 exited. */
2016
2017static int
2018wait_lwp (struct lwp_info *lp)
2019{
2020 pid_t pid;
432b4d03 2021 int status = 0;
d6b0e80f 2022 int thread_dead = 0;
432b4d03 2023 sigset_t prev_mask;
d6b0e80f
AC
2024
2025 gdb_assert (!lp->stopped);
2026 gdb_assert (lp->status == 0);
2027
432b4d03
JK
2028 /* Make sure SIGCHLD is blocked for sigsuspend avoiding a race below. */
2029 block_child_signals (&prev_mask);
2030
2031 for (;;)
d6b0e80f 2032 {
4a6ed09b 2033 pid = my_waitpid (ptid_get_lwp (lp->ptid), &status, __WALL | WNOHANG);
a9f4bb21
PA
2034 if (pid == -1 && errno == ECHILD)
2035 {
2036 /* The thread has previously exited. We need to delete it
4a6ed09b
PA
2037 now because if this was a non-leader thread execing, we
2038 won't get an exit event. See comments on exec events at
2039 the top of the file. */
a9f4bb21
PA
2040 thread_dead = 1;
2041 if (debug_linux_nat)
2042 fprintf_unfiltered (gdb_stdlog, "WL: %s vanished.\n",
2043 target_pid_to_str (lp->ptid));
2044 }
432b4d03
JK
2045 if (pid != 0)
2046 break;
2047
2048 /* Bugs 10970, 12702.
2049 Thread group leader may have exited in which case we'll lock up in
2050 waitpid if there are other threads, even if they are all zombies too.
2051 Basically, we're not supposed to use waitpid this way.
4a6ed09b
PA
2052 tkill(pid,0) cannot be used here as it gets ESRCH for both
2053 zombie and running processes.
432b4d03
JK
2054
2055 As a workaround, check if we're waiting for the thread group leader and
2056 if it's a zombie, and avoid calling waitpid if it is.
2057
2058 This is racy, what if the tgl becomes a zombie right after we check?
2059 Therefore always use WNOHANG with sigsuspend - it is equivalent to
5f572dec 2060 a blocking waitpid, but linux_proc_pid_is_zombie is safe this way. */
432b4d03 2061
dfd4cc63
LM
2062 if (ptid_get_pid (lp->ptid) == ptid_get_lwp (lp->ptid)
2063 && linux_proc_pid_is_zombie (ptid_get_lwp (lp->ptid)))
d6b0e80f 2064 {
d6b0e80f
AC
2065 thread_dead = 1;
2066 if (debug_linux_nat)
432b4d03
JK
2067 fprintf_unfiltered (gdb_stdlog,
2068 "WL: Thread group leader %s vanished.\n",
d6b0e80f 2069 target_pid_to_str (lp->ptid));
432b4d03 2070 break;
d6b0e80f 2071 }
432b4d03
JK
2072
2073 /* Wait for the next SIGCHLD and try again. This may let SIGCHLD
2074 handlers get invoked even though our caller intentionally blocked
2075 them with block_child_signals. This only matters for the loop in
2076 linux_nat_wait_1, and there, if we get called, my_waitpid is called
2077 again before that loop reaches sigsuspend, so we can safely let the
2078 handlers run here. */
2079
d36bf488
DE
2080 if (debug_linux_nat)
2081 fprintf_unfiltered (gdb_stdlog, "WL: about to sigsuspend\n");
432b4d03
JK
2082 sigsuspend (&suspend_mask);
2083 }
2084
2085 restore_child_signals_mask (&prev_mask);
2086
d6b0e80f
AC
2087 if (!thread_dead)
2088 {
dfd4cc63 2089 gdb_assert (pid == ptid_get_lwp (lp->ptid));
d6b0e80f
AC
2090
2091 if (debug_linux_nat)
2092 {
2093 fprintf_unfiltered (gdb_stdlog,
2094 "WL: waitpid %s received %s\n",
2095 target_pid_to_str (lp->ptid),
2096 status_to_str (status));
2097 }
d6b0e80f 2098
a9f4bb21
PA
2099 /* Check if the thread has exited. */
2100 if (WIFEXITED (status) || WIFSIGNALED (status))
2101 {
aa01bd36
PA
2102 if (report_thread_events
2103 || ptid_get_pid (lp->ptid) == ptid_get_lwp (lp->ptid))
69dde7dc
PA
2104 {
2105 if (debug_linux_nat)
aa01bd36 2106 fprintf_unfiltered (gdb_stdlog, "WL: LWP %d exited.\n",
69dde7dc
PA
2107 ptid_get_pid (lp->ptid));
2108
aa01bd36 2109 /* If this is the leader exiting, it means the whole
69dde7dc
PA
2110 process is gone. Store the status to report to the
2111 core. Store it in lp->waitstatus, because lp->status
2112 would be ambiguous (W_EXITCODE(0,0) == 0). */
2113 store_waitstatus (&lp->waitstatus, status);
2114 return 0;
2115 }
2116
a9f4bb21
PA
2117 thread_dead = 1;
2118 if (debug_linux_nat)
2119 fprintf_unfiltered (gdb_stdlog, "WL: %s exited.\n",
2120 target_pid_to_str (lp->ptid));
2121 }
d6b0e80f
AC
2122 }
2123
2124 if (thread_dead)
2125 {
e26af52f 2126 exit_lwp (lp);
d6b0e80f
AC
2127 return 0;
2128 }
2129
2130 gdb_assert (WIFSTOPPED (status));
8817a6f2 2131 lp->stopped = 1;
d6b0e80f 2132
8784d563
PA
2133 if (lp->must_set_ptrace_flags)
2134 {
2135 struct inferior *inf = find_inferior_pid (ptid_get_pid (lp->ptid));
de0d863e 2136 int options = linux_nat_ptrace_options (inf->attach_flag);
8784d563 2137
de0d863e 2138 linux_enable_event_reporting (ptid_get_lwp (lp->ptid), options);
8784d563
PA
2139 lp->must_set_ptrace_flags = 0;
2140 }
2141
ca2163eb
PA
2142 /* Handle GNU/Linux's syscall SIGTRAPs. */
2143 if (WIFSTOPPED (status) && WSTOPSIG (status) == SYSCALL_SIGTRAP)
2144 {
2145 /* No longer need the sysgood bit. The ptrace event ends up
2146 recorded in lp->waitstatus if we care for it. We can carry
2147 on handling the event like a regular SIGTRAP from here
2148 on. */
2149 status = W_STOPCODE (SIGTRAP);
2150 if (linux_handle_syscall_trap (lp, 1))
2151 return wait_lwp (lp);
2152 }
bfd09d20
JS
2153 else
2154 {
2155 /* Almost all other ptrace-stops are known to be outside of system
2156 calls, with further exceptions in linux_handle_extended_wait. */
2157 lp->syscall_state = TARGET_WAITKIND_IGNORE;
2158 }
ca2163eb 2159
d6b0e80f 2160 /* Handle GNU/Linux's extended waitstatus for trace events. */
89a5711c
DB
2161 if (WIFSTOPPED (status) && WSTOPSIG (status) == SIGTRAP
2162 && linux_is_extended_waitstatus (status))
d6b0e80f
AC
2163 {
2164 if (debug_linux_nat)
2165 fprintf_unfiltered (gdb_stdlog,
2166 "WL: Handling extended status 0x%06x\n",
2167 status);
4dd63d48 2168 linux_handle_extended_wait (lp, status);
20ba1ce6 2169 return 0;
d6b0e80f
AC
2170 }
2171
2172 return status;
2173}
2174
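/* Illustrative standalone sketch (not part of linux-nat.c): one way a
   linux_proc_pid_is_zombie-style check can work, by looking at the
   "State:" line of /proc/PID/status.  This only sketches the idea used
   by the zombie-leader workaround above; the real helper lives in
   nat/linux-procfs.c and may differ in details, and the demo_ name is
   made up.  */
#if 0
#include <stdio.h>
#include <string.h>

static int
demo_pid_is_zombie (long pid)
{
  char path[64], line[256];
  FILE *f;
  int zombie = 0;

  snprintf (path, sizeof (path), "/proc/%ld/status", pid);
  f = fopen (path, "r");
  if (f == NULL)
    return 0;

  while (fgets (line, sizeof (line), f) != NULL)
    if (strncmp (line, "State:", 6) == 0)
      {
	/* The line looks like "State:\tZ (zombie)" for a zombie.  */
	zombie = (strstr (line, "Z (zombie)") != NULL);
	break;
      }

  fclose (f);
  return zombie;
}
#endif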
2175/* Send a SIGSTOP to LP. */
2176
2177static int
2178stop_callback (struct lwp_info *lp, void *data)
2179{
2180 if (!lp->stopped && !lp->signalled)
2181 {
2182 int ret;
2183
2184 if (debug_linux_nat)
2185 {
2186 fprintf_unfiltered (gdb_stdlog,
2187 "SC: kill %s **<SIGSTOP>**\n",
2188 target_pid_to_str (lp->ptid));
2189 }
2190 errno = 0;
dfd4cc63 2191 ret = kill_lwp (ptid_get_lwp (lp->ptid), SIGSTOP);
d6b0e80f
AC
2192 if (debug_linux_nat)
2193 {
2194 fprintf_unfiltered (gdb_stdlog,
2195 "SC: lwp kill %d %s\n",
2196 ret,
2197 errno ? safe_strerror (errno) : "ERRNO-OK");
2198 }
2199
2200 lp->signalled = 1;
2201 gdb_assert (lp->status == 0);
2202 }
2203
2204 return 0;
2205}
2206
7b50312a
PA
2207/* Request a stop on LWP. */
2208
2209void
2210linux_stop_lwp (struct lwp_info *lwp)
2211{
2212 stop_callback (lwp, NULL);
2213}
2214
2db9a427
PA
2215/* See linux-nat.h */
2216
2217void
2218linux_stop_and_wait_all_lwps (void)
2219{
2220 /* Stop all LWP's ... */
2221 iterate_over_lwps (minus_one_ptid, stop_callback, NULL);
2222
2223 /* ... and wait until all of them have reported back that
2224 they're no longer running. */
2225 iterate_over_lwps (minus_one_ptid, stop_wait_callback, NULL);
2226}
2227
2228/* See linux-nat.h */
2229
2230void
2231linux_unstop_all_lwps (void)
2232{
2233 iterate_over_lwps (minus_one_ptid,
2234 resume_stopped_resumed_lwps, &minus_one_ptid);
2235}
2236
57380f4e 2237/* Return non-zero if LWP PID has a pending SIGINT. */
d6b0e80f
AC
2238
2239static int
57380f4e
DJ
2240linux_nat_has_pending_sigint (int pid)
2241{
2242 sigset_t pending, blocked, ignored;
57380f4e
DJ
2243
2244 linux_proc_pending_signals (pid, &pending, &blocked, &ignored);
2245
2246 if (sigismember (&pending, SIGINT)
2247 && !sigismember (&ignored, SIGINT))
2248 return 1;
2249
2250 return 0;
2251}
2252
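/* Illustrative standalone sketch (not part of linux-nat.c): how a
   pending-signal query in the spirit of linux_proc_pending_signals can
   be built from the SigPnd/ShdPnd hex masks in /proc/PID/status (the
   kernel prints them as lowercase %016lx, bit N meaning signal N+1).
   The demo_ names and the choice to fold ShdPnd into the result are
   assumptions for illustration; the real helper lives in
   nat/linux-procfs.c.  */
#if 0
#include <signal.h>
#include <stdio.h>
#include <string.h>

static void
demo_add_hex_mask_to_sigset (const char *hex, sigset_t *set)
{
  int len = strlen (hex);
  int signum = 0;

  /* Walk the mask right to left; each hex digit covers four signals.  */
  while (len-- > 0)
    {
      int digit;

      if (hex[len] >= '0' && hex[len] <= '9')
	digit = hex[len] - '0';
      else
	digit = hex[len] - 'a' + 10;

      for (int bit = 0; bit < 4; bit++)
	if (digit & (1 << bit))
	  sigaddset (set, signum + bit + 1);
      signum += 4;
    }
}

static void
demo_pending_signals (long pid, sigset_t *pending)
{
  char path[64], line[256];
  FILE *f;

  sigemptyset (pending);
  snprintf (path, sizeof (path), "/proc/%ld/status", pid);
  f = fopen (path, "r");
  if (f == NULL)
    return;

  while (fgets (line, sizeof (line), f) != NULL)
    if (strncmp (line, "SigPnd:\t", 8) == 0
	|| strncmp (line, "ShdPnd:\t", 8) == 0)
      {
	line[strcspn (line, "\n")] = '\0';
	demo_add_hex_mask_to_sigset (line + 8, pending);
      }

  fclose (f);
}
#endif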
2253/* Set a flag in LP indicating that we should ignore its next SIGINT. */
2254
2255static int
2256set_ignore_sigint (struct lwp_info *lp, void *data)
d6b0e80f 2257{
57380f4e
DJ
2258 /* If a thread has a pending SIGINT, consume it; otherwise, set a
2259 flag to consume the next one. */
2260 if (lp->stopped && lp->status != 0 && WIFSTOPPED (lp->status)
2261 && WSTOPSIG (lp->status) == SIGINT)
2262 lp->status = 0;
2263 else
2264 lp->ignore_sigint = 1;
2265
2266 return 0;
2267}
2268
2269/* If LP does not have a SIGINT pending, then clear the ignore_sigint flag.
2270 This function is called after we know the LWP has stopped; if the LWP
2271 stopped before the expected SIGINT was delivered, then it will never have
2272 arrived. Also, if the signal was delivered to a shared queue and consumed
2273 by a different thread, it will never be delivered to this LWP. */
d6b0e80f 2274
57380f4e
DJ
2275static void
2276maybe_clear_ignore_sigint (struct lwp_info *lp)
2277{
2278 if (!lp->ignore_sigint)
2279 return;
2280
dfd4cc63 2281 if (!linux_nat_has_pending_sigint (ptid_get_lwp (lp->ptid)))
57380f4e
DJ
2282 {
2283 if (debug_linux_nat)
2284 fprintf_unfiltered (gdb_stdlog,
2285 "MCIS: Clearing bogus flag for %s\n",
2286 target_pid_to_str (lp->ptid));
2287 lp->ignore_sigint = 0;
2288 }
2289}
2290
ebec9a0f
PA
2291/* Fetch the possible triggered data watchpoint info and store it in
2292 LP.
2293
2294 On some archs, like x86, that use debug registers to set
2295 watchpoints, it's possible that the way to know which watched
2296 address trapped, is to check the register that is used to select
2297 which address to watch. Problem is, between setting the watchpoint
2298 and reading back which data address trapped, the user may change
2299 the set of watchpoints, and, as a consequence, GDB changes the
2300 debug registers in the inferior. To avoid reading back a stale
2301 stopped-data-address when that happens, we cache in LP the fact
2302 that a watchpoint trapped, and the corresponding data address, as
2303 soon as we see LP stop with a SIGTRAP. If GDB changes the debug
2304 registers meanwhile, we have the cached data we can rely on. */
2305
9c02b525
PA
2306static int
2307check_stopped_by_watchpoint (struct lwp_info *lp)
ebec9a0f
PA
2308{
2309 struct cleanup *old_chain;
2310
2311 if (linux_ops->to_stopped_by_watchpoint == NULL)
9c02b525 2312 return 0;
ebec9a0f
PA
2313
2314 old_chain = save_inferior_ptid ();
2315 inferior_ptid = lp->ptid;
2316
9c02b525 2317 if (linux_ops->to_stopped_by_watchpoint (linux_ops))
ebec9a0f 2318 {
15c66dd6 2319 lp->stop_reason = TARGET_STOPPED_BY_WATCHPOINT;
9c02b525 2320
ebec9a0f
PA
2321 if (linux_ops->to_stopped_data_address != NULL)
2322 lp->stopped_data_address_p =
2323 linux_ops->to_stopped_data_address (&current_target,
2324 &lp->stopped_data_address);
2325 else
2326 lp->stopped_data_address_p = 0;
2327 }
2328
2329 do_cleanups (old_chain);
9c02b525 2330
15c66dd6 2331 return lp->stop_reason == TARGET_STOPPED_BY_WATCHPOINT;
9c02b525
PA
2332}
2333
9c02b525 2334/* Returns true if the LWP had stopped for a watchpoint. */
ebec9a0f
PA
2335
2336static int
6a109b6b 2337linux_nat_stopped_by_watchpoint (struct target_ops *ops)
ebec9a0f
PA
2338{
2339 struct lwp_info *lp = find_lwp_pid (inferior_ptid);
2340
2341 gdb_assert (lp != NULL);
2342
15c66dd6 2343 return lp->stop_reason == TARGET_STOPPED_BY_WATCHPOINT;
ebec9a0f
PA
2344}
2345
2346static int
2347linux_nat_stopped_data_address (struct target_ops *ops, CORE_ADDR *addr_p)
2348{
2349 struct lwp_info *lp = find_lwp_pid (inferior_ptid);
2350
2351 gdb_assert (lp != NULL);
2352
2353 *addr_p = lp->stopped_data_address;
2354
2355 return lp->stopped_data_address_p;
2356}
2357
26ab7092
JK
2358/* Commonly, any breakpoint / watchpoint generates only SIGTRAP. */
2359
2360static int
2361sigtrap_is_event (int status)
2362{
2363 return WIFSTOPPED (status) && WSTOPSIG (status) == SIGTRAP;
2364}
2365
26ab7092
JK
2366/* Set an alternative recognizer for SIGTRAP-like events. If
2367 breakpoint_inserted_here_p reports a breakpoint at the stop address,
2368 then gdbarch_decr_pc_after_break will be applied. */
2369
2370void
2371linux_nat_set_status_is_event (struct target_ops *t,
2372 int (*status_is_event) (int status))
2373{
2374 linux_nat_status_is_event = status_is_event;
2375}
2376
57380f4e
DJ
2377/* Wait until LP is stopped. */
2378
2379static int
2380stop_wait_callback (struct lwp_info *lp, void *data)
2381{
c9657e70 2382 struct inferior *inf = find_inferior_ptid (lp->ptid);
6c95b8df
PA
2383
2384 /* If this is a vfork parent, bail out, it is not going to report
2385 any SIGSTOP until the vfork is done with. */
2386 if (inf->vfork_child != NULL)
2387 return 0;
2388
d6b0e80f
AC
2389 if (!lp->stopped)
2390 {
2391 int status;
2392
2393 status = wait_lwp (lp);
2394 if (status == 0)
2395 return 0;
2396
57380f4e
DJ
2397 if (lp->ignore_sigint && WIFSTOPPED (status)
2398 && WSTOPSIG (status) == SIGINT)
d6b0e80f 2399 {
57380f4e 2400 lp->ignore_sigint = 0;
d6b0e80f
AC
2401
2402 errno = 0;
dfd4cc63 2403 ptrace (PTRACE_CONT, ptid_get_lwp (lp->ptid), 0, 0);
8817a6f2 2404 lp->stopped = 0;
d6b0e80f
AC
2405 if (debug_linux_nat)
2406 fprintf_unfiltered (gdb_stdlog,
3e43a32a
MS
2407 "PTRACE_CONT %s, 0, 0 (%s) "
2408 "(discarding SIGINT)\n",
d6b0e80f
AC
2409 target_pid_to_str (lp->ptid),
2410 errno ? safe_strerror (errno) : "OK");
2411
57380f4e 2412 return stop_wait_callback (lp, NULL);
d6b0e80f
AC
2413 }
2414
57380f4e
DJ
2415 maybe_clear_ignore_sigint (lp);
2416
d6b0e80f
AC
2417 if (WSTOPSIG (status) != SIGSTOP)
2418 {
e5ef252a 2419 /* The thread was stopped with a signal other than SIGSTOP. */
7feb7d06 2420
e5ef252a
PA
2421 if (debug_linux_nat)
2422 fprintf_unfiltered (gdb_stdlog,
2423 "SWC: Pending event %s in %s\n",
2424 status_to_str ((int) status),
2425 target_pid_to_str (lp->ptid));
2426
2427 /* Save the sigtrap event. */
2428 lp->status = status;
e5ef252a 2429 gdb_assert (lp->signalled);
e7ad2f14 2430 save_stop_reason (lp);
d6b0e80f
AC
2431 }
2432 else
2433 {
2434 /* We caught the SIGSTOP that we intended to catch, so
2435 there's no SIGSTOP pending. */
e5ef252a
PA
2436
2437 if (debug_linux_nat)
2438 fprintf_unfiltered (gdb_stdlog,
2bf6fb9d 2439 "SWC: Expected SIGSTOP caught for %s.\n",
e5ef252a
PA
2440 target_pid_to_str (lp->ptid));
2441
e5ef252a
PA
2442 /* Reset SIGNALLED only after the stop_wait_callback call
2443 above as it does gdb_assert on SIGNALLED. */
d6b0e80f
AC
2444 lp->signalled = 0;
2445 }
2446 }
2447
2448 return 0;
2449}
2450
9c02b525
PA
2451/* Return non-zero if LP has a wait status pending. Discard the
2452 pending event and resume the LWP if the event that originally
2453 caused the stop became uninteresting. */
d6b0e80f
AC
2454
2455static int
2456status_callback (struct lwp_info *lp, void *data)
2457{
2458 /* Only report a pending wait status if we pretend that this has
2459 indeed been resumed. */
ca2163eb
PA
2460 if (!lp->resumed)
2461 return 0;
2462
eb54c8bf
PA
2463 if (!lwp_status_pending_p (lp))
2464 return 0;
2465
15c66dd6
PA
2466 if (lp->stop_reason == TARGET_STOPPED_BY_SW_BREAKPOINT
2467 || lp->stop_reason == TARGET_STOPPED_BY_HW_BREAKPOINT)
9c02b525
PA
2468 {
2469 struct regcache *regcache = get_thread_regcache (lp->ptid);
9c02b525
PA
2470 CORE_ADDR pc;
2471 int discard = 0;
2472
9c02b525
PA
2473 pc = regcache_read_pc (regcache);
2474
2475 if (pc != lp->stop_pc)
2476 {
2477 if (debug_linux_nat)
2478 fprintf_unfiltered (gdb_stdlog,
2479 "SC: PC of %s changed. was=%s, now=%s\n",
2480 target_pid_to_str (lp->ptid),
2481 paddress (target_gdbarch (), lp->stop_pc),
2482 paddress (target_gdbarch (), pc));
2483 discard = 1;
2484 }
faf09f01
PA
2485
2486#if !USE_SIGTRAP_SIGINFO
9c02b525
PA
2487 else if (!breakpoint_inserted_here_p (get_regcache_aspace (regcache), pc))
2488 {
2489 if (debug_linux_nat)
2490 fprintf_unfiltered (gdb_stdlog,
2491 "SC: previous breakpoint of %s, at %s gone\n",
2492 target_pid_to_str (lp->ptid),
2493 paddress (target_gdbarch (), lp->stop_pc));
2494
2495 discard = 1;
2496 }
faf09f01 2497#endif
9c02b525
PA
2498
2499 if (discard)
2500 {
2501 if (debug_linux_nat)
2502 fprintf_unfiltered (gdb_stdlog,
2503 "SC: pending event of %s cancelled.\n",
2504 target_pid_to_str (lp->ptid));
2505
2506 lp->status = 0;
2507 linux_resume_one_lwp (lp, lp->step, GDB_SIGNAL_0);
2508 return 0;
2509 }
9c02b525
PA
2510 }
2511
eb54c8bf 2512 return 1;
d6b0e80f
AC
2513}
2514
d6b0e80f
AC
2515/* Count the LWP's that have had events. */
2516
2517static int
2518count_events_callback (struct lwp_info *lp, void *data)
2519{
9a3c8263 2520 int *count = (int *) data;
d6b0e80f
AC
2521
2522 gdb_assert (count != NULL);
2523
9c02b525
PA
2524 /* Select only resumed LWPs that have an event pending. */
2525 if (lp->resumed && lwp_status_pending_p (lp))
d6b0e80f
AC
2526 (*count)++;
2527
2528 return 0;
2529}
2530
2531/* Select the LWP (if any) that is currently being single-stepped. */
2532
2533static int
2534select_singlestep_lwp_callback (struct lwp_info *lp, void *data)
2535{
25289eb2
PA
2536 if (lp->last_resume_kind == resume_step
2537 && lp->status != 0)
d6b0e80f
AC
2538 return 1;
2539 else
2540 return 0;
2541}
2542
8a99810d
PA
2543/* Returns true if LP has a status pending. */
2544
2545static int
2546lwp_status_pending_p (struct lwp_info *lp)
2547{
2548 /* We check for lp->waitstatus in addition to lp->status, because we
2549 can have pending process exits recorded in lp->status and
2550 W_EXITCODE(0,0) happens to be 0. */
2551 return lp->status != 0 || lp->waitstatus.kind != TARGET_WAITKIND_IGNORE;
2552}
2553
b90fc188 2554/* Select the Nth LWP that has had an event. */
d6b0e80f
AC
2555
2556static int
2557select_event_lwp_callback (struct lwp_info *lp, void *data)
2558{
9a3c8263 2559 int *selector = (int *) data;
d6b0e80f
AC
2560
2561 gdb_assert (selector != NULL);
2562
9c02b525
PA
2563 /* Select only resumed LWPs that have an event pending. */
2564 if (lp->resumed && lwp_status_pending_p (lp))
d6b0e80f
AC
2565 if ((*selector)-- == 0)
2566 return 1;
2567
2568 return 0;
2569}
2570
e7ad2f14
PA
2571/* Called when the LWP stopped for a signal/trap. If it stopped for a
2572 trap check what caused it (breakpoint, watchpoint, trace, etc.),
2573 and save the result in the LWP's stop_reason field. If it stopped
2574 for a breakpoint, decrement the PC if necessary on the lwp's
2575 architecture. */
9c02b525 2576
e7ad2f14
PA
2577static void
2578save_stop_reason (struct lwp_info *lp)
710151dd 2579{
e7ad2f14
PA
2580 struct regcache *regcache;
2581 struct gdbarch *gdbarch;
515630c5 2582 CORE_ADDR pc;
9c02b525 2583 CORE_ADDR sw_bp_pc;
faf09f01
PA
2584#if USE_SIGTRAP_SIGINFO
2585 siginfo_t siginfo;
2586#endif
9c02b525 2587
e7ad2f14
PA
2588 gdb_assert (lp->stop_reason == TARGET_STOPPED_BY_NO_REASON);
2589 gdb_assert (lp->status != 0);
2590
2591 if (!linux_nat_status_is_event (lp->status))
2592 return;
2593
2594 regcache = get_thread_regcache (lp->ptid);
2595 gdbarch = get_regcache_arch (regcache);
2596
9c02b525 2597 pc = regcache_read_pc (regcache);
527a273a 2598 sw_bp_pc = pc - gdbarch_decr_pc_after_break (gdbarch);
515630c5 2599
faf09f01
PA
2600#if USE_SIGTRAP_SIGINFO
2601 if (linux_nat_get_siginfo (lp->ptid, &siginfo))
2602 {
2603 if (siginfo.si_signo == SIGTRAP)
2604 {
e7ad2f14
PA
2605 if (GDB_ARCH_IS_TRAP_BRKPT (siginfo.si_code)
2606 && GDB_ARCH_IS_TRAP_HWBKPT (siginfo.si_code))
faf09f01 2607 {
e7ad2f14
PA
2608 /* The si_code is ambiguous on this arch -- check debug
2609 registers. */
2610 if (!check_stopped_by_watchpoint (lp))
2611 lp->stop_reason = TARGET_STOPPED_BY_SW_BREAKPOINT;
2612 }
2613 else if (GDB_ARCH_IS_TRAP_BRKPT (siginfo.si_code))
2614 {
2615 /* If we determine the LWP stopped for a SW breakpoint,
2616 trust it. Particularly don't check watchpoint
2617 registers, because at least on s390, we'd find
2618 stopped-by-watchpoint as long as there's a watchpoint
2619 set. */
faf09f01 2620 lp->stop_reason = TARGET_STOPPED_BY_SW_BREAKPOINT;
faf09f01 2621 }
e7ad2f14 2622 else if (GDB_ARCH_IS_TRAP_HWBKPT (siginfo.si_code))
faf09f01 2623 {
e7ad2f14
PA
2624 /* This can indicate either a hardware breakpoint or
2625 hardware watchpoint. Check debug registers. */
2626 if (!check_stopped_by_watchpoint (lp))
2627 lp->stop_reason = TARGET_STOPPED_BY_HW_BREAKPOINT;
faf09f01 2628 }
2bf6fb9d
PA
2629 else if (siginfo.si_code == TRAP_TRACE)
2630 {
2631 if (debug_linux_nat)
2632 fprintf_unfiltered (gdb_stdlog,
2633 "CSBB: %s stopped by trace\n",
2634 target_pid_to_str (lp->ptid));
e7ad2f14
PA
2635
2636 /* We may have single stepped an instruction that
2637 triggered a watchpoint. In that case, on some
2638 architectures (such as x86), instead of TRAP_HWBKPT,
2639 si_code indicates TRAP_TRACE, and we need to check
2640 the debug registers separately. */
2641 check_stopped_by_watchpoint (lp);
2bf6fb9d 2642 }
faf09f01
PA
2643 }
2644 }
2645#else
9c02b525
PA
2646 if ((!lp->step || lp->stop_pc == sw_bp_pc)
2647 && software_breakpoint_inserted_here_p (get_regcache_aspace (regcache),
2648 sw_bp_pc))
710151dd 2649 {
9c02b525
PA
2650 /* The LWP was either continued, or stepped a software
2651 breakpoint instruction. */
e7ad2f14
PA
2652 lp->stop_reason = TARGET_STOPPED_BY_SW_BREAKPOINT;
2653 }
2654
2655 if (hardware_breakpoint_inserted_here_p (get_regcache_aspace (regcache), pc))
2656 lp->stop_reason = TARGET_STOPPED_BY_HW_BREAKPOINT;
2657
2658 if (lp->stop_reason == TARGET_STOPPED_BY_NO_REASON)
2659 check_stopped_by_watchpoint (lp);
2660#endif
2661
2662 if (lp->stop_reason == TARGET_STOPPED_BY_SW_BREAKPOINT)
2663 {
710151dd
PA
2664 if (debug_linux_nat)
2665 fprintf_unfiltered (gdb_stdlog,
2bf6fb9d 2666 "CSBB: %s stopped by software breakpoint\n",
710151dd
PA
2667 target_pid_to_str (lp->ptid));
2668
2669 /* Back up the PC if necessary. */
9c02b525
PA
2670 if (pc != sw_bp_pc)
2671 regcache_write_pc (regcache, sw_bp_pc);
515630c5 2672
e7ad2f14
PA
2673 /* Update this so we record the correct stop PC below. */
2674 pc = sw_bp_pc;
710151dd 2675 }
e7ad2f14 2676 else if (lp->stop_reason == TARGET_STOPPED_BY_HW_BREAKPOINT)
9c02b525
PA
2677 {
2678 if (debug_linux_nat)
2679 fprintf_unfiltered (gdb_stdlog,
e7ad2f14
PA
2680 "CSBB: %s stopped by hardware breakpoint\n",
2681 target_pid_to_str (lp->ptid));
2682 }
2683 else if (lp->stop_reason == TARGET_STOPPED_BY_WATCHPOINT)
2684 {
2685 if (debug_linux_nat)
2686 fprintf_unfiltered (gdb_stdlog,
2687 "CSBB: %s stopped by hardware watchpoint\n",
9c02b525 2688 target_pid_to_str (lp->ptid));
9c02b525 2689 }
d6b0e80f 2690
e7ad2f14 2691 lp->stop_pc = pc;
d6b0e80f
AC
2692}
2693
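/* Illustrative standalone sketch (not part of linux-nat.c): reading a
   stopped tracee's siginfo with PTRACE_GETSIGINFO and classifying the
   SIGTRAP by si_code, which is what the USE_SIGTRAP_SIGINFO path above
   does.  TRAP_BRKPT / TRAP_HWBKPT / TRAP_TRACE come from <signal.h>;
   exact availability depends on the kernel and libc, and the demo_
   name is made up.  The caller is assumed to have already wait()ed for
   a ptrace stop of LWPID.  */
#if 0
#define _GNU_SOURCE 1
#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <unistd.h>

static const char *
demo_classify_trap (pid_t lwpid)
{
  siginfo_t si;

  if (ptrace (PTRACE_GETSIGINFO, lwpid, 0, &si) != 0)
    return "unknown (PTRACE_GETSIGINFO failed)";
  if (si.si_signo != SIGTRAP)
    return "not a SIGTRAP";

  switch (si.si_code)
    {
    case TRAP_BRKPT:
      return "software breakpoint";
    case TRAP_HWBKPT:
      return "hardware breakpoint/watchpoint";
    case TRAP_TRACE:
      return "single-step";
    default:
      return "other SIGTRAP";
    }
}
#endif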
faf09f01
PA
2694
2695/* Returns true if the LWP had stopped for a software breakpoint. */
2696
2697static int
2698linux_nat_stopped_by_sw_breakpoint (struct target_ops *ops)
2699{
2700 struct lwp_info *lp = find_lwp_pid (inferior_ptid);
2701
2702 gdb_assert (lp != NULL);
2703
2704 return lp->stop_reason == TARGET_STOPPED_BY_SW_BREAKPOINT;
2705}
2706
2707/* Implement the supports_stopped_by_sw_breakpoint method. */
2708
2709static int
2710linux_nat_supports_stopped_by_sw_breakpoint (struct target_ops *ops)
2711{
2712 return USE_SIGTRAP_SIGINFO;
2713}
2714
2715/* Returns true if the LWP had stopped for a hardware
2716 breakpoint/watchpoint. */
2717
2718static int
2719linux_nat_stopped_by_hw_breakpoint (struct target_ops *ops)
2720{
2721 struct lwp_info *lp = find_lwp_pid (inferior_ptid);
2722
2723 gdb_assert (lp != NULL);
2724
2725 return lp->stop_reason == TARGET_STOPPED_BY_HW_BREAKPOINT;
2726}
2727
2728/* Implement the supports_stopped_by_hw_breakpoint method. */
2729
2730static int
2731linux_nat_supports_stopped_by_hw_breakpoint (struct target_ops *ops)
2732{
2733 return USE_SIGTRAP_SIGINFO;
2734}
2735
d6b0e80f
AC
2736/* Select one LWP out of those that have events pending. */
2737
2738static void
d90e17a7 2739select_event_lwp (ptid_t filter, struct lwp_info **orig_lp, int *status)
d6b0e80f
AC
2740{
2741 int num_events = 0;
2742 int random_selector;
9c02b525 2743 struct lwp_info *event_lp = NULL;
d6b0e80f 2744
ac264b3b 2745 /* Record the wait status for the original LWP. */
d6b0e80f
AC
2746 (*orig_lp)->status = *status;
2747
9c02b525
PA
2748 /* In all-stop, give preference to the LWP that is being
2749 single-stepped. There will be at most one, and it will be the
2750 LWP that the core is most interested in. If we didn't do this,
2751 then we'd have to handle pending step SIGTRAPs somehow in case
2752 the core later continues the previously-stepped thread, as
2753 otherwise we'd report the pending SIGTRAP then, and the core, not
2754 having stepped the thread, wouldn't understand what the trap was
2755 for, and therefore would report it to the user as a random
2756 signal. */
fbea99ea 2757 if (!target_is_non_stop_p ())
d6b0e80f 2758 {
9c02b525
PA
2759 event_lp = iterate_over_lwps (filter,
2760 select_singlestep_lwp_callback, NULL);
2761 if (event_lp != NULL)
2762 {
2763 if (debug_linux_nat)
2764 fprintf_unfiltered (gdb_stdlog,
2765 "SEL: Select single-step %s\n",
2766 target_pid_to_str (event_lp->ptid));
2767 }
d6b0e80f 2768 }
9c02b525
PA
2769
2770 if (event_lp == NULL)
d6b0e80f 2771 {
9c02b525 2772 /* Pick one at random, out of those which have had events. */
d6b0e80f 2773
9c02b525 2774 /* First see how many events we have. */
d90e17a7 2775 iterate_over_lwps (filter, count_events_callback, &num_events);
8bf3b159 2776 gdb_assert (num_events > 0);
d6b0e80f 2777
9c02b525
PA
2778 /* Now randomly pick a LWP out of those that have had
2779 events. */
d6b0e80f
AC
2780 random_selector = (int)
2781 ((num_events * (double) rand ()) / (RAND_MAX + 1.0));
2782
2783 if (debug_linux_nat && num_events > 1)
2784 fprintf_unfiltered (gdb_stdlog,
9c02b525 2785 "SEL: Found %d events, selecting #%d\n",
d6b0e80f
AC
2786 num_events, random_selector);
2787
d90e17a7
PA
2788 event_lp = iterate_over_lwps (filter,
2789 select_event_lwp_callback,
d6b0e80f
AC
2790 &random_selector);
2791 }
2792
2793 if (event_lp != NULL)
2794 {
2795 /* Switch the event LWP. */
2796 *orig_lp = event_lp;
2797 *status = event_lp->status;
2798 }
2799
2800 /* Flush the wait status for the event LWP. */
2801 (*orig_lp)->status = 0;
2802}
2803
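/* Illustrative standalone sketch (not part of linux-nat.c): the
   random-selector arithmetic used above.  Scaling rand () by
   num_events / (RAND_MAX + 1.0) yields an index in [0, num_events)
   with (approximately) equal probability for each slot, which is what
   keeps one busy LWP from starving the others.  */
#if 0
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  int num_events = 4;
  int counts[4] = { 0, 0, 0, 0 };

  for (int i = 0; i < 100000; i++)
    {
      int selector = (int) ((num_events * (double) rand ())
			    / (RAND_MAX + 1.0));

      counts[selector]++;
    }

  for (int i = 0; i < num_events; i++)
    printf ("slot %d picked %d times\n", i, counts[i]);
  return 0;
}
#endif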
2804/* Return non-zero if LP has been resumed. */
2805
2806static int
2807resumed_callback (struct lwp_info *lp, void *data)
2808{
2809 return lp->resumed;
2810}
2811
02f3fc28 2812/* Check if we should go on and pass this event to common code.
9c02b525 2813 Return the affected lwp if we are, or NULL otherwise. */
12d9289a 2814
02f3fc28 2815static struct lwp_info *
9c02b525 2816linux_nat_filter_event (int lwpid, int status)
02f3fc28
PA
2817{
2818 struct lwp_info *lp;
89a5711c 2819 int event = linux_ptrace_get_extended_event (status);
02f3fc28
PA
2820
2821 lp = find_lwp_pid (pid_to_ptid (lwpid));
2822
2823 /* Check for stop events reported by a process we didn't already
2824 know about - anything not already in our LWP list.
2825
2826 If we're expecting to receive stopped processes after
2827 fork, vfork, and clone events, then we'll just add the
2828 new one to our list and go back to waiting for the event
2829 to be reported - the stopped process might be returned
0e5bf2a8
PA
2830 from waitpid before or after the event is.
2831
2832 But note the case of a non-leader thread exec'ing after the
2833 leader has exited and been removed from our lists. The non-leader
2834 thread changes its tid to the tgid. */
2835
2836 if (WIFSTOPPED (status) && lp == NULL
89a5711c 2837 && (WSTOPSIG (status) == SIGTRAP && event == PTRACE_EVENT_EXEC))
0e5bf2a8
PA
2838 {
2839 /* A multi-thread exec after we had seen the leader exiting. */
2840 if (debug_linux_nat)
2841 fprintf_unfiltered (gdb_stdlog,
2842 "LLW: Re-adding thread group leader LWP %d.\n",
2843 lwpid);
2844
dfd4cc63 2845 lp = add_lwp (ptid_build (lwpid, lwpid, 0));
0e5bf2a8
PA
2846 lp->stopped = 1;
2847 lp->resumed = 1;
2848 add_thread (lp->ptid);
2849 }
2850
02f3fc28
PA
2851 if (WIFSTOPPED (status) && !lp)
2852 {
3b27ef47
PA
2853 if (debug_linux_nat)
2854 fprintf_unfiltered (gdb_stdlog,
2855 "LHEW: saving LWP %ld status %s in stopped_pids list\n",
2856 (long) lwpid, status_to_str (status));
84636d28 2857 add_to_pid_list (&stopped_pids, lwpid, status);
02f3fc28
PA
2858 return NULL;
2859 }
2860
2861 /* Make sure we don't report an event for the exit of an LWP not in
1777feb0 2862 our list, i.e. not part of the current process. This can happen
fd62cb89 2863 if we detach from a program we originally forked and then it
02f3fc28
PA
2864 exits. */
2865 if (!WIFSTOPPED (status) && !lp)
2866 return NULL;
2867
8817a6f2
PA
2868 /* This LWP is stopped now. (And if dead, this prevents it from
2869 ever being continued.) */
2870 lp->stopped = 1;
2871
8784d563
PA
2872 if (WIFSTOPPED (status) && lp->must_set_ptrace_flags)
2873 {
2874 struct inferior *inf = find_inferior_pid (ptid_get_pid (lp->ptid));
de0d863e 2875 int options = linux_nat_ptrace_options (inf->attach_flag);
8784d563 2876
de0d863e 2877 linux_enable_event_reporting (ptid_get_lwp (lp->ptid), options);
8784d563
PA
2878 lp->must_set_ptrace_flags = 0;
2879 }
2880
ca2163eb
PA
2881 /* Handle GNU/Linux's syscall SIGTRAPs. */
2882 if (WIFSTOPPED (status) && WSTOPSIG (status) == SYSCALL_SIGTRAP)
2883 {
2884 /* No longer need the sysgood bit. The ptrace event ends up
2885 recorded in lp->waitstatus if we care for it. We can carry
2886 on handling the event like a regular SIGTRAP from here
2887 on. */
2888 status = W_STOPCODE (SIGTRAP);
2889 if (linux_handle_syscall_trap (lp, 0))
2890 return NULL;
2891 }
bfd09d20
JS
2892 else
2893 {
2894 /* Almost all other ptrace-stops are known to be outside of system
2895 calls, with further exceptions in linux_handle_extended_wait. */
2896 lp->syscall_state = TARGET_WAITKIND_IGNORE;
2897 }
02f3fc28 2898
ca2163eb 2899 /* Handle GNU/Linux's extended waitstatus for trace events. */
89a5711c
DB
2900 if (WIFSTOPPED (status) && WSTOPSIG (status) == SIGTRAP
2901 && linux_is_extended_waitstatus (status))
02f3fc28
PA
2902 {
2903 if (debug_linux_nat)
2904 fprintf_unfiltered (gdb_stdlog,
2905 "LLW: Handling extended status 0x%06x\n",
2906 status);
4dd63d48 2907 if (linux_handle_extended_wait (lp, status))
02f3fc28
PA
2908 return NULL;
2909 }
2910
2911 /* Check if the thread has exited. */
9c02b525
PA
2912 if (WIFEXITED (status) || WIFSIGNALED (status))
2913 {
aa01bd36
PA
2914 if (!report_thread_events
2915 && num_lwps (ptid_get_pid (lp->ptid)) > 1)
02f3fc28 2916 {
9c02b525
PA
2917 if (debug_linux_nat)
2918 fprintf_unfiltered (gdb_stdlog,
2919 "LLW: %s exited.\n",
2920 target_pid_to_str (lp->ptid));
2921
4a6ed09b
PA
2922 /* If there is at least one more LWP, then the exit signal
2923 was not the end of the debugged application and should be
2924 ignored. */
2925 exit_lwp (lp);
2926 return NULL;
02f3fc28
PA
2927 }
2928
77598427
PA
2929 /* Note that even if the leader was ptrace-stopped, it can still
2930 exit, if e.g., some other thread brings down the whole
2931 process (calls `exit'). So don't assert that the lwp is
2932 resumed. */
02f3fc28
PA
2933 if (debug_linux_nat)
2934 fprintf_unfiltered (gdb_stdlog,
aa01bd36 2935 "LWP %ld exited (resumed=%d)\n",
77598427 2936 ptid_get_lwp (lp->ptid), lp->resumed);
02f3fc28 2937
9c02b525
PA
2938 /* Dead LWPs aren't expected to report a pending SIGSTOP. */
2939 lp->signalled = 0;
2940
2941 /* Store the pending event in the waitstatus, because
2942 W_EXITCODE(0,0) == 0. */
2943 store_waitstatus (&lp->waitstatus, status);
2944 return lp;
02f3fc28
PA
2945 }
2946
02f3fc28
PA
2947 /* Make sure we don't report a SIGSTOP that we sent ourselves in
2948 an attempt to stop an LWP. */
2949 if (lp->signalled
2950 && WIFSTOPPED (status) && WSTOPSIG (status) == SIGSTOP)
2951 {
02f3fc28
PA
2952 lp->signalled = 0;
2953
2bf6fb9d 2954 if (lp->last_resume_kind == resume_stop)
25289eb2 2955 {
2bf6fb9d
PA
2956 if (debug_linux_nat)
2957 fprintf_unfiltered (gdb_stdlog,
2958 "LLW: resume_stop SIGSTOP caught for %s.\n",
2959 target_pid_to_str (lp->ptid));
2960 }
2961 else
2962 {
2963 /* This is a delayed SIGSTOP. Filter out the event. */
02f3fc28 2964
25289eb2
PA
2965 if (debug_linux_nat)
2966 fprintf_unfiltered (gdb_stdlog,
2bf6fb9d 2967 "LLW: %s %s, 0, 0 (discard delayed SIGSTOP)\n",
25289eb2
PA
2968 lp->step ?
2969 "PTRACE_SINGLESTEP" : "PTRACE_CONT",
2970 target_pid_to_str (lp->ptid));
02f3fc28 2971
2bf6fb9d 2972 linux_resume_one_lwp (lp, lp->step, GDB_SIGNAL_0);
25289eb2 2973 gdb_assert (lp->resumed);
25289eb2
PA
2974 return NULL;
2975 }
02f3fc28
PA
2976 }
2977
57380f4e
DJ
2978 /* Make sure we don't report a SIGINT that we have already displayed
2979 for another thread. */
2980 if (lp->ignore_sigint
2981 && WIFSTOPPED (status) && WSTOPSIG (status) == SIGINT)
2982 {
2983 if (debug_linux_nat)
2984 fprintf_unfiltered (gdb_stdlog,
2985 "LLW: Delayed SIGINT caught for %s.\n",
2986 target_pid_to_str (lp->ptid));
2987
2988 /* This is a delayed SIGINT. */
2989 lp->ignore_sigint = 0;
2990
8a99810d 2991 linux_resume_one_lwp (lp, lp->step, GDB_SIGNAL_0);
57380f4e
DJ
2992 if (debug_linux_nat)
2993 fprintf_unfiltered (gdb_stdlog,
2994 "LLW: %s %s, 0, 0 (discard SIGINT)\n",
2995 lp->step ?
2996 "PTRACE_SINGLESTEP" : "PTRACE_CONT",
2997 target_pid_to_str (lp->ptid));
57380f4e
DJ
2998 gdb_assert (lp->resumed);
2999
3000 /* Discard the event. */
3001 return NULL;
3002 }
3003
9c02b525
PA
3004 /* Don't report signals that GDB isn't interested in, such as
3005 signals that are neither printed nor stopped upon. Stopping all
3006 threads can be a bit time-consuming so if we want decent
3007 performance with heavily multi-threaded programs, especially when
3008 they're using a high frequency timer, we'd better avoid it if we
3009 can. */
3010 if (WIFSTOPPED (status))
3011 {
3012 enum gdb_signal signo = gdb_signal_from_host (WSTOPSIG (status));
3013
fbea99ea 3014 if (!target_is_non_stop_p ())
9c02b525
PA
3015 {
3016 /* Only do the below in all-stop, as we currently use SIGSTOP
3017 to implement target_stop (see linux_nat_stop) in
3018 non-stop. */
3019 if (signo == GDB_SIGNAL_INT && signal_pass_state (signo) == 0)
3020 {
3021 /* If ^C/BREAK is typed at the tty/console, SIGINT gets
3022 forwarded to the entire process group, that is, all LWPs
3023 will receive it - unless they're using CLONE_THREAD to
3024 share signals. Since we only want to report it once, we
3025 mark it as ignored for all LWPs except this one. */
3026 iterate_over_lwps (pid_to_ptid (ptid_get_pid (lp->ptid)),
3027 set_ignore_sigint, NULL);
3028 lp->ignore_sigint = 0;
3029 }
3030 else
3031 maybe_clear_ignore_sigint (lp);
3032 }
3033
3034 /* When using hardware single-step, we need to report every signal.
c9587f88
AT
3035 Otherwise, signals in pass_mask may be short-circuited,
3036 except for signals that might be caused by a breakpoint. */
9c02b525 3037 if (!lp->step
c9587f88
AT
3038 && WSTOPSIG (status) && sigismember (&pass_mask, WSTOPSIG (status))
3039 && !linux_wstatus_maybe_breakpoint (status))
9c02b525
PA
3040 {
3041 linux_resume_one_lwp (lp, lp->step, signo);
3042 if (debug_linux_nat)
3043 fprintf_unfiltered (gdb_stdlog,
3044 "LLW: %s %s, %s (preempt 'handle')\n",
3045 lp->step ?
3046 "PTRACE_SINGLESTEP" : "PTRACE_CONT",
3047 target_pid_to_str (lp->ptid),
3048 (signo != GDB_SIGNAL_0
3049 ? strsignal (gdb_signal_to_host (signo))
3050 : "0"));
3051 return NULL;
3052 }
3053 }
3054
02f3fc28
PA
3055 /* An interesting event. */
3056 gdb_assert (lp);
ca2163eb 3057 lp->status = status;
e7ad2f14 3058 save_stop_reason (lp);
02f3fc28
PA
3059 return lp;
3060}
3061
0e5bf2a8
PA
3062/* Detect zombie thread group leaders, and "exit" them. We can't reap
3063 their exits until all other threads in the group have exited. */
3064
3065static void
3066check_zombie_leaders (void)
3067{
3068 struct inferior *inf;
3069
3070 ALL_INFERIORS (inf)
3071 {
3072 struct lwp_info *leader_lp;
3073
3074 if (inf->pid == 0)
3075 continue;
3076
3077 leader_lp = find_lwp_pid (pid_to_ptid (inf->pid));
3078 if (leader_lp != NULL
3079 /* Check if there are other threads in the group, as we may
3080 have raced with the inferior simply exiting. */
3081 && num_lwps (inf->pid) > 1
5f572dec 3082 && linux_proc_pid_is_zombie (inf->pid))
0e5bf2a8
PA
3083 {
3084 if (debug_linux_nat)
3085 fprintf_unfiltered (gdb_stdlog,
3086 "CZL: Thread group leader %d zombie "
3087 "(it exited, or another thread execd).\n",
3088 inf->pid);
3089
3090 /* A leader zombie can mean one of two things:
3091
3092 - It exited, and there's an exit status pending
3093 available, or only the leader exited (not the whole
3094 program). In the latter case, we can't waitpid the
3095 leader's exit status until all other threads are gone.
3096
3097 - There are 3 or more threads in the group, and a thread
4a6ed09b
PA
3098 other than the leader exec'd. See comments on exec
3099 events at the top of the file. We could try
0e5bf2a8
PA
3100 distinguishing the exit and exec cases, by waiting once
3101 more, and seeing if something comes out, but it doesn't
3102 sound useful. The previous leader _does_ go away, and
3103 we'll re-add the new one once we see the exec event
3104 (which is just the same as what would happen if the
3105 previous leader did exit voluntarily before some other
3106 thread execs). */
3107
3108 if (debug_linux_nat)
3109 fprintf_unfiltered (gdb_stdlog,
3110 "CZL: Thread group leader %d vanished.\n",
3111 inf->pid);
3112 exit_lwp (leader_lp);
3113 }
3114 }
3115}
3116
aa01bd36
PA
3117/* Convenience function that is called when the kernel reports an exit
3118 event. This decides whether to report the event to GDB as a
3119 process exit event, a thread exit event, or to suppress the
3120 event. */
3121
3122static ptid_t
3123filter_exit_event (struct lwp_info *event_child,
3124 struct target_waitstatus *ourstatus)
3125{
3126 ptid_t ptid = event_child->ptid;
3127
3128 if (num_lwps (ptid_get_pid (ptid)) > 1)
3129 {
3130 if (report_thread_events)
3131 ourstatus->kind = TARGET_WAITKIND_THREAD_EXITED;
3132 else
3133 ourstatus->kind = TARGET_WAITKIND_IGNORE;
3134
3135 exit_lwp (event_child);
3136 }
3137
3138 return ptid;
3139}
3140
d6b0e80f 3141static ptid_t
7feb7d06 3142linux_nat_wait_1 (struct target_ops *ops,
47608cb1
PA
3143 ptid_t ptid, struct target_waitstatus *ourstatus,
3144 int target_options)
d6b0e80f 3145{
fc9b8e47 3146 sigset_t prev_mask;
4b60df3d 3147 enum resume_kind last_resume_kind;
12d9289a 3148 struct lwp_info *lp;
12d9289a 3149 int status;
d6b0e80f 3150
01124a23 3151 if (debug_linux_nat)
b84876c2
PA
3152 fprintf_unfiltered (gdb_stdlog, "LLW: enter\n");
3153
f973ed9c
DJ
3154 /* The first time we get here after starting a new inferior, we may
3155 not have added it to the LWP list yet - this is the earliest
3156 moment at which we know its PID. */
d90e17a7 3157 if (ptid_is_pid (inferior_ptid))
f973ed9c 3158 {
27c9d204
PA
3159 /* Upgrade the main thread's ptid. */
3160 thread_change_ptid (inferior_ptid,
dfd4cc63
LM
3161 ptid_build (ptid_get_pid (inferior_ptid),
3162 ptid_get_pid (inferior_ptid), 0));
27c9d204 3163
26cb8b7c 3164 lp = add_initial_lwp (inferior_ptid);
f973ed9c
DJ
3165 lp->resumed = 1;
3166 }
3167
12696c10 3168 /* Make sure SIGCHLD is blocked until the sigsuspend below. */
7feb7d06 3169 block_child_signals (&prev_mask);
d6b0e80f 3170
d6b0e80f 3171 /* First check if there is a LWP with a wait status pending. */
8a99810d
PA
3172 lp = iterate_over_lwps (ptid, status_callback, NULL);
3173 if (lp != NULL)
d6b0e80f
AC
3174 {
3175 if (debug_linux_nat)
d6b0e80f
AC
3176 fprintf_unfiltered (gdb_stdlog,
3177 "LLW: Using pending wait status %s for %s.\n",
ca2163eb 3178 status_to_str (lp->status),
d6b0e80f 3179 target_pid_to_str (lp->ptid));
d6b0e80f
AC
3180 }
3181
9c02b525
PA
3182 /* But if we don't find a pending event, we'll have to wait. Always
3183 pull all events out of the kernel. We'll randomly select an
3184 event LWP out of all that have events, to prevent starvation. */
7feb7d06 3185
d90e17a7 3186 while (lp == NULL)
d6b0e80f
AC
3187 {
3188 pid_t lwpid;
3189
0e5bf2a8
PA
3190 /* Always use -1 and WNOHANG, due to a couple of kernel/ptrace
3191 quirks:
3192
3193 - If the thread group leader exits while other threads in the
3194 thread group still exist, waitpid(TGID, ...) hangs. That
3195 waitpid won't return an exit status until the other threads
3196 in the group are reaped.
3197
3198 - When a non-leader thread execs, that thread just vanishes
3199 without reporting an exit (so we'd hang if we waited for it
3200 explicitly in that case). The exec event is reported to
3201 the TGID pid. */
3202
3203 errno = 0;
4a6ed09b 3204 lwpid = my_waitpid (-1, &status, __WALL | WNOHANG);
0e5bf2a8
PA
3205
3206 if (debug_linux_nat)
3207 fprintf_unfiltered (gdb_stdlog,
3208 "LNW: waitpid(-1, ...) returned %d, %s\n",
3209 lwpid, errno ? safe_strerror (errno) : "ERRNO-OK");
b84876c2 3210
d6b0e80f
AC
3211 if (lwpid > 0)
3212 {
d6b0e80f
AC
3213 if (debug_linux_nat)
3214 {
3215 fprintf_unfiltered (gdb_stdlog,
3216 "LLW: waitpid %ld received %s\n",
3217 (long) lwpid, status_to_str (status));
3218 }
3219
9c02b525 3220 linux_nat_filter_event (lwpid, status);
0e5bf2a8
PA
3221 /* Retry until nothing comes out of waitpid. A single
3222 SIGCHLD can indicate more than one child stopped. */
3223 continue;
d6b0e80f
AC
3224 }
3225
20ba1ce6
PA
3226 /* Now that we've pulled all events out of the kernel, resume
3227 LWPs that don't have an interesting event to report. */
3228 iterate_over_lwps (minus_one_ptid,
3229 resume_stopped_resumed_lwps, &minus_one_ptid);
3230
3231 /* ... and find an LWP with a status to report to the core, if
3232 any. */
9c02b525
PA
3233 lp = iterate_over_lwps (ptid, status_callback, NULL);
3234 if (lp != NULL)
3235 break;
3236
0e5bf2a8
PA
3237 /* Check for zombie thread group leaders. Those can't be reaped
3238 until all other threads in the thread group are. */
3239 check_zombie_leaders ();
d6b0e80f 3240
0e5bf2a8
PA
3241 /* If there are no resumed children left, bail. We'd be stuck
3242 forever in the sigsuspend call below otherwise. */
3243 if (iterate_over_lwps (ptid, resumed_callback, NULL) == NULL)
3244 {
3245 if (debug_linux_nat)
3246 fprintf_unfiltered (gdb_stdlog, "LLW: exit (no resumed LWP)\n");
b84876c2 3247
0e5bf2a8 3248 ourstatus->kind = TARGET_WAITKIND_NO_RESUMED;
b84876c2 3249
0e5bf2a8
PA
3250 restore_child_signals_mask (&prev_mask);
3251 return minus_one_ptid;
d6b0e80f 3252 }
28736962 3253
0e5bf2a8
PA
3254 /* No interesting event to report to the core. */
3255
3256 if (target_options & TARGET_WNOHANG)
3257 {
01124a23 3258 if (debug_linux_nat)
28736962
PA
3259 fprintf_unfiltered (gdb_stdlog, "LLW: exit (ignore)\n");
3260
0e5bf2a8 3261 ourstatus->kind = TARGET_WAITKIND_IGNORE;
28736962
PA
3262 restore_child_signals_mask (&prev_mask);
3263 return minus_one_ptid;
3264 }
d6b0e80f
AC
3265
3266 /* We shouldn't end up here unless we want to try again. */
d90e17a7 3267 gdb_assert (lp == NULL);
0e5bf2a8
PA
3268
3269 /* Block until we get an event reported with SIGCHLD. */
d36bf488
DE
3270 if (debug_linux_nat)
3271 fprintf_unfiltered (gdb_stdlog, "LNW: about to sigsuspend\n");
0e5bf2a8 3272 sigsuspend (&suspend_mask);
d6b0e80f
AC
3273 }
3274
d6b0e80f
AC
3275 gdb_assert (lp);
3276
ca2163eb
PA
3277 status = lp->status;
3278 lp->status = 0;
3279
fbea99ea 3280 if (!target_is_non_stop_p ())
4c28f408
PA
3281 {
3282 /* Now stop all other LWP's ... */
d90e17a7 3283 iterate_over_lwps (minus_one_ptid, stop_callback, NULL);
4c28f408
PA
3284
3285 /* ... and wait until all of them have reported back that
3286 they're no longer running. */
d90e17a7 3287 iterate_over_lwps (minus_one_ptid, stop_wait_callback, NULL);
9c02b525
PA
3288 }
3289
3290 /* If we're not waiting for a specific LWP, choose an event LWP from
3291 among those that have had events. Giving equal priority to all
3292 LWPs that have had events helps prevent starvation. */
3293 if (ptid_equal (ptid, minus_one_ptid) || ptid_is_pid (ptid))
3294 select_event_lwp (ptid, &lp, &status);
3295
3296 gdb_assert (lp != NULL);
3297
3298 /* Now that we've selected our final event LWP, un-adjust its PC if
faf09f01
PA
3299 it was a software breakpoint, and we can't reliably support the
3300 "stopped by software breakpoint" stop reason. */
3301 if (lp->stop_reason == TARGET_STOPPED_BY_SW_BREAKPOINT
3302 && !USE_SIGTRAP_SIGINFO)
9c02b525
PA
3303 {
3304 struct regcache *regcache = get_thread_regcache (lp->ptid);
3305 struct gdbarch *gdbarch = get_regcache_arch (regcache);
527a273a 3306 int decr_pc = gdbarch_decr_pc_after_break (gdbarch);
4c28f408 3307
9c02b525
PA
3308 if (decr_pc != 0)
3309 {
3310 CORE_ADDR pc;
d6b0e80f 3311
9c02b525
PA
3312 pc = regcache_read_pc (regcache);
3313 regcache_write_pc (regcache, pc + decr_pc);
3314 }
3315 }
e3e9f5a2 3316
9c02b525
PA
3317 /* We'll need this to determine whether to report a SIGSTOP as
3318 GDB_SIGNAL_0. Need to take a copy because resume_clear_callback
3319 clears it. */
3320 last_resume_kind = lp->last_resume_kind;
4b60df3d 3321
fbea99ea 3322 if (!target_is_non_stop_p ())
9c02b525 3323 {
e3e9f5a2
PA
3324 /* In all-stop, from the core's perspective, all LWPs are now
3325 stopped until a new resume action is sent over. */
3326 iterate_over_lwps (minus_one_ptid, resume_clear_callback, NULL);
3327 }
3328 else
25289eb2 3329 {
4b60df3d 3330 resume_clear_callback (lp, NULL);
25289eb2 3331 }
d6b0e80f 3332
26ab7092 3333 if (linux_nat_status_is_event (status))
d6b0e80f 3334 {
d6b0e80f
AC
3335 if (debug_linux_nat)
3336 fprintf_unfiltered (gdb_stdlog,
4fdebdd0
PA
3337 "LLW: trap ptid is %s.\n",
3338 target_pid_to_str (lp->ptid));
d6b0e80f 3339 }
d6b0e80f
AC
3340
3341 if (lp->waitstatus.kind != TARGET_WAITKIND_IGNORE)
3342 {
3343 *ourstatus = lp->waitstatus;
3344 lp->waitstatus.kind = TARGET_WAITKIND_IGNORE;
3345 }
3346 else
3347 store_waitstatus (ourstatus, status);
3348
01124a23 3349 if (debug_linux_nat)
b84876c2
PA
3350 fprintf_unfiltered (gdb_stdlog, "LLW: exit\n");
3351
7feb7d06 3352 restore_child_signals_mask (&prev_mask);
1e225492 3353
4b60df3d 3354 if (last_resume_kind == resume_stop
25289eb2
PA
3355 && ourstatus->kind == TARGET_WAITKIND_STOPPED
3356 && WSTOPSIG (status) == SIGSTOP)
3357 {
3358 /* The thread was requested to stop by GDB with
3359 target_stop, and it stopped cleanly, so report it as SIG0. The
3360 use of SIGSTOP is an implementation detail. */
a493e3e2 3361 ourstatus->value.sig = GDB_SIGNAL_0;
25289eb2
PA
3362 }
3363
1e225492
JK
3364 if (ourstatus->kind == TARGET_WAITKIND_EXITED
3365 || ourstatus->kind == TARGET_WAITKIND_SIGNALLED)
3366 lp->core = -1;
3367 else
2e794194 3368 lp->core = linux_common_core_of_thread (lp->ptid);
1e225492 3369
aa01bd36
PA
3370 if (ourstatus->kind == TARGET_WAITKIND_EXITED)
3371 return filter_exit_event (lp, ourstatus);
3372
f973ed9c 3373 return lp->ptid;
d6b0e80f
AC
3374}
3375
e3e9f5a2
PA
3376/* Resume LWPs that are currently stopped without any pending status
3377 to report, but are resumed from the core's perspective. */
3378
3379static int
3380resume_stopped_resumed_lwps (struct lwp_info *lp, void *data)
3381{
9a3c8263 3382 ptid_t *wait_ptid_p = (ptid_t *) data;
e3e9f5a2 3383
4dd63d48
PA
3384 if (!lp->stopped)
3385 {
3386 if (debug_linux_nat)
3387 fprintf_unfiltered (gdb_stdlog,
3388 "RSRL: NOT resuming LWP %s, not stopped\n",
3389 target_pid_to_str (lp->ptid));
3390 }
3391 else if (!lp->resumed)
3392 {
3393 if (debug_linux_nat)
3394 fprintf_unfiltered (gdb_stdlog,
3395 "RSRL: NOT resuming LWP %s, not resumed\n",
3396 target_pid_to_str (lp->ptid));
3397 }
3398 else if (lwp_status_pending_p (lp))
3399 {
3400 if (debug_linux_nat)
3401 fprintf_unfiltered (gdb_stdlog,
3402 "RSRL: NOT resuming LWP %s, has pending status\n",
3403 target_pid_to_str (lp->ptid));
3404 }
3405 else
e3e9f5a2 3406 {
336060f3
PA
3407 struct regcache *regcache = get_thread_regcache (lp->ptid);
3408 struct gdbarch *gdbarch = get_regcache_arch (regcache);
336060f3 3409
23f238d3 3410 TRY
e3e9f5a2 3411 {
23f238d3
PA
3412 CORE_ADDR pc = regcache_read_pc (regcache);
3413 int leave_stopped = 0;
e3e9f5a2 3414
23f238d3
PA
3415 /* Don't bother if there's a breakpoint at PC that we'd hit
3416 immediately, and we're not waiting for this LWP. */
3417 if (!ptid_match (lp->ptid, *wait_ptid_p))
3418 {
3419 if (breakpoint_inserted_here_p (get_regcache_aspace (regcache), pc))
3420 leave_stopped = 1;
3421 }
e3e9f5a2 3422
23f238d3
PA
3423 if (!leave_stopped)
3424 {
3425 if (debug_linux_nat)
3426 fprintf_unfiltered (gdb_stdlog,
3427 "RSRL: resuming stopped-resumed LWP %s at "
3428 "%s: step=%d\n",
3429 target_pid_to_str (lp->ptid),
3430 paddress (gdbarch, pc),
3431 lp->step);
3432
3433 linux_resume_one_lwp_throw (lp, lp->step, GDB_SIGNAL_0);
3434 }
3435 }
3436 CATCH (ex, RETURN_MASK_ERROR)
3437 {
3438 if (!check_ptrace_stopped_lwp_gone (lp))
3439 throw_exception (ex);
3440 }
3441 END_CATCH
e3e9f5a2
PA
3442 }
3443
3444 return 0;
3445}
3446
7feb7d06
PA
3447static ptid_t
3448linux_nat_wait (struct target_ops *ops,
47608cb1
PA
3449 ptid_t ptid, struct target_waitstatus *ourstatus,
3450 int target_options)
7feb7d06
PA
3451{
3452 ptid_t event_ptid;
3453
3454 if (debug_linux_nat)
09826ec5
PA
3455 {
3456 char *options_string;
3457
3458 options_string = target_options_to_string (target_options);
3459 fprintf_unfiltered (gdb_stdlog,
3460 "linux_nat_wait: [%s], [%s]\n",
3461 target_pid_to_str (ptid),
3462 options_string);
3463 xfree (options_string);
3464 }
7feb7d06
PA
3465
3466 /* Flush the async file first. */
d9d41e78 3467 if (target_is_async_p ())
7feb7d06
PA
3468 async_file_flush ();
3469
e3e9f5a2
PA
3470 /* Resume LWPs that are currently stopped without any pending status
3471 to report, but are resumed from the core's perspective. LWPs get
3472 in this state if we find them stopping at a time we're not
3473 interested in reporting the event (target_wait on a
3474 specific process, for example; see linux_nat_wait_1), and
3475 meanwhile the event became uninteresting. Don't bother resuming
3476 LWPs we're not going to wait for if they'd stop immediately. */
fbea99ea 3477 if (target_is_non_stop_p ())
e3e9f5a2
PA
3478 iterate_over_lwps (minus_one_ptid, resume_stopped_resumed_lwps, &ptid);
3479
47608cb1 3480 event_ptid = linux_nat_wait_1 (ops, ptid, ourstatus, target_options);
7feb7d06
PA
3481
3482 /* If we requested any event, and something came out, assume there
3483 may be more. If we requested a specific lwp or process, also
3484 assume there may be more. */
d9d41e78 3485 if (target_is_async_p ()
6953d224
PA
3486 && ((ourstatus->kind != TARGET_WAITKIND_IGNORE
3487 && ourstatus->kind != TARGET_WAITKIND_NO_RESUMED)
7feb7d06
PA
3488 || !ptid_equal (ptid, minus_one_ptid)))
3489 async_file_mark ();
3490
7feb7d06
PA
3491 return event_ptid;
3492}
3493
1d2736d4
PA
3494/* Kill one LWP. */
3495
3496static void
3497kill_one_lwp (pid_t pid)
d6b0e80f 3498{
ed731959
JK
3499 /* PTRACE_KILL may resume the inferior. Send SIGKILL first. */
3500
3501 errno = 0;
1d2736d4 3502 kill_lwp (pid, SIGKILL);
ed731959 3503 if (debug_linux_nat)
57745c90
PA
3504 {
3505 int save_errno = errno;
3506
3507 fprintf_unfiltered (gdb_stdlog,
1d2736d4 3508 "KC: kill (SIGKILL) %ld, 0, 0 (%s)\n", (long) pid,
57745c90
PA
3509 save_errno ? safe_strerror (save_errno) : "OK");
3510 }
ed731959
JK
3511
3512 /* Some kernels ignore even SIGKILL for processes under ptrace. */
3513
d6b0e80f 3514 errno = 0;
1d2736d4 3515 ptrace (PTRACE_KILL, pid, 0, 0);
d6b0e80f 3516 if (debug_linux_nat)
57745c90
PA
3517 {
3518 int save_errno = errno;
3519
3520 fprintf_unfiltered (gdb_stdlog,
1d2736d4 3521 "KC: PTRACE_KILL %ld, 0, 0 (%s)\n", (long) pid,
57745c90
PA
3522 save_errno ? safe_strerror (save_errno) : "OK");
3523 }
d6b0e80f
AC
3524}
3525
1d2736d4
PA
3526/* Wait for an LWP to die. */
3527
3528static void
3529kill_wait_one_lwp (pid_t pid)
d6b0e80f 3530{
1d2736d4 3531 pid_t res;
d6b0e80f
AC
3532
3533 /* We must make sure that there are no pending events (delayed
3534 SIGSTOPs, pending SIGTRAPs, etc.) so that the current
3535 program doesn't interfere with any following debugging session. */
3536
d6b0e80f
AC
3537 do
3538 {
1d2736d4
PA
3539 res = my_waitpid (pid, NULL, __WALL);
3540 if (res != (pid_t) -1)
d6b0e80f 3541 {
e85a822c
DJ
3542 if (debug_linux_nat)
3543 fprintf_unfiltered (gdb_stdlog,
1d2736d4
PA
3544 "KWC: wait %ld received unknown.\n",
3545 (long) pid);
4a6ed09b
PA
3546 /* The Linux kernel sometimes fails to kill a thread
3547 completely after PTRACE_KILL; the thread goes from the stop
3548 point in do_fork out to the one in get_signal_to_deliver
3549 and waits again. So kill it again. */
1d2736d4 3550 kill_one_lwp (pid);
d6b0e80f
AC
3551 }
3552 }
1d2736d4
PA
3553 while (res == pid);
3554
3555 gdb_assert (res == -1 && errno == ECHILD);
3556}
3557
3558/* Callback for iterate_over_lwps. */
d6b0e80f 3559
1d2736d4
PA
3560static int
3561kill_callback (struct lwp_info *lp, void *data)
3562{
3563 kill_one_lwp (ptid_get_lwp (lp->ptid));
d6b0e80f
AC
3564 return 0;
3565}
3566
1d2736d4
PA
3567/* Callback for iterate_over_lwps. */
3568
3569static int
3570kill_wait_callback (struct lwp_info *lp, void *data)
3571{
3572 kill_wait_one_lwp (ptid_get_lwp (lp->ptid));
3573 return 0;
3574}
3575
3576/* Kill the fork children of any threads of inferior INF that are
3577 stopped at a fork event. */
3578
3579static void
3580kill_unfollowed_fork_children (struct inferior *inf)
3581{
3582 struct thread_info *thread;
3583
3584 ALL_NON_EXITED_THREADS (thread)
3585 if (thread->inf == inf)
3586 {
3587 struct target_waitstatus *ws = &thread->pending_follow;
3588
3589 if (ws->kind == TARGET_WAITKIND_FORKED
3590 || ws->kind == TARGET_WAITKIND_VFORKED)
3591 {
3592 ptid_t child_ptid = ws->value.related_pid;
3593 int child_pid = ptid_get_pid (child_ptid);
3594 int child_lwp = ptid_get_lwp (child_ptid);
1d2736d4
PA
3595
3596 kill_one_lwp (child_lwp);
3597 kill_wait_one_lwp (child_lwp);
3598
3599 /* Let the arch-specific native code know this process is
3600 gone. */
3601 linux_nat_forget_process (child_pid);
3602 }
3603 }
3604}
3605
d6b0e80f 3606static void
7d85a9c0 3607linux_nat_kill (struct target_ops *ops)
d6b0e80f 3608{
f973ed9c
DJ
3609 /* If we're stopped while forking and we haven't followed yet,
3610 kill the other task. We need to do this first because the
3611 parent will be sleeping if this is a vfork. */
1d2736d4 3612 kill_unfollowed_fork_children (current_inferior ());
f973ed9c
DJ
3613
3614 if (forks_exist_p ())
7feb7d06 3615 linux_fork_killall ();
f973ed9c
DJ
3616 else
3617 {
d90e17a7 3618 ptid_t ptid = pid_to_ptid (ptid_get_pid (inferior_ptid));
e0881a8e 3619
4c28f408
PA
3620 /* Stop all threads before killing them, since ptrace requires
3621 that the thread is stopped to successfully PTRACE_KILL. */
d90e17a7 3622 iterate_over_lwps (ptid, stop_callback, NULL);
4c28f408
PA
3623 /* ... and wait until all of them have reported back that
3624 they're no longer running. */
d90e17a7 3625 iterate_over_lwps (ptid, stop_wait_callback, NULL);
4c28f408 3626
f973ed9c 3627 /* Kill all LWP's ... */
d90e17a7 3628 iterate_over_lwps (ptid, kill_callback, NULL);
f973ed9c
DJ
3629
3630 /* ... and wait until we've flushed all events. */
d90e17a7 3631 iterate_over_lwps (ptid, kill_wait_callback, NULL);
f973ed9c
DJ
3632 }
3633
3634 target_mourn_inferior ();
d6b0e80f
AC
3635}
3636
3637static void
136d6dae 3638linux_nat_mourn_inferior (struct target_ops *ops)
d6b0e80f 3639{
26cb8b7c
PA
3640 int pid = ptid_get_pid (inferior_ptid);
3641
3642 purge_lwp_list (pid);
d6b0e80f 3643
f973ed9c 3644 if (! forks_exist_p ())
d90e17a7
PA
3645 /* Normal case, no other forks available. */
3646 linux_ops->to_mourn_inferior (ops);
f973ed9c
DJ
3647 else
3648 /* Multi-fork case. The current inferior_ptid has exited, but
3649 there are other viable forks to debug. Delete the exiting
3650 one and context-switch to the first available. */
3651 linux_fork_mourn_inferior ();
26cb8b7c
PA
3652
3653 /* Let the arch-specific native code know this process is gone. */
3654 linux_nat_forget_process (pid);
d6b0e80f
AC
3655}
3656
5b009018
PA
3657/* Convert a native/host siginfo object, into/from the siginfo in the
3658 layout of the inferiors' architecture. */
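/* (Note derived from the code below: DIRECTION == 0 converts the
   host/ptrace SIGINFO into the inferior layout in INF_SIGINFO;
   DIRECTION == 1 converts INF_SIGINFO back into the host SIGINFO.
   A registered linux_nat_siginfo_fixup callback is expected to
   follow the same convention.)  */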
3659
3660static void
a5362b9a 3661siginfo_fixup (siginfo_t *siginfo, gdb_byte *inf_siginfo, int direction)
5b009018
PA
3662{
3663 int done = 0;
3664
3665 if (linux_nat_siginfo_fixup != NULL)
3666 done = linux_nat_siginfo_fixup (siginfo, inf_siginfo, direction);
3667
3668 /* If there was no callback, or the callback didn't do anything,
3669 then just do a straight memcpy. */
3670 if (!done)
3671 {
3672 if (direction == 1)
a5362b9a 3673 memcpy (siginfo, inf_siginfo, sizeof (siginfo_t));
5b009018 3674 else
a5362b9a 3675 memcpy (inf_siginfo, siginfo, sizeof (siginfo_t));
5b009018
PA
3676 }
3677}
3678
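/* Implement TARGET_OBJECT_SIGNAL_INFO transfers: read or write the
   siginfo of the current LWP with PTRACE_GETSIGINFO and
   PTRACE_SETSIGINFO, converting between the host and inferior
   layouts as needed.  */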
9b409511 3679static enum target_xfer_status
4aa995e1
PA
3680linux_xfer_siginfo (struct target_ops *ops, enum target_object object,
3681 const char *annex, gdb_byte *readbuf,
9b409511
YQ
3682 const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
3683 ULONGEST *xfered_len)
4aa995e1 3684{
4aa995e1 3685 int pid;
a5362b9a
TS
3686 siginfo_t siginfo;
3687 gdb_byte inf_siginfo[sizeof (siginfo_t)];
4aa995e1
PA
3688
3689 gdb_assert (object == TARGET_OBJECT_SIGNAL_INFO);
3690 gdb_assert (readbuf || writebuf);
3691
dfd4cc63 3692 pid = ptid_get_lwp (inferior_ptid);
4aa995e1 3693 if (pid == 0)
dfd4cc63 3694 pid = ptid_get_pid (inferior_ptid);
4aa995e1
PA
3695
3696 if (offset > sizeof (siginfo))
2ed4b548 3697 return TARGET_XFER_E_IO;
4aa995e1
PA
3698
3699 errno = 0;
3700 ptrace (PTRACE_GETSIGINFO, pid, (PTRACE_TYPE_ARG3) 0, &siginfo);
3701 if (errno != 0)
2ed4b548 3702 return TARGET_XFER_E_IO;
4aa995e1 3703
5b009018
PA
3704 /* When GDB is built as a 64-bit application, ptrace writes into
3705 SIGINFO an object with 64-bit layout. Since debugging a 32-bit
3706 inferior with a 64-bit GDB should look the same as debugging it
3707 with a 32-bit GDB, we need to convert it. GDB core always sees
3708 the converted layout, so any read/write will have to be done
3709 post-conversion. */
3710 siginfo_fixup (&siginfo, inf_siginfo, 0);
3711
4aa995e1
PA
3712 if (offset + len > sizeof (siginfo))
3713 len = sizeof (siginfo) - offset;
3714
3715 if (readbuf != NULL)
5b009018 3716 memcpy (readbuf, inf_siginfo + offset, len);
4aa995e1
PA
3717 else
3718 {
5b009018
PA
3719 memcpy (inf_siginfo + offset, writebuf, len);
3720
3721 /* Convert back to ptrace layout before flushing it out. */
3722 siginfo_fixup (&siginfo, inf_siginfo, 1);
3723
4aa995e1
PA
3724 errno = 0;
3725 ptrace (PTRACE_SETSIGINFO, pid, (PTRACE_TYPE_ARG3) 0, &siginfo);
3726 if (errno != 0)
2ed4b548 3727 return TARGET_XFER_E_IO;
4aa995e1
PA
3728 }
3729
9b409511
YQ
3730 *xfered_len = len;
3731 return TARGET_XFER_OK;
4aa995e1
PA
3732}
3733
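/* Implement the to_xfer_partial target method.  SIGNAL_INFO requests
   go to linux_xfer_siginfo; a memory request with no live inferior
   selected is answered with EOF so that a lower stratum (e.g. the
   executable file) can satisfy it; everything else is forwarded to
   the single-threaded target beneath us, with inferior_ptid
   temporarily rewritten to an (lwpid, 0, 0) form ptid.  */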
9b409511 3734static enum target_xfer_status
10d6c8cd
DJ
3735linux_nat_xfer_partial (struct target_ops *ops, enum target_object object,
3736 const char *annex, gdb_byte *readbuf,
3737 const gdb_byte *writebuf,
9b409511 3738 ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
d6b0e80f 3739{
4aa995e1 3740 struct cleanup *old_chain;
9b409511 3741 enum target_xfer_status xfer;
d6b0e80f 3742
4aa995e1
PA
3743 if (object == TARGET_OBJECT_SIGNAL_INFO)
3744 return linux_xfer_siginfo (ops, object, annex, readbuf, writebuf,
9b409511 3745 offset, len, xfered_len);
4aa995e1 3746
c35b1492
PA
3747 /* The target is connected but no live inferior is selected. Pass
3748 this request down to a lower stratum (e.g., the executable
3749 file). */
3750 if (object == TARGET_OBJECT_MEMORY && ptid_equal (inferior_ptid, null_ptid))
9b409511 3751 return TARGET_XFER_EOF;
c35b1492 3752
4aa995e1
PA
3753 old_chain = save_inferior_ptid ();
3754
dfd4cc63
LM
3755 if (ptid_lwp_p (inferior_ptid))
3756 inferior_ptid = pid_to_ptid (ptid_get_lwp (inferior_ptid));
d6b0e80f 3757
10d6c8cd 3758 xfer = linux_ops->to_xfer_partial (ops, object, annex, readbuf, writebuf,
9b409511 3759 offset, len, xfered_len);
d6b0e80f
AC
3760
3761 do_cleanups (old_chain);
3762 return xfer;
3763}
3764
28439f5e
PA
3765static int
3766linux_nat_thread_alive (struct target_ops *ops, ptid_t ptid)
3767{
4a6ed09b
PA
3768 /* As long as a PTID is in the LWP list, consider it alive. */
3769 return find_lwp_pid (ptid) != NULL;
28439f5e
PA
3770}
3771
8a06aea7
PA
3772/* Implement the to_update_thread_list target method for this
3773 target. */
3774
3775static void
3776linux_nat_update_thread_list (struct target_ops *ops)
3777{
a6904d5a
PA
3778 struct lwp_info *lwp;
3779
4a6ed09b
PA
3780 /* We add/delete threads from the list as clone/exit events are
3781 processed, so just try deleting exited threads still in the
3782 thread list. */
3783 delete_exited_threads ();
a6904d5a
PA
3784
3785 /* Update the processor core that each lwp/thread was last seen
3786 running on. */
3787 ALL_LWPS (lwp)
3788 lwp->core = linux_common_core_of_thread (lwp->ptid);
8a06aea7
PA
3789}
3790
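/* Return a string describing PTID.  If PTID names an LWP that is not
   the thread group leader, or its process has more than one LWP,
   this yields "LWP <n>"; otherwise fall back to normal_pid_to_str.  */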
d6b0e80f 3791static char *
117de6a9 3792linux_nat_pid_to_str (struct target_ops *ops, ptid_t ptid)
d6b0e80f
AC
3793{
3794 static char buf[64];
3795
dfd4cc63
LM
3796 if (ptid_lwp_p (ptid)
3797 && (ptid_get_pid (ptid) != ptid_get_lwp (ptid)
3798 || num_lwps (ptid_get_pid (ptid)) > 1))
d6b0e80f 3799 {
dfd4cc63 3800 snprintf (buf, sizeof (buf), "LWP %ld", ptid_get_lwp (ptid));
d6b0e80f
AC
3801 return buf;
3802 }
3803
3804 return normal_pid_to_str (ptid);
3805}
3806
73ede765 3807static const char *
503a628d 3808linux_nat_thread_name (struct target_ops *self, struct thread_info *thr)
4694da01 3809{
79efa585 3810 return linux_proc_tid_get_name (thr->ptid);
4694da01
TT
3811}
3812
dba24537
AC
3813/* Accepts an integer PID; returns a string representing a file that
3814 can be opened to get the symbols for the child process. */
3815
6d8fd2b7 3816static char *
8dd27370 3817linux_child_pid_to_exec_file (struct target_ops *self, int pid)
dba24537 3818{
e0d86d2c 3819 return linux_proc_pid_to_exec_file (pid);
dba24537
AC
3820}
3821
10d6c8cd
DJ
3822/* Implement the to_xfer_partial interface for memory reads using the /proc
3823 filesystem. Because we can use a single read() call for /proc, this
3824 can be much more efficient than banging away at PTRACE_PEEKTEXT,
3825 but it doesn't support writes. */
3826
9b409511 3827static enum target_xfer_status
10d6c8cd
DJ
3828linux_proc_xfer_partial (struct target_ops *ops, enum target_object object,
3829 const char *annex, gdb_byte *readbuf,
3830 const gdb_byte *writebuf,
9b409511 3831 ULONGEST offset, LONGEST len, ULONGEST *xfered_len)
dba24537 3832{
10d6c8cd
DJ
3833 LONGEST ret;
3834 int fd;
dba24537
AC
3835 char filename[64];
3836
10d6c8cd 3837 if (object != TARGET_OBJECT_MEMORY || !readbuf)
f486487f 3838 return TARGET_XFER_EOF;
dba24537
AC
3839
3840 /* Don't bother for one word. */
3841 if (len < 3 * sizeof (long))
9b409511 3842 return TARGET_XFER_EOF;
dba24537
AC
3843
3844 /* We could keep this file open and cache it - possibly one per
3845 thread. That requires some juggling, but is even faster. */
cde33bf1
YQ
3846 xsnprintf (filename, sizeof filename, "/proc/%d/mem",
3847 ptid_get_pid (inferior_ptid));
614c279d 3848 fd = gdb_open_cloexec (filename, O_RDONLY | O_LARGEFILE, 0);
dba24537 3849 if (fd == -1)
9b409511 3850 return TARGET_XFER_EOF;
dba24537
AC
3851
3852 /* If pread64 is available, use it. It's faster if the kernel
3853 supports it (only one syscall), and it's 64-bit safe even on
3854 32-bit platforms (for instance, SPARC debugging a SPARC64
3855 application). */
3856#ifdef HAVE_PREAD64
10d6c8cd 3857 if (pread64 (fd, readbuf, len, offset) != len)
dba24537 3858#else
10d6c8cd 3859 if (lseek (fd, offset, SEEK_SET) == -1 || read (fd, readbuf, len) != len)
dba24537
AC
3860#endif
3861 ret = 0;
3862 else
3863 ret = len;
3864
3865 close (fd);
9b409511
YQ
3866
3867 if (ret == 0)
3868 return TARGET_XFER_EOF;
3869 else
3870 {
3871 *xfered_len = ret;
3872 return TARGET_XFER_OK;
3873 }
dba24537
AC
3874}
3875
efcbbd14
UW
3876
3877/* Enumerate spufs IDs for process PID. */
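/* The IDs are returned in BUF as a packed array of 4-byte unsigned
   integers in the target byte order, one per spufs context file
   descriptor found under /proc/PID/fd; OFFSET and LEN select a
   window into that array, and the number of bytes stored is
   returned.  */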
3878static LONGEST
b55e14c7 3879spu_enumerate_spu_ids (int pid, gdb_byte *buf, ULONGEST offset, ULONGEST len)
efcbbd14 3880{
f5656ead 3881 enum bfd_endian byte_order = gdbarch_byte_order (target_gdbarch ());
efcbbd14
UW
3882 LONGEST pos = 0;
3883 LONGEST written = 0;
3884 char path[128];
3885 DIR *dir;
3886 struct dirent *entry;
3887
3888 xsnprintf (path, sizeof path, "/proc/%d/fd", pid);
3889 dir = opendir (path);
3890 if (!dir)
3891 return -1;
3892
3893 rewinddir (dir);
3894 while ((entry = readdir (dir)) != NULL)
3895 {
3896 struct stat st;
3897 struct statfs stfs;
3898 int fd;
3899
3900 fd = atoi (entry->d_name);
3901 if (!fd)
3902 continue;
3903
3904 xsnprintf (path, sizeof path, "/proc/%d/fd/%d", pid, fd);
3905 if (stat (path, &st) != 0)
3906 continue;
3907 if (!S_ISDIR (st.st_mode))
3908 continue;
3909
3910 if (statfs (path, &stfs) != 0)
3911 continue;
3912 if (stfs.f_type != SPUFS_MAGIC)
3913 continue;
3914
3915 if (pos >= offset && pos + 4 <= offset + len)
3916 {
3917 store_unsigned_integer (buf + pos - offset, 4, byte_order, fd);
3918 written += 4;
3919 }
3920 pos += 4;
3921 }
3922
3923 closedir (dir);
3924 return written;
3925}
3926
3927/* Implement the to_xfer_partial interface for the TARGET_OBJECT_SPU
3928 object type, using the /proc file system. */
9b409511
YQ
3929
3930static enum target_xfer_status
efcbbd14
UW
3931linux_proc_xfer_spu (struct target_ops *ops, enum target_object object,
3932 const char *annex, gdb_byte *readbuf,
3933 const gdb_byte *writebuf,
9b409511 3934 ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
efcbbd14
UW
3935{
3936 char buf[128];
3937 int fd = 0;
3938 int ret = -1;
dfd4cc63 3939 int pid = ptid_get_pid (inferior_ptid);
efcbbd14
UW
3940
3941 if (!annex)
3942 {
3943 if (!readbuf)
2ed4b548 3944 return TARGET_XFER_E_IO;
efcbbd14 3945 else
9b409511
YQ
3946 {
3947 LONGEST l = spu_enumerate_spu_ids (pid, readbuf, offset, len);
3948
3949 if (l < 0)
3950 return TARGET_XFER_E_IO;
3951 else if (l == 0)
3952 return TARGET_XFER_EOF;
3953 else
3954 {
3955 *xfered_len = (ULONGEST) l;
3956 return TARGET_XFER_OK;
3957 }
3958 }
efcbbd14
UW
3959 }
3960
3961 xsnprintf (buf, sizeof buf, "/proc/%d/fd/%s", pid, annex);
614c279d 3962 fd = gdb_open_cloexec (buf, writebuf? O_WRONLY : O_RDONLY, 0);
efcbbd14 3963 if (fd <= 0)
2ed4b548 3964 return TARGET_XFER_E_IO;
efcbbd14
UW
3965
3966 if (offset != 0
3967 && lseek (fd, (off_t) offset, SEEK_SET) != (off_t) offset)
3968 {
3969 close (fd);
9b409511 3970 return TARGET_XFER_EOF;
efcbbd14
UW
3971 }
3972
3973 if (writebuf)
3974 ret = write (fd, writebuf, (size_t) len);
3975 else if (readbuf)
3976 ret = read (fd, readbuf, (size_t) len);
3977
3978 close (fd);
9b409511
YQ
3979
3980 if (ret < 0)
3981 return TARGET_XFER_E_IO;
3982 else if (ret == 0)
3983 return TARGET_XFER_EOF;
3984 else
3985 {
3986 *xfered_len = (ULONGEST) ret;
3987 return TARGET_XFER_OK;
3988 }
efcbbd14
UW
3989}
3990
3991
dba24537
AC
3992/* Parse LINE as a signal set and add its set bits to SIGS. */
3993
3994static void
3995add_line_to_sigset (const char *line, sigset_t *sigs)
3996{
3997 int len = strlen (line) - 1;
3998 const char *p;
3999 int signum;
4000
4001 if (line[len] != '\n')
8a3fe4f8 4002 error (_("Could not parse signal set: %s"), line);
dba24537
AC
4003
4004 p = line;
4005 signum = len * 4;
4006 while (len-- > 0)
4007 {
4008 int digit;
4009
4010 if (*p >= '0' && *p <= '9')
4011 digit = *p - '0';
4012 else if (*p >= 'a' && *p <= 'f')
4013 digit = *p - 'a' + 10;
4014 else
8a3fe4f8 4015 error (_("Could not parse signal set: %s"), line);
dba24537
AC
4016
4017 signum -= 4;
4018
4019 if (digit & 1)
4020 sigaddset (sigs, signum + 1);
4021 if (digit & 2)
4022 sigaddset (sigs, signum + 2);
4023 if (digit & 4)
4024 sigaddset (sigs, signum + 3);
4025 if (digit & 8)
4026 sigaddset (sigs, signum + 4);
4027
4028 p++;
4029 }
4030}
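/* For example, a /proc/PID/status line of
     "SigPnd:\t0000000000000200\n"
   has only bit 9 of the mask set, so signal 10 (SIGUSR1 on most
   Linux architectures) would be added to SIGS.  */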
4031
4032/* Find process PID's pending signals from /proc/pid/status and set
4033 SIGS to match. */
4034
4035void
3e43a32a
MS
4036linux_proc_pending_signals (int pid, sigset_t *pending,
4037 sigset_t *blocked, sigset_t *ignored)
dba24537
AC
4038{
4039 FILE *procfile;
d8d2a3ee 4040 char buffer[PATH_MAX], fname[PATH_MAX];
7c8a8b04 4041 struct cleanup *cleanup;
dba24537
AC
4042
4043 sigemptyset (pending);
4044 sigemptyset (blocked);
4045 sigemptyset (ignored);
cde33bf1 4046 xsnprintf (fname, sizeof fname, "/proc/%d/status", pid);
614c279d 4047 procfile = gdb_fopen_cloexec (fname, "r");
dba24537 4048 if (procfile == NULL)
8a3fe4f8 4049 error (_("Could not open %s"), fname);
7c8a8b04 4050 cleanup = make_cleanup_fclose (procfile);
dba24537 4051
d8d2a3ee 4052 while (fgets (buffer, PATH_MAX, procfile) != NULL)
dba24537
AC
4053 {
4054 /* Normal queued signals are on the SigPnd line in the status
4055 file. However, 2.6 kernels also have a "shared" pending
4056 queue for delivering signals to a thread group, so check for
4057 a ShdPnd line also.
4058
4059 Unfortunately some Red Hat kernels include the shared pending
4060 queue but not the ShdPnd status field. */
4061
61012eef 4062 if (startswith (buffer, "SigPnd:\t"))
dba24537 4063 add_line_to_sigset (buffer + 8, pending);
61012eef 4064 else if (startswith (buffer, "ShdPnd:\t"))
dba24537 4065 add_line_to_sigset (buffer + 8, pending);
61012eef 4066 else if (startswith (buffer, "SigBlk:\t"))
dba24537 4067 add_line_to_sigset (buffer + 8, blocked);
61012eef 4068 else if (startswith (buffer, "SigIgn:\t"))
dba24537
AC
4069 add_line_to_sigset (buffer + 8, ignored);
4070 }
4071
7c8a8b04 4072 do_cleanups (cleanup);
dba24537
AC
4073}
4074
9b409511 4075static enum target_xfer_status
07e059b5 4076linux_nat_xfer_osdata (struct target_ops *ops, enum target_object object,
e0881a8e 4077 const char *annex, gdb_byte *readbuf,
9b409511
YQ
4078 const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
4079 ULONGEST *xfered_len)
07e059b5 4080{
07e059b5
VP
4081 gdb_assert (object == TARGET_OBJECT_OSDATA);
4082
9b409511
YQ
4083 *xfered_len = linux_common_xfer_osdata (annex, readbuf, offset, len);
4084 if (*xfered_len == 0)
4085 return TARGET_XFER_EOF;
4086 else
4087 return TARGET_XFER_OK;
07e059b5
VP
4088}
4089
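/* Top-level to_xfer_partial for this target: dispatch AUXV, OSDATA
   and SPU objects to their handlers, mask memory addresses down to
   the target's address width, try the /proc-based fast path for
   memory reads, and finally fall back to the inherited ptrace-based
   method.  */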
9b409511 4090static enum target_xfer_status
10d6c8cd
DJ
4091linux_xfer_partial (struct target_ops *ops, enum target_object object,
4092 const char *annex, gdb_byte *readbuf,
9b409511
YQ
4093 const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
4094 ULONGEST *xfered_len)
10d6c8cd 4095{
9b409511 4096 enum target_xfer_status xfer;
10d6c8cd
DJ
4097
4098 if (object == TARGET_OBJECT_AUXV)
9f2982ff 4099 return memory_xfer_auxv (ops, object, annex, readbuf, writebuf,
9b409511 4100 offset, len, xfered_len);
10d6c8cd 4101
07e059b5
VP
4102 if (object == TARGET_OBJECT_OSDATA)
4103 return linux_nat_xfer_osdata (ops, object, annex, readbuf, writebuf,
9b409511 4104 offset, len, xfered_len);
07e059b5 4105
efcbbd14
UW
4106 if (object == TARGET_OBJECT_SPU)
4107 return linux_proc_xfer_spu (ops, object, annex, readbuf, writebuf,
9b409511 4108 offset, len, xfered_len);
efcbbd14 4109
8f313923
JK
4110/* GDB calculates all the addresses in the possibly larger width of the address.
4111 Address width needs to be masked before its final use - either by
4112 linux_proc_xfer_partial or inf_ptrace_xfer_partial.
4113
4114 Compare ADDR_BIT first to avoid a compiler warning on shift overflow. */
4115
4116 if (object == TARGET_OBJECT_MEMORY)
4117 {
f5656ead 4118 int addr_bit = gdbarch_addr_bit (target_gdbarch ());
8f313923
JK
4119
4120 if (addr_bit < (sizeof (ULONGEST) * HOST_CHAR_BIT))
4121 offset &= ((ULONGEST) 1 << addr_bit) - 1;
4122 }
4123
10d6c8cd 4124 xfer = linux_proc_xfer_partial (ops, object, annex, readbuf, writebuf,
9b409511
YQ
4125 offset, len, xfered_len);
4126 if (xfer != TARGET_XFER_EOF)
10d6c8cd
DJ
4127 return xfer;
4128
4129 return super_xfer_partial (ops, object, annex, readbuf, writebuf,
9b409511 4130 offset, len, xfered_len);
10d6c8cd
DJ
4131}
4132
5808517f
YQ
4133static void
4134cleanup_target_stop (void *arg)
4135{
4136 ptid_t *ptid = (ptid_t *) arg;
4137
4138 gdb_assert (arg != NULL);
4139
4140 /* Unpause all */
a493e3e2 4141 target_resume (*ptid, 0, GDB_SIGNAL_0);
5808517f
YQ
4142}
4143
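/* Implement the to_static_tracepoint_markers_by_strid target method.
   Pause the inferior, query its in-process agent with the
   qTfSTM/qTsSTM commands, collect the marker definitions returned
   (filtering on STRID when it is non-NULL), and unpause the inferior
   before returning the vector.  */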
4144static VEC(static_tracepoint_marker_p) *
c686c57f
TT
4145linux_child_static_tracepoint_markers_by_strid (struct target_ops *self,
4146 const char *strid)
5808517f
YQ
4147{
4148 char s[IPA_CMD_BUF_SIZE];
4149 struct cleanup *old_chain;
4150 int pid = ptid_get_pid (inferior_ptid);
4151 VEC(static_tracepoint_marker_p) *markers = NULL;
4152 struct static_tracepoint_marker *marker = NULL;
4153 char *p = s;
4154 ptid_t ptid = ptid_build (pid, 0, 0);
4155
4156 /* Pause all */
4157 target_stop (ptid);
4158
4159 memcpy (s, "qTfSTM", sizeof ("qTfSTM"));
4160 s[sizeof ("qTfSTM")] = 0;
4161
42476b70 4162 agent_run_command (pid, s, strlen (s) + 1);
5808517f
YQ
4163
4164 old_chain = make_cleanup (free_current_marker, &marker);
4165 make_cleanup (cleanup_target_stop, &ptid);
4166
4167 while (*p++ == 'm')
4168 {
4169 if (marker == NULL)
4170 marker = XCNEW (struct static_tracepoint_marker);
4171
4172 do
4173 {
4174 parse_static_tracepoint_marker_definition (p, &p, marker);
4175
4176 if (strid == NULL || strcmp (strid, marker->str_id) == 0)
4177 {
4178 VEC_safe_push (static_tracepoint_marker_p,
4179 markers, marker);
4180 marker = NULL;
4181 }
4182 else
4183 {
4184 release_static_tracepoint_marker (marker);
4185 memset (marker, 0, sizeof (*marker));
4186 }
4187 }
4188 while (*p++ == ','); /* comma-separated list */
4189
4190 memcpy (s, "qTsSTM", sizeof ("qTsSTM"));
4191 s[sizeof ("qTsSTM")] = 0;
42476b70 4192 agent_run_command (pid, s, strlen (s) + 1);
5808517f
YQ
4193 p = s;
4194 }
4195
4196 do_cleanups (old_chain);
4197
4198 return markers;
4199}
4200
e9efe249 4201/* Create a prototype generic GNU/Linux target. The client can override
10d6c8cd
DJ
4202 it with local methods. */
4203
910122bf
UW
4204static void
4205linux_target_install_ops (struct target_ops *t)
10d6c8cd 4206{
6d8fd2b7 4207 t->to_insert_fork_catchpoint = linux_child_insert_fork_catchpoint;
eb73ad13 4208 t->to_remove_fork_catchpoint = linux_child_remove_fork_catchpoint;
6d8fd2b7 4209 t->to_insert_vfork_catchpoint = linux_child_insert_vfork_catchpoint;
eb73ad13 4210 t->to_remove_vfork_catchpoint = linux_child_remove_vfork_catchpoint;
6d8fd2b7 4211 t->to_insert_exec_catchpoint = linux_child_insert_exec_catchpoint;
eb73ad13 4212 t->to_remove_exec_catchpoint = linux_child_remove_exec_catchpoint;
a96d9b2e 4213 t->to_set_syscall_catchpoint = linux_child_set_syscall_catchpoint;
6d8fd2b7 4214 t->to_pid_to_exec_file = linux_child_pid_to_exec_file;
10d6c8cd 4215 t->to_post_startup_inferior = linux_child_post_startup_inferior;
6d8fd2b7
UW
4216 t->to_post_attach = linux_child_post_attach;
4217 t->to_follow_fork = linux_child_follow_fork;
10d6c8cd
DJ
4218
4219 super_xfer_partial = t->to_xfer_partial;
4220 t->to_xfer_partial = linux_xfer_partial;
5808517f
YQ
4221
4222 t->to_static_tracepoint_markers_by_strid
4223 = linux_child_static_tracepoint_markers_by_strid;
910122bf
UW
4224}
4225
4226struct target_ops *
4227linux_target (void)
4228{
4229 struct target_ops *t;
4230
4231 t = inf_ptrace_target ();
4232 linux_target_install_ops (t);
4233
4234 return t;
4235}
4236
4237struct target_ops *
7714d83a 4238linux_trad_target (CORE_ADDR (*register_u_offset)(struct gdbarch *, int, int))
910122bf
UW
4239{
4240 struct target_ops *t;
4241
4242 t = inf_ptrace_trad_target (register_u_offset);
4243 linux_target_install_ops (t);
10d6c8cd 4244
10d6c8cd
DJ
4245 return t;
4246}
4247
b84876c2
PA
4248/* target_is_async_p implementation. */
4249
4250static int
6a109b6b 4251linux_nat_is_async_p (struct target_ops *ops)
b84876c2 4252{
198297aa 4253 return linux_is_async_p ();
b84876c2
PA
4254}
4255
4256/* target_can_async_p implementation. */
4257
4258static int
6a109b6b 4259linux_nat_can_async_p (struct target_ops *ops)
b84876c2
PA
4260{
4261 /* NOTE: palves 2008-03-21: We're only async when the user requests
7feb7d06 4262 it explicitly with the "set target-async" command.
b84876c2 4263 Someday, linux will always be async. */
3dd5b83d 4264 return target_async_permitted;
b84876c2
PA
4265}
4266
9908b566 4267static int
2a9a2795 4268linux_nat_supports_non_stop (struct target_ops *self)
9908b566
VP
4269{
4270 return 1;
4271}
4272
fbea99ea
PA
4273/* to_always_non_stop_p implementation. */
4274
4275static int
4276linux_nat_always_non_stop_p (struct target_ops *self)
4277{
f12899e9 4278 return 1;
fbea99ea
PA
4279}
4280
d90e17a7
PA
4281/* True if we want to support multi-process. To be removed when GDB
4282 supports multi-exec. */
4283
2277426b 4284int linux_multi_process = 1;
d90e17a7
PA
4285
4286static int
86ce2668 4287linux_nat_supports_multi_process (struct target_ops *self)
d90e17a7
PA
4288{
4289 return linux_multi_process;
4290}
4291
03583c20 4292static int
2bfc0540 4293linux_nat_supports_disable_randomization (struct target_ops *self)
03583c20
UW
4294{
4295#ifdef HAVE_PERSONALITY
4296 return 1;
4297#else
4298 return 0;
4299#endif
4300}
4301
b84876c2
PA
4302static int async_terminal_is_ours = 1;
4303
4d4ca2a1
DE
4304/* target_terminal_inferior implementation.
4305
4306 This is a wrapper around child_terminal_inferior to add async support. */
b84876c2
PA
4307
4308static void
d2f640d4 4309linux_nat_terminal_inferior (struct target_ops *self)
b84876c2 4310{
d6b64346 4311 child_terminal_inferior (self);
b84876c2 4312
d9d2d8b6 4313 /* Calls to target_terminal_*() are meant to be idempotent. */
b84876c2
PA
4314 if (!async_terminal_is_ours)
4315 return;
4316
4317 delete_file_handler (input_fd);
4318 async_terminal_is_ours = 0;
4319 set_sigint_trap ();
4320}
4321
4d4ca2a1
DE
4322/* target_terminal_ours implementation.
4323
4324 This is a wrapper around child_terminal_ours to add async support (and
4325 implement the target_terminal_ours vs target_terminal_ours_for_output
4326 distinction). child_terminal_ours is currently no different than
4327 child_terminal_ours_for_output.
4328 We leave target_terminal_ours_for_output alone, leaving it to
4329 child_terminal_ours_for_output. */
b84876c2 4330
2c0b251b 4331static void
e3594fd1 4332linux_nat_terminal_ours (struct target_ops *self)
b84876c2 4333{
b84876c2
PA
4334 /* GDB should never give the terminal to the inferior if the
4335 inferior is running in the background (run&, continue&, etc.),
4336 but claiming it sure should. */
d6b64346 4337 child_terminal_ours (self);
b84876c2 4338
b84876c2
PA
4339 if (async_terminal_is_ours)
4340 return;
4341
4342 clear_sigint_trap ();
4343 add_file_handler (input_fd, stdin_event_handler, 0);
4344 async_terminal_is_ours = 1;
4345}
4346
7feb7d06
PA
4347/* SIGCHLD handler that serves two purposes: In non-stop/async mode,
4348 so we notice when any child changes state, and notify the
4349 event-loop; it allows us to use sigsuspend in linux_nat_wait_1
4350 above to wait for the arrival of a SIGCHLD. */
4351
b84876c2 4352static void
7feb7d06 4353sigchld_handler (int signo)
b84876c2 4354{
7feb7d06
PA
4355 int old_errno = errno;
4356
01124a23
DE
4357 if (debug_linux_nat)
4358 ui_file_write_async_safe (gdb_stdlog,
4359 "sigchld\n", sizeof ("sigchld\n") - 1);
7feb7d06
PA
4360
4361 if (signo == SIGCHLD
4362 && linux_nat_event_pipe[0] != -1)
4363 async_file_mark (); /* Let the event loop know that there are
4364 events to handle. */
4365
4366 errno = old_errno;
4367}
4368
4369/* Callback registered with the target events file descriptor. */
4370
4371static void
4372handle_target_event (int error, gdb_client_data client_data)
4373{
6a3753b3 4374 inferior_event_handler (INF_REG_EVENT, NULL);
7feb7d06
PA
4375}
4376
4377/* Create/destroy the target events pipe. Returns previous state. */
4378
4379static int
4380linux_async_pipe (int enable)
4381{
198297aa 4382 int previous = linux_is_async_p ();
7feb7d06
PA
4383
4384 if (previous != enable)
4385 {
4386 sigset_t prev_mask;
4387
12696c10
PA
4388 /* Block child signals while we create/destroy the pipe, as
4389 their handler writes to it. */
7feb7d06
PA
4390 block_child_signals (&prev_mask);
4391
4392 if (enable)
4393 {
614c279d 4394 if (gdb_pipe_cloexec (linux_nat_event_pipe) == -1)
7feb7d06
PA
4395 internal_error (__FILE__, __LINE__,
4396 "creating event pipe failed.");
4397
4398 fcntl (linux_nat_event_pipe[0], F_SETFL, O_NONBLOCK);
4399 fcntl (linux_nat_event_pipe[1], F_SETFL, O_NONBLOCK);
4400 }
4401 else
4402 {
4403 close (linux_nat_event_pipe[0]);
4404 close (linux_nat_event_pipe[1]);
4405 linux_nat_event_pipe[0] = -1;
4406 linux_nat_event_pipe[1] = -1;
4407 }
4408
4409 restore_child_signals_mask (&prev_mask);
4410 }
4411
4412 return previous;
b84876c2
PA
4413}
4414
4415/* target_async implementation. */
4416
4417static void
6a3753b3 4418linux_nat_async (struct target_ops *ops, int enable)
b84876c2 4419{
6a3753b3 4420 if (enable)
b84876c2 4421 {
7feb7d06
PA
4422 if (!linux_async_pipe (1))
4423 {
4424 add_file_handler (linux_nat_event_pipe[0],
4425 handle_target_event, NULL);
4426 /* There may be pending events to handle. Tell the event loop
4427 to poll them. */
4428 async_file_mark ();
4429 }
b84876c2
PA
4430 }
4431 else
4432 {
b84876c2 4433 delete_file_handler (linux_nat_event_pipe[0]);
7feb7d06 4434 linux_async_pipe (0);
b84876c2
PA
4435 }
4436 return;
4437}
4438
a493e3e2 4439/* Stop an LWP, and push a GDB_SIGNAL_0 stop status if no other
252fbfc8
PA
4440 event came out. */
4441
4c28f408 4442static int
252fbfc8 4443linux_nat_stop_lwp (struct lwp_info *lwp, void *data)
4c28f408 4444{
d90e17a7 4445 if (!lwp->stopped)
252fbfc8 4446 {
d90e17a7
PA
4447 if (debug_linux_nat)
4448 fprintf_unfiltered (gdb_stdlog,
4449 "LNSL: running -> suspending %s\n",
4450 target_pid_to_str (lwp->ptid));
252fbfc8 4451
252fbfc8 4452
25289eb2
PA
4453 if (lwp->last_resume_kind == resume_stop)
4454 {
4455 if (debug_linux_nat)
4456 fprintf_unfiltered (gdb_stdlog,
4457 "linux-nat: already stopping LWP %ld at "
4458 "GDB's request\n",
4459 ptid_get_lwp (lwp->ptid));
4460 return 0;
4461 }
252fbfc8 4462
25289eb2
PA
4463 stop_callback (lwp, NULL);
4464 lwp->last_resume_kind = resume_stop;
d90e17a7
PA
4465 }
4466 else
4467 {
4468 /* Already known to be stopped; do nothing. */
252fbfc8 4469
d90e17a7
PA
4470 if (debug_linux_nat)
4471 {
e09875d4 4472 if (find_thread_ptid (lwp->ptid)->stop_requested)
3e43a32a
MS
4473 fprintf_unfiltered (gdb_stdlog,
4474 "LNSL: already stopped/stop_requested %s\n",
d90e17a7
PA
4475 target_pid_to_str (lwp->ptid));
4476 else
3e43a32a
MS
4477 fprintf_unfiltered (gdb_stdlog,
4478 "LNSL: already stopped/no "
4479 "stop_requested yet %s\n",
d90e17a7 4480 target_pid_to_str (lwp->ptid));
252fbfc8
PA
4481 }
4482 }
4c28f408
PA
4483 return 0;
4484}
4485
4486static void
1eab8a48 4487linux_nat_stop (struct target_ops *self, ptid_t ptid)
4c28f408 4488{
bfedc46a
PA
4489 iterate_over_lwps (ptid, linux_nat_stop_lwp, NULL);
4490}
4491
d90e17a7 4492static void
de90e03d 4493linux_nat_close (struct target_ops *self)
d90e17a7
PA
4494{
4495 /* Unregister from the event loop. */
9debeba0 4496 if (linux_nat_is_async_p (self))
6a3753b3 4497 linux_nat_async (self, 0);
d90e17a7 4498
d90e17a7 4499 if (linux_ops->to_close)
de90e03d 4500 linux_ops->to_close (linux_ops);
6a3cb8e8
PA
4501
4502 super_close (self);
d90e17a7
PA
4503}
4504
c0694254
PA
4505/* When requests are passed down from the linux-nat layer to the
4506 single threaded inf-ptrace layer, ptids of (lwpid,0,0) form are
4507 used. The address space pointer is stored in the inferior object,
4508 but the common code that is passed such ptid can't tell whether
4509 lwpid is a "main" process id or not (it assumes so). We reverse
4510 look up the "main" process id from the lwp here. */
4511
70221824 4512static struct address_space *
c0694254
PA
4513linux_nat_thread_address_space (struct target_ops *t, ptid_t ptid)
4514{
4515 struct lwp_info *lwp;
4516 struct inferior *inf;
4517 int pid;
4518
dfd4cc63 4519 if (ptid_get_lwp (ptid) == 0)
c0694254
PA
4520 {
4521 /* An (lwpid,0,0) ptid. Look up the lwp object to get at the
4522 tgid. */
4523 lwp = find_lwp_pid (ptid);
dfd4cc63 4524 pid = ptid_get_pid (lwp->ptid);
c0694254
PA
4525 }
4526 else
4527 {
4528 /* A (pid,lwpid,0) ptid. */
dfd4cc63 4529 pid = ptid_get_pid (ptid);
c0694254
PA
4530 }
4531
4532 inf = find_inferior_pid (pid);
4533 gdb_assert (inf != NULL);
4534 return inf->aspace;
4535}
4536
dc146f7c
VP
4537/* Return the cached value of the processor core for thread PTID. */
4538
70221824 4539static int
dc146f7c
VP
4540linux_nat_core_of_thread (struct target_ops *ops, ptid_t ptid)
4541{
4542 struct lwp_info *info = find_lwp_pid (ptid);
e0881a8e 4543
dc146f7c
VP
4544 if (info)
4545 return info->core;
4546 return -1;
4547}
4548
7a6a1731
GB
4549/* Implementation of to_filesystem_is_local. */
4550
4551static int
4552linux_nat_filesystem_is_local (struct target_ops *ops)
4553{
4554 struct inferior *inf = current_inferior ();
4555
4556 if (inf->fake_pid_p || inf->pid == 0)
4557 return 1;
4558
4559 return linux_ns_same (inf->pid, LINUX_NS_MNT);
4560}
4561
4562/* Convert the INF argument passed to a to_fileio_* method
4563 to a process ID suitable for passing to its corresponding
4564 linux_mntns_* function. If INF is non-NULL then the
4565 caller is requesting the filesystem seen by INF. If INF
4566 is NULL then the caller is requesting the filesystem seen
4567 by GDB itself. We fall back to GDB's filesystem in the case
4568 that INF is non-NULL but its PID is unknown. */
4569
4570static pid_t
4571linux_nat_fileio_pid_of (struct inferior *inf)
4572{
4573 if (inf == NULL || inf->fake_pid_p || inf->pid == 0)
4574 return getpid ();
4575 else
4576 return inf->pid;
4577}
4578
4579/* Implementation of to_fileio_open. */
4580
4581static int
4582linux_nat_fileio_open (struct target_ops *self,
4583 struct inferior *inf, const char *filename,
4313b8c0
GB
4584 int flags, int mode, int warn_if_slow,
4585 int *target_errno)
7a6a1731
GB
4586{
4587 int nat_flags;
4588 mode_t nat_mode;
4589 int fd;
4590
4591 if (fileio_to_host_openflags (flags, &nat_flags) == -1
4592 || fileio_to_host_mode (mode, &nat_mode) == -1)
4593 {
4594 *target_errno = FILEIO_EINVAL;
4595 return -1;
4596 }
4597
4598 fd = linux_mntns_open_cloexec (linux_nat_fileio_pid_of (inf),
4599 filename, nat_flags, nat_mode);
4600 if (fd == -1)
4601 *target_errno = host_to_fileio_error (errno);
4602
4603 return fd;
4604}
4605
4606/* Implementation of to_fileio_readlink. */
4607
4608static char *
4609linux_nat_fileio_readlink (struct target_ops *self,
4610 struct inferior *inf, const char *filename,
4611 int *target_errno)
4612{
4613 char buf[PATH_MAX];
4614 int len;
4615 char *ret;
4616
4617 len = linux_mntns_readlink (linux_nat_fileio_pid_of (inf),
4618 filename, buf, sizeof (buf));
4619 if (len < 0)
4620 {
4621 *target_errno = host_to_fileio_error (errno);
4622 return NULL;
4623 }
4624
224c3ddb 4625 ret = (char *) xmalloc (len + 1);
7a6a1731
GB
4626 memcpy (ret, buf, len);
4627 ret[len] = '\0';
4628 return ret;
4629}
4630
4631/* Implementation of to_fileio_unlink. */
4632
4633static int
4634linux_nat_fileio_unlink (struct target_ops *self,
4635 struct inferior *inf, const char *filename,
4636 int *target_errno)
4637{
4638 int ret;
4639
4640 ret = linux_mntns_unlink (linux_nat_fileio_pid_of (inf),
4641 filename);
4642 if (ret == -1)
4643 *target_errno = host_to_fileio_error (errno);
4644
4645 return ret;
4646}
4647
aa01bd36
PA
4648/* Implementation of the to_thread_events method. */
4649
4650static void
4651linux_nat_thread_events (struct target_ops *ops, int enable)
4652{
4653 report_thread_events = enable;
4654}
4655
f973ed9c
DJ
4656void
4657linux_nat_add_target (struct target_ops *t)
4658{
f973ed9c
DJ
4659 /* Save the provided single-threaded target. We save this in a separate
4660 variable because another target we've inherited from (e.g. inf-ptrace)
4661 may have saved a pointer to T; we want to use it for the final
4662 process stratum target. */
4663 linux_ops_saved = *t;
4664 linux_ops = &linux_ops_saved;
4665
4666 /* Override some methods for multithreading. */
b84876c2 4667 t->to_create_inferior = linux_nat_create_inferior;
f973ed9c
DJ
4668 t->to_attach = linux_nat_attach;
4669 t->to_detach = linux_nat_detach;
4670 t->to_resume = linux_nat_resume;
4671 t->to_wait = linux_nat_wait;
2455069d 4672 t->to_pass_signals = linux_nat_pass_signals;
f973ed9c
DJ
4673 t->to_xfer_partial = linux_nat_xfer_partial;
4674 t->to_kill = linux_nat_kill;
4675 t->to_mourn_inferior = linux_nat_mourn_inferior;
4676 t->to_thread_alive = linux_nat_thread_alive;
8a06aea7 4677 t->to_update_thread_list = linux_nat_update_thread_list;
f973ed9c 4678 t->to_pid_to_str = linux_nat_pid_to_str;
4694da01 4679 t->to_thread_name = linux_nat_thread_name;
f973ed9c 4680 t->to_has_thread_control = tc_schedlock;
c0694254 4681 t->to_thread_address_space = linux_nat_thread_address_space;
ebec9a0f
PA
4682 t->to_stopped_by_watchpoint = linux_nat_stopped_by_watchpoint;
4683 t->to_stopped_data_address = linux_nat_stopped_data_address;
faf09f01
PA
4684 t->to_stopped_by_sw_breakpoint = linux_nat_stopped_by_sw_breakpoint;
4685 t->to_supports_stopped_by_sw_breakpoint = linux_nat_supports_stopped_by_sw_breakpoint;
4686 t->to_stopped_by_hw_breakpoint = linux_nat_stopped_by_hw_breakpoint;
4687 t->to_supports_stopped_by_hw_breakpoint = linux_nat_supports_stopped_by_hw_breakpoint;
aa01bd36 4688 t->to_thread_events = linux_nat_thread_events;
f973ed9c 4689
b84876c2
PA
4690 t->to_can_async_p = linux_nat_can_async_p;
4691 t->to_is_async_p = linux_nat_is_async_p;
9908b566 4692 t->to_supports_non_stop = linux_nat_supports_non_stop;
fbea99ea 4693 t->to_always_non_stop_p = linux_nat_always_non_stop_p;
b84876c2 4694 t->to_async = linux_nat_async;
b84876c2
PA
4695 t->to_terminal_inferior = linux_nat_terminal_inferior;
4696 t->to_terminal_ours = linux_nat_terminal_ours;
6a3cb8e8
PA
4697
4698 super_close = t->to_close;
d90e17a7 4699 t->to_close = linux_nat_close;
b84876c2 4700
4c28f408
PA
4701 t->to_stop = linux_nat_stop;
4702
d90e17a7
PA
4703 t->to_supports_multi_process = linux_nat_supports_multi_process;
4704
03583c20
UW
4705 t->to_supports_disable_randomization
4706 = linux_nat_supports_disable_randomization;
4707
dc146f7c
VP
4708 t->to_core_of_thread = linux_nat_core_of_thread;
4709
7a6a1731
GB
4710 t->to_filesystem_is_local = linux_nat_filesystem_is_local;
4711 t->to_fileio_open = linux_nat_fileio_open;
4712 t->to_fileio_readlink = linux_nat_fileio_readlink;
4713 t->to_fileio_unlink = linux_nat_fileio_unlink;
4714
f973ed9c
DJ
4715 /* We don't change the stratum; this target will sit at
4716 process_stratum and thread_db will sit at thread_stratum. This
4717 is a little strange, since this is a multi-threaded-capable
4718 target, but we want to be on the stack below thread_db, and we
4719 also want to be used for single-threaded processes. */
4720
4721 add_target (t);
f973ed9c
DJ
4722}
4723
9f0bdab8
DJ
4724/* Register a method to call whenever a new thread is attached. */
4725void
7b50312a
PA
4726linux_nat_set_new_thread (struct target_ops *t,
4727 void (*new_thread) (struct lwp_info *))
9f0bdab8
DJ
4728{
4729 /* Save the pointer. We only support a single registered instance
4730 of the GNU/Linux native target, so we do not need to map this to
4731 T. */
4732 linux_nat_new_thread = new_thread;
4733}
4734
26cb8b7c
PA
4735/* See declaration in linux-nat.h. */
4736
4737void
4738linux_nat_set_new_fork (struct target_ops *t,
4739 linux_nat_new_fork_ftype *new_fork)
4740{
4741 /* Save the pointer. */
4742 linux_nat_new_fork = new_fork;
4743}
4744
4745/* See declaration in linux-nat.h. */
4746
4747void
4748linux_nat_set_forget_process (struct target_ops *t,
4749 linux_nat_forget_process_ftype *fn)
4750{
4751 /* Save the pointer. */
4752 linux_nat_forget_process_hook = fn;
4753}
4754
4755/* See declaration in linux-nat.h. */
4756
4757void
4758linux_nat_forget_process (pid_t pid)
4759{
4760 if (linux_nat_forget_process_hook != NULL)
4761 linux_nat_forget_process_hook (pid);
4762}
4763
5b009018
PA
4764/* Register a method that converts a siginfo object between the layout
4765 that ptrace returns, and the layout in the architecture of the
4766 inferior. */
4767void
4768linux_nat_set_siginfo_fixup (struct target_ops *t,
a5362b9a 4769 int (*siginfo_fixup) (siginfo_t *,
5b009018
PA
4770 gdb_byte *,
4771 int))
4772{
4773 /* Save the pointer. */
4774 linux_nat_siginfo_fixup = siginfo_fixup;
4775}
4776
7b50312a
PA
4777/* Register a method to call prior to resuming a thread. */
4778
4779void
4780linux_nat_set_prepare_to_resume (struct target_ops *t,
4781 void (*prepare_to_resume) (struct lwp_info *))
4782{
4783 /* Save the pointer. */
4784 linux_nat_prepare_to_resume = prepare_to_resume;
4785}
4786
f865ee35
JK
4787/* See linux-nat.h. */
4788
4789int
4790linux_nat_get_siginfo (ptid_t ptid, siginfo_t *siginfo)
9f0bdab8 4791{
da559b09 4792 int pid;
9f0bdab8 4793
dfd4cc63 4794 pid = ptid_get_lwp (ptid);
da559b09 4795 if (pid == 0)
dfd4cc63 4796 pid = ptid_get_pid (ptid);
f865ee35 4797
da559b09
JK
4798 errno = 0;
4799 ptrace (PTRACE_GETSIGINFO, pid, (PTRACE_TYPE_ARG3) 0, siginfo);
4800 if (errno != 0)
4801 {
4802 memset (siginfo, 0, sizeof (*siginfo));
4803 return 0;
4804 }
f865ee35 4805 return 1;
9f0bdab8
DJ
4806}
4807
7b669087
GB
4808/* See nat/linux-nat.h. */
4809
4810ptid_t
4811current_lwp_ptid (void)
4812{
4813 gdb_assert (ptid_lwp_p (inferior_ptid));
4814 return inferior_ptid;
4815}
4816
2c0b251b
PA
4817/* Provide a prototype to silence -Wmissing-prototypes. */
4818extern initialize_file_ftype _initialize_linux_nat;
4819
d6b0e80f
AC
4820void
4821_initialize_linux_nat (void)
4822{
ccce17b0
YQ
4823 add_setshow_zuinteger_cmd ("lin-lwp", class_maintenance,
4824 &debug_linux_nat, _("\
b84876c2
PA
4825Set debugging of GNU/Linux lwp module."), _("\
4826Show debugging of GNU/Linux lwp module."), _("\
4827Enables printf debugging output."),
ccce17b0
YQ
4828 NULL,
4829 show_debug_linux_nat,
4830 &setdebuglist, &showdebuglist);
b84876c2 4831
7a6a1731
GB
4832 add_setshow_boolean_cmd ("linux-namespaces", class_maintenance,
4833 &debug_linux_namespaces, _("\
4834Set debugging of GNU/Linux namespaces module."), _("\
4835Show debugging of GNU/Linux namespaces module."), _("\
4836Enables printf debugging output."),
4837 NULL,
4838 NULL,
4839 &setdebuglist, &showdebuglist);
4840
b84876c2 4841 /* Save this mask as the default. */
d6b0e80f
AC
4842 sigprocmask (SIG_SETMASK, NULL, &normal_mask);
4843
7feb7d06
PA
4844 /* Install a SIGCHLD handler. */
4845 sigchld_action.sa_handler = sigchld_handler;
4846 sigemptyset (&sigchld_action.sa_mask);
4847 sigchld_action.sa_flags = SA_RESTART;
b84876c2
PA
4848
4849 /* Make it the default. */
7feb7d06 4850 sigaction (SIGCHLD, &sigchld_action, NULL);
d6b0e80f
AC
4851
4852 /* Make sure we don't block SIGCHLD during a sigsuspend. */
4853 sigprocmask (SIG_SETMASK, NULL, &suspend_mask);
4854 sigdelset (&suspend_mask, SIGCHLD);
4855
7feb7d06 4856 sigemptyset (&blocked_mask);
d6b0e80f
AC
4857}
4858\f
4859
4860/* FIXME: kettenis/2000-08-26: The stuff on this page is specific to
4861 the GNU/Linux Threads library and therefore doesn't really belong
4862 here. */
4863
d6b0e80f
AC
4864/* Return the set of signals used by the threads library in *SET. */
4865
4866void
4867lin_thread_get_thread_signals (sigset_t *set)
4868{
d6b0e80f
AC
4869 sigemptyset (set);
4870
4a6ed09b
PA
4871 /* NPTL reserves the first two RT signals, but does not provide any
4872 way for the debugger to query the signal numbers - fortunately
4873 they don't change. */
4874 sigaddset (set, __SIGRTMIN);
4875 sigaddset (set, __SIGRTMIN + 1);
d6b0e80f 4876}