/*	enet.c	Stanford	25 April 1983 */

/*
 *  Ethernet packet filter layer,
 *  	formerly: Ethernet interface driver
 *
 **********************************************************************
 * HISTORY
 * 7 October 1985	Jeff Mogul	Stanford
 *	Removed ENMAXOPENS limitation; available minors are now
 *	dynamically allocated to interfaces, out of a pool of NENETFILTER
 *	minors.
 *	Certain arrays formerly in the enState structure are now global.
 *	Depends on the modified openi() function so that enetopen() need
 *	only be called once.
 *	Removed support for "kernel access"; it won't ever be used again.
 *	Added EIOCMFREE ioctl.
 *
 * 17 October 1984	Jeff Mogul	Stanford
 *    More performance improvements:
 *	Added ENF_CAND, ENF_COR, ENF_CNAND, and ENF_CNOR, short-circuit
 *	operators, to make filters run faster.
 *	All evaluate "(*sp++ == *sp++)":
 *	ENF_CAND: returns false immediately if the result is false,
 *		otherwise continues
 *	ENF_COR: returns true immediately if the result is true,
 *		otherwise continues
 *	ENF_CNAND: returns true immediately if the result is false,
 *		otherwise continues
 *	ENF_CNOR: returns false immediately if the result is true,
 *		otherwise continues
 *	Also added ENF_NEQ to complement ENF_EQ.
 *    - Maintain a count of received packets per filter; dynamically
 *	re-organize the filter queue to keep highly active filters at the
 *	front of the queue (while maintaining priority order), if they are
 *	"high priority" filters.
 *
 * 2 October 1984	Jeff Mogul	Stanford
 *	Made a few changes to enDoFilter() to speed it up, since profiling
 *	shows it to be rather popular:
 *	- precompute the maximum word in the packet and the address of the
 *	end of the filters (thereby moving this code out of the "inner loop").
 *	- minor re-arrangement to avoid re-evaluating a
 *	common subexpression.
 *	- changed #ifdef DEBUG in a few routines to #ifdef INNERDEBUG,
 *	so that code in inner loops isn't always testing the enDebug
 *	flag; this not only costs directly, but also breaks up some
 *	basic blocks that the optimizer could play with.
 *	- added the enOneCopy flag; if true, never deliver more than
 *	one copy of a packet.  This is equivalent to giving everyone
 *	a "high priority" device, and cuts down the number of superfluous
 *	calls to enDoFilter().  [Temporary hack, will remove this!]
 *
 * 24 August 1984	Jeff Mogul	Stanford
 *	Yet another bug with sleeping in enetwrite(); straightened out the
 *	handling of counts in enKludgeSleep so that they indicate the number
 *	of sleeps in progress.  Maybe I've got this right, now?
 *	Also, don't sleep forever (since the net might be down).
 *
 * 17 July 1984	Jeff Mogul	Stanford
 *	Bug fix: in enetwrite(), several problems with sleeping on
 *	IF_QFULL:
 *	- don't do it for kernel mode writes.
 *	- count # of procs sleeping, to avoid lost wakeups.  The old
 *		scheme would only wake up the first sleeper.
 *	- using the sleeper count, avoid using more than one timeout
 *		table entry per device; the old scheme caused timeout
 *		table panics.
 *	- trap interrupted sleeps using setjmp, so that we can deallocate
 *		the packet header and mbufs; otherwise we lost them and
 *		panicked.
 *
 * 5 July 1984	Jeff Mogul	Stanford
 *	Bug fix: in enetwrite() make sure enP_RefCount is zero before
 *	deallocating "packet".  Otherwise, "packets" get lost, and
 *	take mbufs (and ultimately, the system) with them.
 *
 * 8 December 1983	Jeffrey Mogul	Stanford
 *	Fixed a bug in enetwrite() that eventually caused the allocator
 *	to run out of packets and panic.  If enetwrite() returns
 *	an error it should first deallocate any packets it has allocated.
 *
 * 10 November 1983	Jeffrey Mogul	Stanford
 *	Slight restructuring for support of 10mb ethers:
 *	- added the EIOCDEVP ioctl
 *	- removed the EIOCMTU ioctl (subsumed by EIOCDEVP)
 *	This requires an additional parameter to the enetattach
 *	call so that the device driver can specify things.
 *
 *	Also, cleaned up the enDebug scheme by adding symbolic
 *	definitions for the bits.
 *
 * 25-Apr-83	Jeffrey Mogul	Stanford
 *	Began conversion to 4.2BSD.  This involves removing all
 *		references to the actual hardware.
 *	Changed the read/write interface to use the uio scheme.
 *	Changed the ioctl interface to the "new style"; this places a hard
 *		limit on the size of a filter (about 128 bytes).
 *	"Packets" now point to mbufs, not private buffers.
 *	A filter can only access data in the first mbuf (about 50 words
 *		worst case); this is long enough for all Pup purposes.
 *	Added the EIOCMTU ioctl to get the MTU (max packet size).
 *	Added an enetselect() routine and other select() support.
 *	Other stuff is (more or less) left intact.
 *	Most previous history comments removed.
 *	Changed some names from enXXXX to enetXXXX to avoid confusion(?).
 *
 * 10-Aug-82  Mike Accetta (mja) at Carnegie-Mellon University
 *	Added new EIOCMBIS and EIOCMBIC ioctl calls to set and clear
 *	bits in the mode word; added mode bit ENHOLDSIG which suppresses
 *	the resetting of an enabled signal after it is sent (to be
 *	used in conjunction with the SIGHOLD mechanism); changed
 *	EIOCGETP to zero the pad word for future compatibility; changed
 *	enwrite() to enforce a correct source host address on output
 *	packets (V3.05e).
 *	(Stanford already uses a long timeout value and has no pad word - JCM)
 *	[Last change before the 4.2BSD conversion starts.]
 *
 * 01-Dec-81  Mike Accetta (mja) at Carnegie-Mellon University
 *	Fixed a bug in timeout handling caused by a missing "break" in the
 *	"switch" state check within enetread().  This caused all reads
 *	to be preceded by a bogus timeout.  In addition, fixed another
 *	bug in signal processing by also recording the process ID of the
 *	process to signal when an input packet is available.  This is
 *	necessary because it is possible for a process with an enabled
 *	signal to fork and exit with no guarantee that the child will
 *	re-enable the signal.  Thus under appropriately bizarre race
 *	conditions, an incoming packet to the child can cause a signal
 *	to be sent to the unsuspecting process which inherited the
 *	process slot of the parent.  Of course, if the PIDs wrap around
 *	AND the inheriting process has the same PID, well ... (V3.03d).
 *
 * 22-Feb-80  Rick Rashid (rfr) at Carnegie-Mellon University
 *	Rewritten to provide multiple user access via user-settable
 *	filters (V1.05).
 *
 * 18-Jan-80  Mike Accetta (mja) at Carnegie-Mellon University
 *      Created (V1.00).
 *
 **********************************************************************
 */

#include "en.h"
#include "ec.h"
#include "il.h"
#include "de.h"
#include "enetfilter.h"

/* number of potential units */
#define NENET   (NEC + NEN + NIL + NDE)

#if (NENETFILTER > 0)

#define SUN_OPENI

#include "param.h"
#include "systm.h"
#include "mbuf.h"
#include "buf.h"
#include "dir.h"
#include "user.h"
#include "ioctl.h"
#include "map.h"
#include "proc.h"
#include "inode.h"
#include "file.h"
#include "tty.h"
#include "uio.h"

#include "protosw.h"
#include "socket.h"
#include "../net/if.h"

#undef  queue
#undef  dequeue
#include "../net/enet.h"
#include "../net/enetdefs.h"

#if (NENETFILTER < 32)
#undef  NENETFILTER
#define NENETFILTER 32
#endif

#if (NENETFILTER > 256)
#undef  NENETFILTER
#define NENETFILTER 256     /* maximum number of minor devices */
#endif

#define DEBUG   1
/* #define INNERDEBUG 1 */  /* define only when debugging enDoFilter()
			       or enInputDone() */

#define enprintf(flags) if (enDebug&(flags)) printf

/*
 * Symbolic definitions for enDebug flag bits
 *	ENDBG_TRACE should be 1 because it is the most common
 *	use in the code, and the compiler generates faster code
 *	for testing the low bit in a word.
 */

#define ENDBG_TRACE 1   /* trace most operations */
#define ENDBG_DESQ  2   /* trace descriptor queues */
#define ENDBG_INIT  4   /* initialization info */
#define ENDBG_SCAV  8   /* scavenger operation */
#define ENDBG_ABNORM    16  /* abnormal events */


#define min(a,b)        ( ((a)<=(b)) ? (a) : (b) )

#define splenet splimp  /* used to be spl6 but I'm paranoid */

#define PRINET  26          /* interruptible */

/*
 *  'enQueueElts' is the pool of packet headers used by the driver.
 *  'enPackets'   is the pool of packets used by the driver (these should
 *		  be allocated dynamically when this becomes possible).
 *  'enFreeq'     is the queue of available packets
 *  'enState'     is the driver state table per logical unit number
 *  'enUnit'  	  is the physical unit number table per logical unit number;
 *		  the first "attach"ed ethernet is logical unit 0, etc.
 *  'enUnitMap'	  maps minor device numbers onto interface unit #s
 *  'enAllocMap'  indicates if a minor device is allocated or free
 *  'enAllDescriptors' stores OpenDescriptors, indexed by minor device #
 *  'enFreeqMin'  is the minimum number of packets ever in the free queue
 *		  (for statistics purposes)
 *  'enScavenges' is the number of scavenges of the active input queues
 *		  (for statistics purposes)
 *  'enDebug'	  is a collection of debugging bits which enable trace and/or
 *		  diagnostic output as defined above (ENDBG_*)
 *  'enUnits'	  is the number of attached units
 *  'enOneCopy'   if true, then no packet is delivered to more than one minor
 *		  device
 */
struct enPacket enQueueElts[ENPACKETS];
struct enQueue  enFreeq;
struct enState  enState[NENET];
char    enUnitMap[NENETFILTER];
char    enAllocMap[NENETFILTER];
struct enOpenDescriptor
        enAllDescriptors[NENETFILTER];
int     enFreeqMin = ENPACKETS;
int     enScavenges = 0;
int     enDebug = ENDBG_ABNORM;
int     enUnits = 0;
int     enOneCopy = 0;
int     enMaxMinors = NENETFILTER;

/*
 *  Forward declarations for subroutines which return other
 *  than integer types.
 */
extern boolean enDoFilter();


/*
 * Linkages to if_en.c
 */

struct enet_info {
    struct ifnet *ifp;  /* which ifp for output */
} enet_info[NENET];

struct sockaddr enetaf = { AF_IMPLINK };


/****************************************************************
 *								*
 *		Various Macros & Routines			*
 *								*
 ****************************************************************/

/*
 *  forAllOpenDescriptors(p) -- a macro for iterating
 *  over all currently open devices.  Use it in place of
 *      "for ( ...; ... ; ... )"
 *  and supply your own loop body.  The loop variable is the
 *  parameter p, which is set to point to the descriptor for
 *  each open device in turn.
 */

#define forAllOpenDescriptors(p)                    \
    for ((p) = (struct enOpenDescriptor *)enDesq.enQ_F;     \
          (struct Queue *)(&enDesq) != &((p)->enOD_Link);       \
          (p) = (struct enOpenDescriptor *)(p)->enOD_Link.F)
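The pointer juggling in this macro is easier to see in isolation.  What follows is a user-space sketch, not driver code: `Queue`, `Descriptor`, `initq`, and `insq` are invented stand-ins for the kernel's `struct Queue`, `struct enOpenDescriptor`, and queue primitives.  The walk starts at the head's forward link and stops when the cursor wraps back to the head itself:

```c
struct Queue { struct Queue *F, *B; };          /* forward, back links */
struct Descriptor { struct Queue Link; int id; };

static void initq(struct Queue *q) { q->F = q->B = q; }

static void insq(struct Queue *head, struct Queue *elt)
{   /* insert at the tail, i.e. between head->B and head */
    elt->F = head; elt->B = head->B;
    head->B->F = elt; head->B = elt;
}

/* build a 3-element descriptor queue and walk it the way the macro does */
static int walk_count(void)
{
    struct Queue desq;
    struct Descriptor d[3];
    struct Descriptor *p;
    int i, seen = 0;

    initq(&desq);
    for (i = 0; i < 3; i++)
        insq(&desq, &d[i].Link);

    /* forAllOpenDescriptors() expands to a loop of exactly this shape */
    for (p = (struct Descriptor *)desq.F;
         &desq != &p->Link;
         p = (struct Descriptor *)p->Link.F)
        seen++;
    return seen;
}
```

The cast from `struct Queue *` to the descriptor type is safe only because the link field is the first member of the structure, which is also what the driver relies on.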

/*
 *  enEnqueue - add an element to a queue
 */

#define enEnqueue(q, elt)                       \
{                                   \
    enqueue((struct Queue *)(q), (struct Queue *)(elt));        \
    (q)->enQ_NumQueued++;                       \
}

/*
 *  enFlushQueue - release all packets from a queue, freeing any
 *  whose reference counts drop to 0.  Assumes the caller
 *  is at high IPL so that the queue will not be modified while
 *  it is being flushed.
 */

enFlushQueue(q)
register struct enQueue *q;
{
    register struct enPacket *qelt;

    while ((qelt = (struct enPacket *)dequeue((struct Queue *)q)) != NULL)
    {
	if (0 == --(qelt->enP_RefCount))
	{
	    enEnqueue(&enFreeq, qelt);
	}
    }
    q->enQ_NumQueued = 0;
}

/*
 *  enInitWaitQueue - initialize an empty packet wait queue
 */

enInitWaitQueue(wq)
register struct enWaitQueue *wq;
{
    wq->enWQ_Head = 0;
    wq->enWQ_Tail = 0;
    wq->enWQ_NumQueued = 0;
    wq->enWQ_MaxWaiting = ENDEFWAITING;
}

/*
 *  enEnWaitQueue - add a packet to a wait queue
 */

enEnWaitQueue(wq, p)
register struct enWaitQueue *wq;
struct enPacket *p;
{
    wq->enWQ_Packets[wq->enWQ_Tail] = p;
    wq->enWQ_NumQueued++;
    enNextWaitQueueIndex(wq->enWQ_Tail);
}

/*
 *  enDeWaitQueue - remove a packet from a wait queue
 */

struct enPacket *
enDeWaitQueue(wq)
register struct enWaitQueue *wq;
{
    struct enPacket *p;

    wq->enWQ_NumQueued--;
    if (wq->enWQ_NumQueued < 0)
	panic("enDeWaitQueue");
    p = wq->enWQ_Packets[wq->enWQ_Head];
    enNextWaitQueueIndex(wq->enWQ_Head);

    return(p);
}
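The wait queue is a fixed-size ring: enEnWaitQueue() stores at the tail index and advances it, enDeWaitQueue() takes from the head index and advances that, and the en{Next,Prev}WaitQueueIndex macros wrap the indices around the array.  A minimal user-space sketch of the same discipline (`NSLOTS`, `ring`, and the helper names are invented here; the driver's capacity comes from ENMAXWAITING):

```c
#include <stddef.h>

#define NSLOTS 4    /* illustrative; stands in for ENMAXWAITING */

struct ring {
    void *slot[NSLOTS];
    int head, tail, n;
};

/* advance an index with wraparound, like enNextWaitQueueIndex() */
#define ring_next(i) ((i) = ((i) + 1) % NSLOTS)

static void ring_put(struct ring *r, void *p)
{
    r->slot[r->tail] = p;   /* store at the tail, then advance it */
    r->n++;
    ring_next(r->tail);
}

static void *ring_get(struct ring *r)
{
    void *p;

    if (r->n <= 0)
        return NULL;        /* the driver would panic("enDeWaitQueue") */
    r->n--;
    p = r->slot[r->head];   /* take from the head, then advance it */
    ring_next(r->head);
    return p;
}

static int ring_selftest(void)
{
    struct ring r = { { 0 }, 0, 0, 0 };
    int a, b;

    ring_put(&r, &a);
    ring_put(&r, &b);
    /* FIFO order: first in, first out */
    return ring_get(&r) == &a && ring_get(&r) == &b && ring_get(&r) == NULL;
}
```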

/*
 *  enTrimWaitQueue - cut a wait queue back to size
 */
enTrimWaitQueue(wq, threshold)
register struct enWaitQueue *wq;
int threshold;
{
    register int Counter = (wq->enWQ_NumQueued - threshold);
    register struct enPacket *p;

#ifdef  DEBUG
    enprintf(ENDBG_SCAV)
	    ("enTrimWaitQueue(%x, %d): %d\n", wq, threshold, Counter);
#endif
    while (Counter-- > 0)
    {
	wq->enWQ_NumQueued--;
	enPrevWaitQueueIndex(wq->enWQ_Tail);
	p = wq->enWQ_Packets[wq->enWQ_Tail];
	if (0 == --(p->enP_RefCount))
	{
	    m_freem(p->enP_mbuf);
	    enEnqueue(&enFreeq, p);
	}
    }
}

/*
 *  enFlushWaitQueue - remove all packets from a wait queue
 */

#define enFlushWaitQueue(wq)    enTrimWaitQueue(wq, 0)

/*
 *  scavenging thresholds:
 *
 *  index by the number of active files; for N open files, each queue may
 *  retain up to 1/Nth of the packets not guaranteed to be freed on scavenge.
 *  The total number of available packets is computed less one for sending.
 *
 *  (assumes high IPL)
 */
char enScavLevel[NENETFILTER+1];

/*
 *  enInitScavenge -- set up the ScavLevel table
 */
enInitScavenge()
{
    register int PoolSize = (ENPACKETS-ENMINSCAVENGE);
    register int i = sizeof(enScavLevel);

    PoolSize--;     /* leave one for the transmitter */
    while (--i > 0)
	enScavLevel[i] = (PoolSize / i);
}
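The threshold arithmetic can be checked in isolation.  This is a user-space sketch, not the driver table: the ENPACKETS and ENMINSCAVENGE values below are invented for illustration (the real constants live in enetdefs.h), and `NMINORS` stands in for NENETFILTER.  With i files open, each may retain pool/i packets, so the retention limit shrinks as opens increase:

```c
#define ENPACKETS      64   /* illustrative value, not the driver's */
#define ENMINSCAVENGE  16   /* illustrative value, not the driver's */
#define NMINORS         8   /* stands in for NENETFILTER */

static char scavLevel[NMINORS + 1];

/* same shape as enInitScavenge(): fill the 1/i-th retention table */
static void init_scavenge(void)
{
    int pool = (ENPACKETS - ENMINSCAVENGE) - 1; /* one left for transmit */
    int i;

    for (i = NMINORS; i > 0; i--)
        scavLevel[i] = pool / i;    /* i open files each keep 1/i-th */
}

/* retention limit when `nopens` files are open */
static int level_for(int nopens)
{
    init_scavenge();
    return scavLevel[nopens];
}
```

With the illustrative constants, the pool is 64 - 16 - 1 = 47, so one open file may hold 47 packets, two may hold 23 each, four may hold 11 each.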

/*
 *  enScavenge -- scan all OpenDescriptors for all ethernets, releasing
 *    any queued buffers beyond the prescribed limit and freeing any whose
 *    refcounts drop to 0.
 *    Assumes the caller is at high IPL so that it is safe to modify the
 *    queues.
 */
enScavenge()
{
    register struct enOpenDescriptor *d;
    register int threshold = 0;
    register struct enState *enStatep;

    for (enStatep = enState; enStatep < &enState[NENET]; enStatep++)
	threshold += enCurOpens;
    threshold = enScavLevel[threshold];

    /* recalculate thresholds based on current allocations */
    enInitScavenge();

    enScavenges++;
#ifdef  DEBUG
    enprintf(ENDBG_SCAV)("enScavenge: %d\n", threshold);
#endif
    for (enStatep = enState; enStatep < &enState[NENET]; enStatep++)
    {
	if (enDesq.enQ_F == 0)
	    continue;           /* never initialized */
	forAllOpenDescriptors(d)
	{
	    enTrimWaitQueue(&(d->enOD_Waiting), threshold);
	}
    }
}

/*
 *  enAllocatePacket - allocate the next packet from the free list
 *
 *  Assumes the IPL is at high priority so that it is safe to touch the
 *  packet queue.  If the queue is currently empty, scavenge for
 *  more packets.
 */

struct enPacket *
enAllocatePacket()
{
    register struct enPacket *p;

    if (0 == enFreeq.enQ_NumQueued)
	enScavenge();
    p = (struct enPacket *)dequeue((struct Queue *)&enFreeq);
    if (p == NULL)
	panic("enAllocatePacket");
    if (enFreeqMin > --enFreeq.enQ_NumQueued)
	enFreeqMin = enFreeq.enQ_NumQueued;

    p->enP_RefCount = 0;    /* just in case */

    return(p);
}

/*
 *  enDeallocatePacket - place the packet back on the free packet queue
 *
 *  (High IPL assumed.)
 */

#define enDeallocatePacket(p)                       \
{                                   \
    if ((p)->enP_RefCount) panic("enDeallocatePacket: refcount != 0");\
    enqueue((struct Queue *)&enFreeq, (struct Queue *)(p));     \
    enFreeq.enQ_NumQueued++;                    \
}
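The allocate/deallocate pair amounts to a free list guarded by a reference count: a packet may only return to the pool once nothing claims it, and allocation hands out a packet with its count reset.  A user-space sketch of that invariant (`pkt` and the helper names are invented; the driver's list is the doubly-linked enFreeq, here simplified to LIFO):

```c
#include <stdlib.h>

/* pkt stands in for struct enPacket; only the fields the free list needs */
struct pkt {
    struct pkt *next;
    int refcount;
};

static struct pkt *freeq;   /* simplified free list, like enFreeq */
static int nfree;

static void dealloc_pkt(struct pkt *p)
{
    if (p->refcount != 0)   /* same sanity check as enDeallocatePacket */
        abort();
    p->next = freeq;
    freeq = p;
    nfree++;
}

static struct pkt *alloc_pkt(void)
{
    struct pkt *p = freeq;

    if (p == NULL)          /* the driver scavenges, then panics */
        return NULL;
    freeq = p->next;
    nfree--;
    p->refcount = 0;        /* "just in case", as in enAllocatePacket */
    return p;
}

static int freelist_selftest(void)
{
    static struct pkt pool[2];

    dealloc_pkt(&pool[0]);
    dealloc_pkt(&pool[1]);
    /* LIFO here: the most recently freed packet comes back first */
    return alloc_pkt() == &pool[1] && alloc_pkt() == &pool[0]
        && alloc_pkt() == NULL && nfree == 0;
}
```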

/****************************************************************
 *								*
 *	    Routines to move uio data to/from mbufs		*
 *								*
 ****************************************************************/

/*
 * These two routines were inspired by/stolen from ../sys/uipc_socket.c
 *	Both return an error code (or 0 on success).
 */

/*
 * read: return the contents of the mbufs to the user.  DO NOT free them,
 *	since there may be multiple claims on the packet!
 */
enrmove(m, uio, count)
register struct mbuf *m;
register struct uio *uio;
register int count;
{
    register int len;
    register int error = 0;

    count = min(count, uio->uio_resid);	/* # of bytes to return */

    while ((count > 0) && m && (error == 0)) {
	len = min(count, m->m_len);	/* length of this transfer */
	count -= len;
	error = uiomove(mtod(m, caddr_t), (int)len, UIO_READ, uio);

	m = m->m_next;
    }
    return(error);
}
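enrmove()'s traversal can be illustrated without the uio and mbuf machinery.  The sketch below is user-space code with invented names (`chunk` stands in for `struct mbuf`, `chain_read` for enrmove, a plain byte buffer for the uio): clamp the requested count to the reader's remaining space, then walk the chain copying min(count, chunk length) from each link until either budget or chain runs out.

```c
#include <string.h>

/* chunk stands in for struct mbuf: a chain of variable-length buffers */
struct chunk {
    struct chunk *next;
    int len;
    const char *data;
};

/*
 * Copy up to `count` bytes from the chain into dst, clamped by `resid`
 * (the caller's remaining buffer space, like uio_resid).  Returns the
 * number of bytes copied; the chain itself is left untouched, just as
 * enrmove() must not free mbufs that other readers still claim.
 */
static int chain_read(const struct chunk *m, char *dst, int count, int resid)
{
    int copied = 0;
    int len;

    if (resid < count)
        count = resid;
    while (count > 0 && m != NULL) {
        len = m->len < count ? m->len : count;  /* min(count, m_len) */
        memcpy(dst + copied, m->data, len);
        copied += len;
        count -= len;
        m = m->next;
    }
    return copied;
}

static int chain_selftest(void)
{
    struct chunk c2 = { NULL, 2, "de" };
    struct chunk c1 = { &c2, 3, "abc" };
    char buf[8];

    /* the packet holds "abcde", but the reader only has room for 4 bytes */
    return chain_read(&c1, buf, 5, 4) == 4 && memcmp(buf, "abcd", 4) == 0;
}
```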

enwmove(uio, mbufp)
register struct uio *uio;
register struct mbuf **mbufp;   /* the top mbuf is returned by reference */
{
    struct mbuf *mtop = 0;
    register struct mbuf *m;
    register struct mbuf **mp = &mtop;
    register struct iovec *iov;
    register int len;
    int error = 0;

    while ((uio->uio_resid > 0) && (error == 0)) {
	iov = uio->uio_iov;

	if (iov->iov_len == 0) {
	    uio->uio_iov++;
	    uio->uio_iovcnt--;
	    if (uio->uio_iovcnt < 0)
		panic("enwmove: uio_iovcnt < 0 while uio_resid > 0");
	    continue;	/* re-fetch iov; don't allocate for an empty iovec */
	}
	MGET(m, M_WAIT, MT_DATA);
	if (m == NULL) {
	    error = ENOBUFS;
	    break;
	}
	if (iov->iov_len >= CLBYTES) {  /* big enough to use a page */
	    register struct mbuf *p;

	    MCLGET(p, 1);
	    if (p == 0)
		goto nopages;
	    m->m_off = (int)p - (int)m;
	    len = CLBYTES;
	}
	else {
nopages:
	    len = MIN(MLEN, iov->iov_len);
	}
	error = uiomove(mtod(m, caddr_t), len, UIO_WRITE, uio);
	m->m_len = len;
	*mp = m;
	mp = &(m->m_next);
    }

    if (error) {        /* probably uiomove fouled up */
	if (mtop)
	    m_freem(mtop);
    }
    else {
	*mbufp = mtop;  /* return a ptr to the top mbuf */
    }
    return(error);
}

/*
 *  enetopen - open the ether net device
 *
 *  Errors:	ENXIO	- illegal minor device number
 *		EBUSY	- minor device already in use
 */

/* ARGSUSED */
enetopen(dev, flag, newmin)
dev_t dev;
int flag;
int *newmin;
{
    register int md;
    register int unit = minor(dev);
    register struct enState *enStatep;
#ifndef SUN_OPENI
    register int error;
#endif /* SUN_OPENI */

    /*
     * Each open enet file has a different minor device number.
     * When a user tries to open any of them, we actually open
     * any available minor device and associate it with the
     * corresponding unit.
     *
     * This is not elegant, but UNIX will call
     * open for each new open file using the same inode but calls
     * close only when the last open file referring to the inode
     * is released.  This means that we cannot know inside the
     * driver code when the resources associated with a particular
     * open of the same inode should be deallocated.  Thus, we have
     * to make up a temporary inode to represent each simultaneous
     * open of the ethernet.  Each inode has a different minor device
     * number.
     */

#ifdef  DEBUG
    enprintf(ENDBG_TRACE)("enetopen(%o, %x):\n", unit, flag);
#endif

    /* check for an illegal minor dev */
    if ( (unit >= enUnits)              /* bad unit */
	|| (enet_info[unit].ifp == 0)           /* ifp not known */
	|| ((enet_info[unit].ifp->if_flags & IFF_UP) == 0) )
						/* or interface down */
    {
	return(ENXIO);
    }

    md = enFindMinor();
#ifdef  DEBUG
    enprintf(ENDBG_TRACE)("enetopen: md = %d\n", md);
#endif
    if (md < 0)
    {
	return(EBUSY);
    }

    enUnitMap[md] = unit;
    enAllocMap[md] = TRUE;

#ifdef  SUN_OPENI
    *newmin = md;
#else
    error = mkpseudo(makedev(major(dev), md));
    if (error) {
	enAllocMap[md] = FALSE;
	return(error);
    }
#endif /* SUN_OPENI */

    enStatep = &enState[unit];
    enprintf(ENDBG_DESQ)
	("enetopen: Desq: %x, %x\n", enDesq.enQ_F, enDesq.enQ_B);
    enInitDescriptor(&enAllDescriptors[md], flag);
    enInsertDescriptor(&(enDesq), &enAllDescriptors[md]);

    return(0);
}

/*
 * enFindMinor - find a free minor device
 */
enFindMinor()
{
    register int md;

    for (md = 0; md < enMaxMinors; md++) {
	if (enAllocMap[md] == FALSE)
	    return(md);
    }
    return(-1);
}

/*
 *  enInit - initialize an ethernet unit (called by enetattach)
 */

enInit(enStatep, unit)
register struct enState *enStatep;
register int unit;
{
#ifdef  DEBUG
    enprintf(ENDBG_INIT)("enInit(%x %d):\n", enStatep, unit);
#endif

    /*  initialize the free queue if not already done  */
    if (enFreeq.enQ_F == 0)
    {
	register int i;

	initqueue((struct Queue *)&enFreeq);
	for (i = 0; i < ENPACKETS; i++)
	{
	    register struct enPacket *p;

	    p = &enQueueElts[i];
	    p->enP_RefCount = 0;
	    enDeallocatePacket(p);
	}
	/* also a good time to init enAllocMap */
	for (i = 0; i < enMaxMinors; i++)
	    enAllocMap[i] = FALSE;
    }
    initqueue((struct Queue *)&enDesq);	/* init the descriptor queue */
}

/*
 *  enetclose - ether net device close routine
 */

/* ARGSUSED */
enetclose(dev, flag)
dev_t dev;
int flag;
{
    register int md = ENINDEX(dev);
    register struct enState *enStatep = &enState[ENUNIT(dev)];
    register struct enOpenDescriptor *d = &enAllDescriptors[md];
    int ipl;

    enAllocMap[md] = FALSE;

#ifdef  DEBUG
    enprintf(ENDBG_TRACE)("enetclose(%d, %x):\n", md, flag);
#endif

    /*
     *  ensure that the receiver doesn't try to queue something
     *  for the device as we are decommissioning it.
     *  (I don't think this is necessary, but I'm a coward.)
     */
    ipl = splenet();
    dequeue((struct Queue *)d->enOD_Link.B);
    enCurOpens--;
    enprintf(ENDBG_DESQ)
	    ("enetclose: Desq: %x, %x\n", enDesq.enQ_F, enDesq.enQ_B);
    enFlushWaitQueue(&(d->enOD_Waiting));
    splx(ipl);
}

/*
 *  enetread - read the next packet from the net
 */

/* VARARGS */
enetread(dev, uio)
dev_t dev;
register struct uio *uio;
{
    register struct enOpenDescriptor *d = &enAllDescriptors[ENINDEX(dev)];
    register struct enPacket *p;
    int ipl;
    int error;
    extern enTimeout();

#ifdef DEBUG
    enprintf(ENDBG_TRACE)("enetread(%x):", dev);
#endif

    ipl = splenet();
    /*
     *  If nothing is on the queue of packets waiting for
     *  this open enet file, then set a timer and sleep until
     *  either the timeout has occurred or a packet has
     *  arrived.
     */
    while (0 == d->enOD_Waiting.enWQ_NumQueued)
    {
	if (d->enOD_Timeout < 0)
	{
	    splx(ipl);
	    return(0);
	}
	if (d->enOD_Timeout)
	{
	    /*
	     *  If there was a previous timeout pending for this file,
	     *  cancel it before setting another.  This is necessary since
	     *  a cancel after the sleep might never happen if the read is
	     *  interrupted by a signal.
	     */
	    if (d->enOD_RecvState == ENRECVTIMING)
		untimeout(enTimeout, (caddr_t)d);
	    timeout(enTimeout, (caddr_t)d, (int)(d->enOD_Timeout));
	    d->enOD_RecvState = ENRECVTIMING;
	}
	else
	    d->enOD_RecvState = ENRECVIDLE;

	sleep((caddr_t)d, PRINET);

	switch (d->enOD_RecvState)
	{
	    case ENRECVTIMING:
	    {
		untimeout(enTimeout, (caddr_t)d);
		d->enOD_RecvState = ENRECVIDLE;
		break;
	    }
	    case ENRECVTIMEDOUT:
	    {
		splx(ipl);
		return(0);
	    }
	}
    }

    p = enDeWaitQueue(&(d->enOD_Waiting));
    splx(ipl);

    /*
     * Move data from the packet into user space.
     */
    error = enrmove(p->enP_mbuf, uio, p->enP_ByteCount);

    ipl = splenet();
    if (0 == --(p->enP_RefCount))   /* if no more claims on this packet */
    {
	m_freem(p->enP_mbuf);   /* release the mbuf */
	enDeallocatePacket(p);  /* and the packet */
    }
    splx(ipl);

    return(error);
}

/*
 *  enTimeout - process an ethernet read timeout
 */

enTimeout(d)
register struct enOpenDescriptor *d;
{
    register int ipl;

#ifdef  DEBUG
    enprintf(ENDBG_TRACE)("enTimeout(%x):\n", d);
#endif
    ipl = splenet();
    d->enOD_RecvState = ENRECVTIMEDOUT;
    wakeup((caddr_t)d);
    enetwakeup(d);
    splx(ipl);
}

/*
 *  enetwrite - write the next packet to the net
 */

int enKludgeSleep[NENET];   /* number of procs sleeping on IF_QFULL,
			       per unit */

/* VARARGS */
enetwrite(dev, uio)
dev_t dev;
register struct uio *uio;
{
    register int unit = ENUNIT(dev);
    register struct enState *enStatep = &enState[unit];
    struct mbuf *mp;
    register struct ifnet *ifp = enet_info[unit].ifp;
    int ipl;
    int error;
    int sleepcount;
    int enKludgeTime();

#ifdef DEBUG
    enprintf(ENDBG_TRACE)("enetwrite(%x):\n", dev);
#endif

    if (uio->uio_resid == 0)
	return(0);
    if (uio->uio_resid > ifp->if_mtu)	/* too large */
	return(EMSGSIZE);

    /*
     * Copy the user data into mbufs.
     */
    if (error = enwmove(uio, &mp)) {
	return(error);
    }

    ipl = splenet();
    /*
     * If the queue is full, hang around until there's room
     * or until the process is interrupted.
     */
    sleepcount = 0;
    while (IF_QFULL(&(ifp->if_snd))) {
	extern int hz;

	if (sleepcount++ > 2) {	/* don't sleep too long */
	    splx(ipl);
	    return(ETIMEDOUT);
	}
	/* if nobody else has a timeout pending for this unit, set one */
	if (enKludgeSleep[unit] == 0)
	    timeout(enKludgeTime, (caddr_t)unit, 2 * hz);
	enKludgeSleep[unit]++;	/* record that we are sleeping */
	if (setjmp(&u.u_qsave)) {
	    /* the sleep (following) was interrupted; clean up */
#ifdef DEBUG
	    enprintf(ENDBG_ABNORM)
		("enetwrite(%x): enet%d sleep %d interrupted\n", dev,
		unit, enKludgeSleep[unit]);
#endif /* DEBUG */
	    enKludgeSleep[unit]--;	/* we're no longer sleeping */
	    m_freem(mp);
	    splx(ipl);
	    return(EINTR);
	}
	sleep((caddr_t)&(enKludgeSleep[unit]), PRINET);
	enKludgeSleep[unit]--;	/* we are no longer sleeping */
    }

    /* place the mbuf chain on the outgoing queue & start if necessary */
    error = (*ifp->if_output)(ifp, mp, &enetaf);
	    /* this always frees the mbuf chain */
    enXcnt++;

    splx(ipl);

    return(error);
}

enKludgeTime(unit)
int unit;
{
    /* XXX perhaps we should always wakeup? */
    if (enKludgeSleep[unit]) {
	wakeup((caddr_t)&(enKludgeSleep[unit]));
	/* XXX should we restart the transmitter? */
    }
}
 949: 
 950: /*
 951:  *  enetioctl - Ethernet control
 952:  *
 953:  *  EIOCGETP	 - get ethernet parameters
 954:  *  EIOCSETP	 - set ethernet read timeout
 955:  *  EIOCSETF	 - set ethernet read filter
 956:  *  EIOCENBS	 - enable signal when read packet available
 957:  *  EIOCINHS     - inhibit signal when read packet available
 958:  *  FIONREAD	 - check for read packet available
 959:  *  EIOCSETW	 - set maximum read packet waiting queue length
 960:  *  EIOCFLUSH	 - flush read packet waiting queue
 961:  *  EIOCMBIS	 - set mode bits
 962:  *  EIOCMBIC	 - clear mode bits
 963:  *  EIOCDEVP	 - get device parameters
 964:  *  EIOCMFREE	 - number of free minors
 965:  */
 966: 
 967: /* ARGSUSED */
 968: enetioctl(dev, cmd, addr, flag)
 969: dev_t dev;
 970: caddr_t addr;
 971: {
 972: 
 973:     register struct enState *enStatep = &enState[ENUNIT(dev)];
 974:     register struct enOpenDescriptor * d = &enAllDescriptors[ENINDEX(dev)];
 975:     int ipl;
 976: 
 977: #if DEBUG
 978:     enprintf(ENDBG_TRACE)
 979:             ("enetioctl(%x, %x, %x, %x):\n", dev, cmd, addr, flag);
 980: #endif
 981: 
 982:     switch (cmd)
 983:     {
 984:     case EIOCGETP:
 985:     {
 986:         struct eniocb t;
 987: 
 988:         t.en_maxwaiting = ENMAXWAITING;
 989:         t.en_maxpriority = ENMAXPRI;
 990:         t.en_rtout = d->enOD_Timeout;
 991:         t.en_addr = -1;
 992:         t.en_maxfilters = ENMAXFILTERS;
 993: 
 994:         bcopy((caddr_t)&t, addr, sizeof t);
 995:     }
 996:     endcase
 997: 
 998:     case EIOCSETP:
 999:     {
1000:         struct eniocb t;
1001: 
1002:         bcopy(addr, (caddr_t)&t, sizeof t);
1003:         d->enOD_Timeout = t.en_rtout;
1004:     }
1005:     endcase
1006: 
1007:     case EIOCSETF:
1008:     {
1009:         struct enfilter f;
1010:         unsigned short *fp;
1011: 
1012:         bcopy(addr, (caddr_t)&f, sizeof f);
1013:         if (f.enf_FilterLen > ENMAXFILTERS)
1014:         {
1015:             return(EINVAL);
1016:         }
1017:         /* ensure that filter is installed indivisibly */
1018:         ipl = splenet();
1019:         bcopy((caddr_t)&f, (caddr_t)&(d->enOD_OpenFilter), sizeof f);
1020:         /* pre-compute address of end of filter */
1021:         fp = &(d->enOD_OpenFilter.enf_Filter[0]);
1022:         d->enOD_FiltEnd = &(fp[d->enOD_OpenFilter.enf_FilterLen]);
1023:         d->enOD_RecvCount = 0;  /* reset count when filter changes */
1024:         dequeue((struct Queue *)d->enOD_Link.B);
1025:         enDesq.enQ_NumQueued--;
1026:         enInsertDescriptor(&(enDesq), d);
1027:         splx(ipl);
1028:     }
1029:     endcase
1030: 
1031:     /*
1032: 	 *  Enable signal n on input packet
1033: 	 */
1034:     case EIOCENBS:
1035:     {
1036:         int snum;
1037: 
1038:         bcopy(addr, (caddr_t)&snum, sizeof snum);
1039:         if (snum < NSIG) {
1040:             d->enOD_SigProc = u.u_procp;
1041:             d->enOD_SigPid  = u.u_procp->p_pid;
1042:             d->enOD_SigNumb = snum; /* This must be set last */
1043:         } else {
1044:             goto bad;
1045:         }
1046:     }
1047:     endcase
1048: 
1049:     /*
1050: 	 *  Disable signal on input packet
1051: 	 */
1052:     case EIOCINHS:
1053:     {
1054:         d->enOD_SigNumb = 0;
1055:     }
1056:     endcase
1057: 
1058:     /*
1059: 	 *  Check for packet waiting
1060: 	 */
1061:     case FIONREAD:
1062:     {
1063:         int n;
1064:         register struct enWaitQueue *wq;
1065: 
1066:         ipl = splenet();
1067:         if ((wq = &(d->enOD_Waiting))->enWQ_NumQueued)
1068:             n = wq->enWQ_Packets[wq->enWQ_Head]->enP_ByteCount;
1069:         else
1070:             n = 0;
1071:         splx(ipl);
1072:         bcopy((caddr_t)&n, addr, sizeof n);
1073:     }
1074:     endcase
1075: 
1076:     /*
1077: 	 *  Set maximum recv queue length for a device
1078: 	 */
1079:     case EIOCSETW:
1080:     {
1081:         unsigned un;
1082: 
1083:         bcopy(addr, (caddr_t)&un, sizeof un);
1084:         /*
1085:          *  unsigned un          MaxQueued
1086:          * ----------------     ------------
1087:          *  0                -> DEFWAITING
1088:          *  1..MAXWAITING    -> un
1089:          *  MAXWAITING..-1   -> MAXWAITING
1090:          */
1091:         d->enOD_Waiting.enWQ_MaxWaiting = (un) ? min(un, ENMAXWAITING)
1092:                                                : ENDEFWAITING;
1093:     }
1094:     endcase
1095: 
1096:     /*
1097: 	 *  Flush all packets queued for a device
1098: 	 */
1099:     case EIOCFLUSH:
1100:     {
1101:         ipl = splenet();
1102:         enFlushWaitQueue(&(d->enOD_Waiting));
1103:         splx(ipl);
1104:     }
1105:     endcase
1106: 
1107:     /*
1108: 	 *  Set mode bits
1109: 	 */
1110:     case EIOCMBIS:
1111:     {
1112:         u_short mode;
1113: 
1114:         bcopy(addr, (caddr_t)&mode, sizeof mode);
1115:         if (mode&ENPRIVMODES)
1116:             return(EINVAL);
1117:         else
1118:             d->enOD_Flag |= mode;
1119:     }
1120:     endcase
1121: 
1122:     /*
1123: 	 *  Clear mode bits
1124: 	 */
1125:     case EIOCMBIC:
1126:     {
1127:         u_short mode;
1128: 
1129:         bcopy(addr, (caddr_t)&mode, sizeof mode);
1130:         if (mode&ENPRIVMODES)
1131:             return(EINVAL);
1132:         else
1133:             d->enOD_Flag &= ~mode;
1134:     }
1135:     endcase
1136: 
1137:     /*
1138: 	 * Return hardware-specific device parameters.
1139: 	 */
1140:     case EIOCDEVP:
1141:     {
1142:         bcopy((caddr_t)&(enDevParams), addr, sizeof(struct endevp));
1143:     }
1144:     endcase;
1145: 
1146:     /*
1147: 	 * Return # of free minor devices.
1148: 	 */
1149:     case EIOCMFREE:
1150:     {
1151:         register int md;
1152:         register int sum = 0;
1153: 
1154:         for (md = 0; md < enMaxMinors; md++)
1155:             if (enAllocMap[md] == FALSE)
1156:                 sum++;
1157:         *(int *)addr = sum;
1158:     }
1159:     endcase;
1160: 
1161:     default:
1162:     {
1163:     bad:
1164:         return(EINVAL);
1165:     }
1166:     }
1167: 
1168:     return(0);
1169: 
1170: }
1171: 
1172: /****************************************************************
1173:  *								*
1174:  *		Support for select() system call		*
1175:  *								*
1176:  *	Other hooks in:						*
1177:  *		enInitDescriptor()				*
1178:  *		enInputDone()					*
1179:  *		enTimeout()					*
1180:  ****************************************************************/
1181: /*
1182:  * inspired by the code in tty.c for the same purpose.
1183:  */
1184: 
1185: /*
1186:  * enetselect - returns true iff the specified operation
1187:  *	will not block indefinitely; otherwise, returns
1188:  *	false but makes a note that a selwakeup() must be done.
1189:  */
1190: enetselect(dev, rw)
1191: register dev_t dev;
1192: int rw;
1193: {
1194:     register struct enOpenDescriptor *d;
1195:     register struct enWaitQueue *wq;
1196:     register int ipl;
1197:     register int avail;
1198: 
1199:     switch (rw) {
1200: 
1201:     case FREAD:
1202:         /*
1203: 		 * an imitation of the FIONREAD ioctl code
1204: 		 */
1205:         d = &(enAllDescriptors[ENINDEX(dev)]);
1206: 
1207:         ipl = splenet();
1208:         wq = &(d->enOD_Waiting);
1209:         if (wq->enWQ_NumQueued)
1210:             avail = 1;  /* at least one packet queued */
1211:         else {
1212:             avail = 0;  /* sorry, nothing queued now */
1213:             /*
1214: 			 * If there's already a select() waiting on this
1215: 			 * minor device then this is a collision.
1216: 			 * [This shouldn't happen because enet minors
1217: 			 * really should not be shared, but if a process
1218: 			 * forks while one of these is open, it is possible
1219: 			 * that both processes could select() us.]
1220: 			 */
1221:             if (d->enOD_SelProc
1222:                  && d->enOD_SelProc->p_wchan == (caddr_t)&selwait)
1223:                     d->enOD_SelColl = 1;
1224:             else
1225:                 d->enOD_SelProc = u.u_procp;
1226:         }
1227:         splx(ipl);
1228:         return(avail);
1229: 
1230:     case FWRITE:
1231:         /*
1232: 		 * since the queueing for output is shared not just with
1233: 		 * the other enet devices but also with the IP system,
1234: 		 * we can't predict what would happen on a subsequent
1235: 		 * write.  However, since we presume that all writes
1236: 		 * complete eventually, and probably fairly fast, we
1237: 		 * pretend that select() is true.
1238: 		 */
1239:         return(1);
1240: 
1241:     default:        /* hmmm. */
1242:         return(1);      /* don't block in select() */
1243:     }
1244: }
1245: 
1246: enetwakeup(d)
1247: register struct enOpenDescriptor *d;
1248: {
1249:     if (d->enOD_SelProc) {
1250:         selwakeup(d->enOD_SelProc, d->enOD_SelColl);
1251:         d->enOD_SelColl = 0;
1252:         d->enOD_SelProc = 0;
1253:     }
1254: }
1255: 
1256: /*
1257:  * enetFilter - incoming linkage from ../vaxif/if_en.c
1258:  */
1259: 
1260: enetFilter(en, m, count)
1261: register int en;
1262: register struct mbuf *m;
1263: register int count;
1264: {
1265:     register struct enState *enStatep = &enState[en];
1266:     register struct enPacket *p;
1267:     register int pullcount; /* bytes, not words */
1268:     int s = splenet();
1269: 
1270: #if DEBUG
1271:     enprintf(ENDBG_TRACE)("enetFilter(%d):\n", en);
1272: #endif
1273: 
1274:     p = enAllocatePacket(); /* panics if not possible */
1275: 
1276:     p->enP_ByteCount = count;
1277: 
1278:     pullcount = min(MLEN, count);   /* largest possible first mbuf */
1279:     if (m->m_len < pullcount) {
1280:         /* first mbuf not as full as it could be - fix this */
1281:         if ((m = m_pullup(m, pullcount)) == 0) {
1282:             /* evidently no resources; bloody m_pullup discarded mbuf */
1283:             enDeallocatePacket(p);
1284:             enRdrops++;
1285:             goto out;
1286:         }
1287:     }
1288: 
1289:     p->enP_mbuf = m;
1290:     p->enP_Data = mtod(m, u_short *);
1291: 
1292:     enInputDone(enStatep, p);
1293: out:
1294:     splx(s);
1295: }
1296: 
1297: /*
1298:  * enInputDone - process correctly received packet
1299:  */
1300: 
1301: enInputDone(enStatep, p)
1302: register struct enState *enStatep;
1303: register struct enPacket *p;
1304: {
1305:     register struct enOpenDescriptor *d;
1306:     int queued = 0;
1307:     register int maxword;
1308:     register unsigned long rcount;
1309:     register struct enOpenDescriptor *prevd;
1310: 
1311: #if INNERDEBUG
1312:     enprintf(ENDBG_TRACE)("enInputDone(%x): %x\n", enStatep, p);
1313: #endif
1314:     /* precompute highest possible word offset */
1315:     /* can't address beyond end of packet or end of first mbuf */
1316:     maxword = (min(p->enP_ByteCount, p->enP_mbuf->m_len)>>1);
1317: 
1318:     forAllOpenDescriptors(d)
1319:     {
1320:         if (enDoFilter(p, d, maxword))
1321:         {
1322:             if (d->enOD_Waiting.enWQ_NumQueued < d->enOD_Waiting.enWQ_MaxWaiting)
1323:             {
1324:                 enEnWaitQueue(&(d->enOD_Waiting), p);
1325:                 p->enP_RefCount++;
1326:                 queued++;
1327:                 wakeup((caddr_t)d);
1328:                 enetwakeup(d);
1329: #if INNERDEBUG
1330:                 enprintf(ENDBG_TRACE)("enInputDone: queued\n");
1331: #endif
1332:             }
1333:             /*  send notification when input packet received  */
1334:             if (d->enOD_SigNumb) {
1335:                 if (d->enOD_SigProc->p_pid == d->enOD_SigPid)
1336:                     psignal(d->enOD_SigProc, d->enOD_SigNumb);
1337:                 if ((d->enOD_Flag & ENHOLDSIG) == 0)
1338:                     d->enOD_SigNumb = 0;        /* disable signal */
1339:             }
1340:             rcount = ++(d->enOD_RecvCount);
1341: 
1342:             /* see if ordering of filters is wrong */
1343:             if (d->enOD_OpenFilter.enf_Priority >= ENHIPRI) {
1344:                 prevd = (struct enOpenDescriptor *)d->enOD_Link.B;
1345:                 /*
1346:                  * If d is not the first element on the queue, and
1347:                  * the previous element is at equal priority but has
1348:                  * a lower count, then promote d to be in front of prevd.
1349:                  */
1350:                 if (((struct Queue *)prevd != &(enDesq.enQ_Head)) &&
1351:                     (d->enOD_OpenFilter.enf_Priority ==
1352:                      prevd->enOD_OpenFilter.enf_Priority)) {
1353:                     /* threshold difference to avoid thrashing */
1354:                     if ((100 + prevd->enOD_RecvCount) < rcount) {
1355:                         enReorderQueue(&(prevd->enOD_Link), &(d->enOD_Link));
1356:                     }
1357:                 }
1358:                 break;  /* high-priority filter => no more deliveries */
1359:             }
1360:             else if (enOneCopy)
1361:                 break;
1362:         }
1363:     }
1364:     if (queued == 0)            /* this buffer no longer in use */
1365:     {
1366:         m_freem(p->enP_mbuf);           /* free mbuf */
1367:         enDeallocatePacket(p);          /*  and packet */
1368:         enRdrops++;
1369:     }
1370:     else
1371:         enRcnt++;
1372: 
1373: }
1374: 
1375: #define opx(i)  (i>>ENF_NBPA)
1376: 
1377: boolean
1378: enDoFilter(p, d, maxword)
1379: struct enPacket *p;
1380: struct enOpenDescriptor *d;
1381: register int maxword;
1382: {
1383: 
1384:     register unsigned short *sp;
1385:     register unsigned short *fp;
1386:     register unsigned short *fpe;
1387:     register unsigned op;
1388:     register unsigned arg;
1389:     unsigned short stack[ENMAXFILTERS+1];
1390:     struct fw {unsigned arg:ENF_NBPA, op:ENF_NBPO;};
1391: 
1392: #ifdef  INNERDEBUG
1393:     enprintf(ENDBG_TRACE)("enDoFilter(%x,%x):\n", p, d);
1394: #endif
1395:     sp = &stack[ENMAXFILTERS];
1396:     fp = &d->enOD_OpenFilter.enf_Filter[0];
1397:     fpe = d->enOD_FiltEnd;
1398:     /* ^ is really: fpe = &fp[d->enOD_OpenFilter.enf_FilterLen]; */
1399:     *sp = TRUE;
1400: 
1401:     for (; fp < fpe; )
1402:     {
1403:         op = ((struct fw *)fp)->op;
1404:         arg = ((struct fw *)fp)->arg;
1405:         fp++;
1406:         switch (arg)
1407:         {
1408:         default:
1409:             arg -= ENF_PUSHWORD;
1410: #ifndef lint
1411:             /*
1412:              * This next test is a little bogus; since arg
1413:              * is unsigned, it is always >= 0 (the compiler
1414:              * knows this and emits no code).  If arg were
1415:              * less than ENF_PUSHWORD before the subtract,
1416:              * it is certainly going to be more than maxword
1417:              * afterward, so the code does work "right".
1418:              */
1419:             if ((arg >= 0) && (arg < maxword))
1420: #else
1421:             if (arg < maxword)
1422: #endif	lint
1423:                 *--sp = p->enP_Data[arg];
1424:             else
1425:             {
1426: #ifdef  INNERDEBUG
1427:                 enprintf(ENDBG_TRACE)("=>0(len)\n");
1428: #endif
1429:                 return(false);
1430:             }
1431:             break;
1432:         case ENF_PUSHLIT:
1433:             *--sp = *fp++;
1434:             break;
1435:         case ENF_PUSHZERO:
1436:             *--sp = 0;          /* fall through to NOPUSH */
1437:         case ENF_NOPUSH:
1438:             break;
1439:         }
1440:         if (sp < &stack[2]) /* check stack overflow: small yellow zone */
1441:         {
1442:             enprintf(ENDBG_TRACE)("=>0(--sp)\n");
1443:             return(false);
1444:         }
1445:         if (op == ENF_NOP)
1446:             continue;
1447:         /*
1448:          * all non-NOP operators are binary, and must have at least two
1449:          * operands on the stack to evaluate.
1450:          */
1451:         if (sp > &stack[ENMAXFILTERS-2])
1452:         {
1453:             enprintf(ENDBG_TRACE)("=>0(sp++)\n");
1454:             return(false);
1455:         }
1456:         arg = *sp++;
1457:         switch (op)
1458:         {
1459:         default:
1460: #ifdef  INNERDEBUG
1461:             enprintf(ENDBG_TRACE)("=>0(def)\n");
1462: #endif
1463:             return(false);
1464:         case opx(ENF_AND):
1465:             *sp &= arg;
1466:             break;
1467:         case opx(ENF_OR):
1468:             *sp |= arg;
1469:             break;
1470:         case opx(ENF_XOR):
1471:             *sp ^= arg;
1472:             break;
1473:         case opx(ENF_EQ):
1474:             *sp = (*sp == arg);
1475:             break;
1476:         case opx(ENF_NEQ):
1477:             *sp = (*sp != arg);
1478:             break;
1479:         case opx(ENF_LT):
1480:             *sp = (*sp < arg);
1481:             break;
1482:         case opx(ENF_LE):
1483:             *sp = (*sp <= arg);
1484:             break;
1485:         case opx(ENF_GT):
1486:             *sp = (*sp > arg);
1487:             break;
1488:         case opx(ENF_GE):
1489:             *sp = (*sp >= arg);
1490:             break;
1491: 
1492:         /* short-circuit operators */
1493: 
1494:         case opx(ENF_COR):
1495:             if (*sp++ == arg) {
1496: #ifdef  INNERDEBUG
1497:                 enprintf(ENDBG_TRACE)("=>COR %x\n", *sp);
1498: #endif
1499:                 return(true);
1500:             }
1501:             break;
1502:         case opx(ENF_CAND):
1503:             if (*sp++ != arg) {
1504: #ifdef  INNERDEBUG
1505:                 enprintf(ENDBG_TRACE)("=>CAND %x\n", *sp);
1506: #endif
1507:                 return(false);
1508:             }
1509:             break;
1510:         case opx(ENF_CNOR):
1511:             if (*sp++ == arg) {
1512: #ifdef  INNERDEBUG
1513:                 enprintf(ENDBG_TRACE)("=>CNOR %x\n", *sp);
1514: #endif
1515:                 return(false);
1516:             }
1517:             break;
1518:         case opx(ENF_CNAND):
1519:             if (*sp++ != arg) {
1520: #ifdef  INNERDEBUG
1521:                 enprintf(ENDBG_TRACE)("=>CNAND %x\n", *sp);
1522: #endif
1523:                 return(true);
1524:             }
1525:             break;
1526:         }
1527:     }
1528: #ifdef  INNERDEBUG
1529:     enprintf(ENDBG_TRACE)("=>%x\n", *sp);
1530: #endif
1531:     return((boolean)*sp);
1532: 
1533: }
1534: 
1535: enInitDescriptor(d, flag)
1536: register struct enOpenDescriptor *d; int flag;
1537: {
1538: 
1539: #if DEBUG
1540:     enprintf(ENDBG_TRACE)("enInitDescriptor(%x):\n", d);
1541: #endif
1542:     d->enOD_RecvState = ENRECVIDLE;
1543:     d->enOD_OpenFilter.enf_FilterLen = 0;
1544:     d->enOD_OpenFilter.enf_Priority = 0;
1545:     d->enOD_FiltEnd = &(d->enOD_OpenFilter.enf_Filter[0]);
1546:     d->enOD_RecvCount = 0;
1547:     d->enOD_Timeout = 0;
1548:     d->enOD_SigNumb = 0;
1549:     d->enOD_Flag = flag;
1550:     d->enOD_SelColl = 0;
1551:     d->enOD_SelProc = 0;        /* probably unnecessary */
1552:     /*
1553:      * Remember the PID that opened us, at least until some process
1554:      * sets a signal for this minor device
1555:      */
1556:     d->enOD_SigPid = u.u_procp->p_pid;
1557: 
1558:     enInitWaitQueue(&(d->enOD_Waiting));
1559: #if DEBUG
1560:     enprintf(ENDBG_TRACE)("=>eninitdescriptor\n");
1561: #endif
1562: 
1563: }
1564: 
1565: /*
1566:  *  enInsertDescriptor - insert open descriptor in queue ordered by priority
1567:  */
1568: 
1569: enInsertDescriptor(q, d)
1570: register struct enQueue *q;
1571: register struct enOpenDescriptor *d;
1572: {
1573:     struct enOpenDescriptor * nxt;
1574:     register int ipl;
1575: 
1576:     ipl = splenet();
1577:     nxt = (struct enOpenDescriptor *)q->enQ_F;
1578:     while ((struct Queue *)q != &(nxt->enOD_Link))
1579:     {
1580:         if (d->enOD_OpenFilter.enf_Priority > nxt->enOD_OpenFilter.enf_Priority)
1581:             break;
1582:         nxt = (struct enOpenDescriptor *)nxt->enOD_Link.F;
1583:     }
1584:     enqueue((struct Queue *)&(nxt->enOD_Link),(struct Queue *)&(d->enOD_Link));
1585:     enprintf(ENDBG_DESQ)("enID: Desq: %x, %x\n", q->enQ_F, q->enQ_B);
1586:     q->enQ_NumQueued++;
1587:     splx(ipl);
1588: 
1589: }
1590: 
1591: int enReorderCount = 0;     /* for external monitoring */
1592: 
1593: /*
1594:  * enReorderQueue - swap order of two elements in queue
1595:  *	assumed to be called at splenet
1596:  */
1597: enReorderQueue(first, last)
1598: register struct Queue *first;
1599: register struct Queue *last;
1600: {
1601:     register struct Queue *prev;
1602:     register struct Queue *next;
1603: 
1604:     enprintf(ENDBG_DESQ)("enReorderQ: %x, %x\n", first, last);
1605: 
1606:     enReorderCount++;
1607: 
1608:     /* get pointers to other queue elements */
1609:     prev = first->B;
1610:     next = last->F;
1611: 
1612:     /*
1613:      * no more reading from queue elements; this ensures that
1614:      * the code works even if there are fewer than 4 elements
1615:      * in the queue.
1616:      */
1617: 
1618:     prev->F = last;
1619:     next->B = first;
1620: 
1621:     last->B = prev;
1622:     last->F = first;
1623: 
1624:     first->F = next;
1625:     first->B = last;
1626: }
1627: 
1628: enetattach(ifp, devp)
1629: struct ifnet *ifp;
1630: struct endevp *devp;
1631: {
1632:     register struct enState *enStatep = &enState[enUnits];
1633: 
1634: #ifdef  DEBUG
1635:     enprintf(ENDBG_INIT) ("enetattach: type %d, addr ", devp->end_dev_type);
1636:     if (enDebug&ENDBG_INIT) {
1637:     register int i;
1638:     for (i = 0; i < devp->end_addr_len; i++)
1639:         printf("%o ", devp->end_addr[i]);
1640:     printf("\n");
1641:     }
1642: #endif	DEBUG
1643: 
1644:     enet_info[enUnits].ifp = ifp;
1645: 
1646:     bcopy((caddr_t)devp, (caddr_t)&(enDevParams), sizeof(struct endevp));
1647: 
1648:     enInit(enStatep, enUnits);
1649: 
1650:     return(enUnits++);
1651: }
1652: 
1653: #endif	(NENETFILTER > 0)

Defined functions

enAllocatePacket defined in line 470; used 1 times
enDeWaitQueue defined in line 353; used 1 times
enDoFilter defined in line 1377; used 2 times
enEnWaitQueue defined in line 338; used 1 times
enFindMinor defined in line 674; used 1 times
enFlushQueue defined in line 302; never used
enInit defined in line 689; used 1 times
enInitDescriptor defined in line 1535; used 1 times
enInitScavenge defined in line 416; used 1 times
enInitWaitQueue defined in line 323; used 1 times
enInputDone defined in line 1301; used 1 times
enInsertDescriptor defined in line 1569; used 2 times
enKludgeTime defined in line 940; used 2 times
enReorderQueue defined in line 1597; used 1 times
enScavenge defined in line 432; used 1 times
enTimeout defined in line 845; used 4 times
enTrimWaitQueue defined in line 373; used 2 times
enetFilter defined in line 1260; never used
enetattach defined in line 1628; never used
enetclose defined in line 724; never used
enetioctl defined in line 968; never used
enetopen defined in line 599; never used
enetread defined in line 757; never used
enetselect defined in line 1190; never used
enetwakeup defined in line 1246; used 2 times
enetwrite defined in line 869; never used
enrmove defined in line 518; used 1 times
enwmove defined in line 538; used 1 times

Defined variables

enAllDescriptors defined in line 239; used 6 times
enAllocMap defined in line 237; used 6 times
enDebug defined in line 242; used 2 times
enFreeq defined in line 234; used 10 times
enFreeqMin defined in line 240; used 2 times
enKludgeSleep defined in line 865; used 8 times
enMaxMinors defined in line 245; used 3 times
enOneCopy defined in line 244; used 1 times
enQueueElts defined in line 233; used 1 times
enReorderCount defined in line 1591; used 1 times
enScavLevel defined in line 411; used 3 times
enScavenges defined in line 241; used 1 times
enState defined in line 235; used 10 times
enUnitMap defined in line 236; used 1 times
enUnits defined in line 243; used 5 times
enet_info defined in line 260; used 4 times
enetaf defined in line 262; used 1 times

Defined structs

enet_info defined in line 258; never used
fw defined in line 1390; used 4 times

Defined macros

DEBUG defined in line 186; used 15 times
ENDBG_ABNORM defined in line 203; used 2 times
ENDBG_DESQ defined in line 200; used 4 times
ENDBG_INIT defined in line 201; used 3 times
ENDBG_SCAV defined in line 202; used 2 times
ENDBG_TRACE defined in line 199; used 22 times
NENET defined in line 147; used 5 times
NENETFILTER defined in line 183; used 10 times
PRINET defined in line 210; used 2 times
SUN_OPENI defined in line 151; used 2 times
enDeallocatePacket defined in line 496; used 4 times
enEnqueue defined in line 289; used 2 times
enFlushWaitQueue defined in line 400; used 2 times
enprintf defined in line 190; used 31 times
forAllOpenDescriptors defined in line 280; used 2 times
min defined in line 206; used 5 times
opx defined in line 1375; used 13 times
splenet defined in line 208; used 11 times
Last modified: 1985-11-11
Generated: 2016-12-26
Generated by src2html V0.67