Search Results

Search found 17651 results on 707 pages for 'unix domain sockets'.


  • limit PHP script to one domain per license

    - by Mac Os
    What is the best way to restrict my PHP code to running on one domain only? (I will encode the whole code base with ionCube.) I want something like: function domain() { } if ($this_domain <> domain()) { exit('no'); } or: $allowed_hosts = array('foo.example.com', 'bar.example.com'); if (!isset($_SERVER['HTTP_HOST']) || !in_array($_SERVER['HTTP_HOST'], $allowed_hosts)) { header($_SERVER['SERVER_PROTOCOL'].' 400 Bad Request'); exit; } Now I want to know the best way to do that; maybe using strpos?

  • xcode 4 creating a 2d grid (range and domain)

    - by user1706978
    I'm learning how to program in C and I'm trying to make a program that finds the range (using an equation with x as the domain) of a 2D grid... I've already attempted it, but Xcode is giving me all these errors. Any help? (As you can see, I'm quite stuck!)

      #include <stdio.h>
      #include <stdlib.h>

      float domain;
      float domain = 2.0;

      float domainsol(float x)
      {
          domain = x;
          float func = 1.25 * x + 5.0;
          return func;
      }

      int main(int argc, const char * argv[])
      {
      }
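
    For illustration, here is a minimal sketch of what the program might be intended to do. The empty main() in the question never calls domainsol(), so nothing is ever computed; the version below drops the duplicate file-scope definition of domain, keeps the question's equation 1.25 * x + 5.0, and evaluates it over a small grid of x values. The grid bounds and step are made-up assumptions, not taken from the question.

      #include <stdio.h>

      /* f(x) = 1.25 * x + 5.0, the equation from the question */
      static float domainsol(float x)
      {
          return 1.25f * x + 5.0f;
      }

      int main(void)
      {
          /* evaluate the function over a small grid of x values;
             the bounds and step are arbitrary choices for illustration */
          for (float x = 0.0f; x <= 2.0f; x += 0.5f) {
              printf("x = %.2f  ->  f(x) = %.2f\n", x, domainsol(x));
          }
          return 0;
      }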

  • How can I close a port that appears to be orphaned by Xvfb?

    - by Jim Fiorato
    I'm running Xvfb on an FC8 Amazon EC2 image. On occasion Xvfb will crash (unable at the moment to find out the reason for the crash), and after crashing the TCP port will appear to be orphaned. I'm unable to get a PID to kill any process that may be using it. I'm starting Xvfb with:

      Xvfb :7 -screen 0 1024x768x24 &

    Examples of what I'm working with are below, the Xvfb port is (was) 6007:

      # netstat -ap
      Active Internet connections (servers and established)
      Proto Recv-Q Send-Q Local Address                Foreign Address               State        PID/Program name
      tcp        0      0 *:ssh                        *:*                           LISTEN       1894/sshd
      tcp        0      0 *:6007                       *:*                           LISTEN       -
      tcp        0    352 ip-10-84-69-165.ec2.int:ssh  c-71-194-253-238.hsd1:51689   ESTABLISHED  2981/0
      udp        0      0 *:bootpc                     *:*                                        1817/dhclient
      udp        0      0 *:bootpc                     *:*                                        1463/dhclient
      Active UNIX domain sockets (servers and established)
      Proto RefCnt Flags       Type    State      I-Node  PID/Program name     Path
      unix  2      [ ]         DGRAM              871     668/udevd            @/org/kernel/udev/udevd
      unix  2      [ ACC ]     STREAM  LISTENING  5385    1880/dbus-daemon     /var/run/dbus/system_bus_socket
      unix  6      [ ]         DGRAM              5353    1867/rsyslogd        /dev/log
      unix  2      [ ]         DGRAM              11861   2981/0
      unix  2      [ ]         DGRAM              5461    1974/crond
      unix  2      [ ]         DGRAM              5451    1904/console-kit-da
      unix  3      [ ]         STREAM  CONNECTED  5438    1880/dbus-daemon     /var/run/dbus/system_bus_socket
      unix  3      [ ]         STREAM  CONNECTED  5437    1904/console-kit-da
      unix  3      [ ]         STREAM  CONNECTED  5396    1880/dbus-daemon
      unix  3      [ ]         STREAM  CONNECTED  5395    1880/dbus-daemon
      unix  2      [ ]         DGRAM              5361    1871/rklogd

      # lsof -i
      COMMAND   PID USER  FD  TYPE DEVICE SIZE NODE NAME
      dhclient 1463 root   3u IPv4   4704      UDP  *:bootpc
      dhclient 1817 root   4u IPv4   5173      UDP  *:bootpc
      sshd     1894 root   3u IPv4   5414      TCP  *:ssh (LISTEN)
      sshd     2981 root   3u IPv4  11825      TCP  ip-10-84-69-165.ec2.internal:ssh->c-71-194-253-238.hsd1.il.comcast.net:51689 (ESTABLISHED)

    Attempting to force the port closed with iptables doesn't seem to work either:

      iptables -A INPUT -p tcp --dport 6007 -j DROP

    I'm at a loss as to how to reclaim/free the port. From what I can tell, this port will remain in this state until the EC2 instance is shut down. So, how can I close this port so I can restart Xvfb?

  • TinyDNS and proper settings for SPF records

    - by Teddy
    I've inherited a TinyDNS configuration that has the following entries for SPF:

      @domain.com:x.x.x.3:a::86400
      @domain.com:x.x.x.103:c:10:86400
      =domain.com:x.x.x.3:86400
      =mail.domain.com:x.x.x.3:86400
      =mail.domain.com:x.x.x.103:86400
      'domain.com:v=spf1 ip4\072x.x.x.3 ip4\07231.130.96.103 ptr\072mail.domain.com +mx a -all:3600
      'mail.domain.com:v=spf1 ip4\072x.x.x.3 ip4\072x.x.x.103 ptr\072mail.domain.com +mx a -all:3600
      'a.mx.domain.com:v=spf1 ip4\072x.x.x.3 ip4\072x.x.x.103 ptr\072mail.domain.com +mx a -all:3600

    This is the result from http://www.kitterman.com/spf/validate.html:

      SPF record lookup and validation for: domain.com
      SPF records are primarily published in DNS as TXT records. The TXT records found for your domain are:
      v=spf1 ip4:x.x.x.3 ip4:x.x.x.103 ptr:mail.domain.com +mx a -all
      SPF records should also be published in DNS as type SPF records. No type SPF records found.
      Checking to see if there is a valid SPF record.
      Found v=spf1 record for domain.com:
      v=spf1 ip4:x.x.x.3 ip4:x.x.x.103 ptr:mail.domain.com +mx a -all
      evaluating...
      SPF record passed validation test with pySPF (Python SPF library)!

    I've been struggling with this since yesterday and can't figure out why this validator returns "No type SPF records found." I see that in BIND we can define an SPF-type record with example.com. IN SPF "v=spf1 a -all", but in TinyDNS we only have TXT records that we set for SPF; maybe this is the problem?
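
    For reference, tinydns-data can publish record types it has no special syntax for through its generic-record lines, :fqdn:n:rdata:ttl, and the DNS type number for SPF records is 99. The line below is only an illustrative sketch with a deliberately short policy: \013 is the octal length byte of the 11-character string "v=spf1 -all", and a real record would need that length byte (and \072 escapes for any colons) adjusted to the actual policy text, so it is worth testing against the validator rather than taking on faith.

      :domain.com:99:\013v=spf1 -all:3600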

  • NIS: which mechanism hides shadow.byname from unprivileged users?

    - by Mark Salzer
    On a Linux box (SLES 11.1) which is a NIS client I can run, as root: ypcat shadow.byname and get output, i.e. lines with the encrypted passwords, amongst other information. On the same Linux box, if I run the same command as an unprivileged user, I get "No such map shadow.byname. Reason: No such map in server's domain". Now I am surprised. My good old knowledge says that shadow passwords in NIS are absurd, because there is no access control or authentication in the protocol, and thus every (unprivileged) user can access the shadow map and thereby obtain the encrypted passwords. Obviously we have a different picture here. Unfortunately I don't have access to the NIS server to figure out what is happening. My only guess is that the NIS master gives the map only to clients connecting from a privileged port (below 1024), but this is only an uneducated guess. What mechanisms are there in current NIS implementations that lead to behavior like the above? How "secure" are they? Can they be circumvented easily? Or are shadow passwords in NIS as secure as the good old shadow files?
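
    One mechanism that produces exactly this behaviour is the per-map access control in Linux ypserv: rules in /etc/ypserv.conf can require that requests for a particular map arrive from a privileged (below 1024) source port, which ypcat satisfies when run as root but not as a normal user. The rule below is a sketch of a commonly shipped default; the exact field layout differs between ypserv versions, so check ypserv.conf(5) on the actual server rather than taking this literally.

      # /etc/ypserv.conf  (host : domain : map : security)
      # serve shadow.byname only to requests coming from privileged ports
      * : * : shadow.byname : port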

  • AD User Passwords expiring without any notifications?

    - by scooter133
    We set up password policies in Active Directory to expire people's passwords after so many days. Well, it looks like the time has come for the passwords to expire and people are getting locked out... There has been no warning that user passwords were about to expire. They just come in to work and they cannot log in, the phones no longer connect, nothing. Reset the password and all is good. Some of the users are locked out, though most are not; they just cannot log in. When setting the password expiration, I didn't see anything about warning the users of the impending expiration. It seems like it used to warn you 15 days or so before it would expire. Clients range from WinXP, WinVista, Win7 and Server 2008 R2 Remote Desktop Services. How can I make sure my users are warned of the expiration?

    Resultant Set of Policy for a user that was not prompted (the winning GPO for every setting is "Default Domain Policy"):

    Account Policies/Password Policy
      Enforce password history: 10 passwords remembered
      Maximum password age: 270 days
      Minimum password age: 0 days
      Minimum password length: 4 characters
      Password must meet complexity requirements: Disabled
      Store passwords using reversible encryption: Disabled

    Account Policies/Account Lockout Policy
      Account lockout duration: 20 minutes
      Account lockout threshold: 5 invalid logon attempts
      Reset account lockout counter after: 15 minutes

    Local Policies/Audit Policy
      Audit account logon events: Failure
      Audit account management: Success, Failure
      Audit directory service access: Success, Failure
      Audit logon events: Failure
      Audit policy change: Success, Failure
      Audit privilege use: Failure

    Local Policies/Security Options - Interactive Logon
      Interactive logon: Prompt user to change password before expiration: 7 days

  • Postfix connects to wrong relay?

    - by Eric
    I am trying to set up postfix on my ubuntu server in order to send emails via my isp's smtp server. I seem to have missed something because the mail.log tells me:

      Jan 19 11:23:11 mediaserver postfix/smtp[5722]: CD73EA05B7: to=<[email protected]>, relay=new.mailia.net[85.183.240.20]:25, delay=6.2, delays=5.7/0.02/0.5/0, dsn=4.7.0, status=deferred (SASL authentication failed; server new.mailia.net[85.183.240.20] said: 535 5.7.0 Error: authentication failed: )

    The relay "new.mailia.net[85.183.240.20]:25" was not set up by me. I use "relayhost = smtp.alice.de". Why is postfix trying to connect to a different server? Here is my main.cf:

      # See /usr/share/postfix/main.cf.dist for a commented, more complete version
      # Debian specific: Specifying a file name will cause the first
      # line of that file to be used as the name. The Debian default
      # is /etc/mailname.
      #myorigin = /etc/mailname
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      biff = no
      # appending .domain is the MUA's job.
      append_dot_mydomain = no
      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h
      readme_directory = no
      # TLS parameters
      smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_use_tls=yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
      # information on enabling SSL in the smtp client.
      myhostname = mediaserver
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      mydestination = mediaserver, localhost.localdomain, , localhost
      relayhost = smtp.alice.de
      mynetworks = 127.0.0.0/8
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all
      myorigin = /etc/mailname
      inet_protocols = all
      sender_canonical_maps = hash:/etc/postfix/sender_canonical
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_password
      smtp_sasl_security_options = noanonymous

    Output of postconf -n:

      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      append_dot_mydomain = no
      biff = no
      config_directory = /etc/postfix
      inet_interfaces = all
      inet_protocols = ipv4
      mailbox_size_limit = 0
      mydestination = mediaserver, localhost.localdomain, , localhost
      myhostname = mediaserver
      mynetworks = 127.0.0.0/8
      myorigin = /etc/mailname
      readme_directory = no
      recipient_delimiter =
      relayhost = smtp.alice.de
      sender_canonical_maps = hash:/etc/postfix/sender_canonical
      smtp_generic_maps = hash:/etc/postfix/generic
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_password
      smtp_sasl_security_options = noanonymous
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes
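
    One general Postfix behaviour worth checking here (offered as a hint, not a confirmed diagnosis of this setup): when relayhost is given without square brackets, Postfix performs an MX lookup on the name and connects to whatever hosts those MX records point at, which can be a machine with a completely different name; putting the name in brackets suppresses the MX lookup. A sketch of the bracketed form:

      # in /etc/postfix/main.cf: [] suppresses the MX lookup on the relay name
      relayhost = [smtp.alice.de]
      # optionally with an explicit port:
      #relayhost = [smtp.alice.de]:587

    If the value is changed this way, the lookup key in the smtp_sasl_password_maps file generally has to be updated to match the new relayhost string as well.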

  • HP opens its mission-critical servers to the x86 architecture and announces Project Odyssey for convergence with UNIX systems

    HP opens its mission-critical servers to the x86 architecture and announces Project Odyssey for convergence with UNIX systems. HP is embarking on a far-reaching project that aims to bring the UNIX and x86 server architectures together within a single platform for mission-critical systems: its Project Odyssey. The high-end Integrity servers will be able to take Intel Xeon x86 processors, compatible with Windows and Linux, which come to challenge the reign of the aging Itanium that HP and Intel

  • Log transport and aggregation at scale

    - by markdrayton
    How're you analysing log files from UNIX/Linux machines? We run several hundred servers which all generate their own log files, either directly or through syslog. I'm looking for a decent solution to aggregate these and pick out important events. This problem breaks down into 3 components: 1) Message transport The classic way is to use syslog to log messages to a remote host. This works fine for applications that log into syslog but less useful for apps that write to a local file. Solutions for this might include having the application log into a FIFO connected to a program to send the message using syslog, or by writing something that will grep the local files and send the output to the central syslog host. However, if we go to the trouble of writing tools to get messages into syslog would we be better replacing the whole lot with something like Facebook's Scribe which offers more flexibility and reliability than syslog? 2) Message aggregation Log entries seem to fall into one of two types: per-host and per-service. Per-host messages are those which occur on one machine; think disk failures or suspicious logins. Per-service messages occur on most or all of the hosts running a service. For instance, we want to know when Apache finds an SSI error but we don't want the same error from 100 machines. In all cases we only want to see one of each type of message: we don't want 10 messages saying the same disk has failed, and we don't want a message each time a broken SSI is hit. One approach to solving this is to aggregate multiple messages of the same type into one on each host, send the messages to a central server and then aggregate messages of the same kind into one overall event. SER can do this but it's awkward to use. Even after a couple of days of fiddling I had only rudimentary aggregations working and had to constantly look up the logic SER uses to correlate events. It's powerful but tricky stuff: I need something which my colleagues can pick up and use in the shortest possible time. SER rules don't meet that requirement. 3) Generating alerts How do we tell our admins when something interesting happens? Mail the group inbox? Inject into Nagios? So, how're you solving this problem? I don't expect an answer on a plate; I can work out the details myself but some high-level discussion on what is surely a common problem would be great. At the moment we're using a mishmash of cron jobs, syslog and who knows what else to find events. This isn't extensible, maintainable or flexible and as such we miss a lot of stuff we shouldn't. Updated: we're already using Nagios for monitoring which is great for detected down hosts/testing services/etc but less useful for scraping log files. I know there are log plugins for Nagios but I'm interested in something more scalable and hierarchical than per-host alerts.
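
    As a concrete reference for the classic transport described in point 1, forwarding everything from a client to a central collector is a one-line syslog configuration change. The host name below is a placeholder, and the @@ (TCP) form is rsyslog-specific:

      # /etc/rsyslog.conf (or /etc/syslog.conf) on each client
      # send all messages to the central log host over UDP:
      *.* @loghost.example.com
      # or, with rsyslog, over TCP:
      *.* @@loghost.example.com:514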

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build an new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable defacto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX AIX is an exception to rule that Unix archive formats are all based on the original Bell labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and are going to be slow to use with large archives. BSD BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP) Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
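
    As a concrete illustration of the classic format described above (the !<arch>\n magic number, the ASCII ar_hdr fields, and the even-length member padding), here is a small C sketch that walks an archive and prints each member's raw header name and size, using the declarations from /usr/include/ar.h. It only understands the traditional 32-bit layout discussed in the post, does not interpret the special / members, and has minimal error handling.

      #include <ar.h>       /* struct ar_hdr, ARMAG, SARMAG */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      int main(int argc, char **argv)
      {
          if (argc != 2) {
              fprintf(stderr, "usage: %s archive.a\n", argv[0]);
              return 1;
          }

          FILE *fp = fopen(argv[1], "rb");
          if (fp == NULL) {
              perror("fopen");
              return 1;
          }

          /* every archive starts with the 8-character magic string "!<arch>\n" */
          char magic[SARMAG];
          if (fread(magic, 1, SARMAG, fp) != SARMAG ||
              memcmp(magic, ARMAG, SARMAG) != 0) {
              fprintf(stderr, "%s: not an archive\n", argv[1]);
              return 1;
          }

          struct ar_hdr hdr;
          while (fread(&hdr, sizeof(hdr), 1, fp) == 1) {
              /* ar_size is an ASCII decimal string, not a binary integer */
              char sizestr[sizeof(hdr.ar_size) + 1];
              memcpy(sizestr, hdr.ar_size, sizeof(hdr.ar_size));
              sizestr[sizeof(hdr.ar_size)] = '\0';
              long size = strtol(sizestr, NULL, 10);

              /* names beginning with '/' are the special symbol-table and
                 string-table members that ar(1) normally hides */
              printf("%-16.16s %ld bytes\n", hdr.ar_name, size);

              /* members are padded with a newline to an even length */
              if (fseek(fp, size + (size & 1), SEEK_CUR) != 0) {
                  perror("fseek");
                  return 1;
              }
          }

          fclose(fp);
          return 0;
      }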

  • tcp checksum and tcp offloading

    - by scatman
    I am using raw sockets to build my own packets, and I need to set the TCP checksum. I have tried a lot of references but none of them work (I am using Wireshark for testing). Could you help me please? By the way, I read somewhere that if you set the TCP checksum to 0, the hardware will calculate the checksum automatically for you. Is this true? I tried it, but in Wireshark the TCP checksum shows a value of 0x0000 and says "TCP offload". I also read about TCP offloading and didn't understand it: is it just that Wireshark cannot check an offloaded TCP checksum, but there is a correct one?
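
    The usual way to fill the field in by hand is the RFC 1071 one's-complement checksum computed over an IPv4 pseudo-header followed by the TCP header (with its checksum field zeroed) and the payload; a generic sketch follows, not tied to any particular raw-socket code. On the offloading question: when a capture is taken on the sending machine with checksum offload enabled, tools such as Wireshark see the packet before the NIC fills the field in, so a 0x0000 or "incorrect" checksum in a local capture does not by itself mean the packet left the wire with a bad checksum.

      #include <stdio.h>
      #include <stdint.h>
      #include <stddef.h>

      /* RFC 1071 one's-complement Internet checksum over an arbitrary buffer.
         For TCP, the buffer must contain: the pseudo-header, then the TCP
         header with its checksum field set to 0, then the payload.  The
         result can be stored directly into the checksum field. */
      static uint16_t inet_checksum(const void *data, size_t len)
      {
          const uint16_t *p = data;
          uint32_t sum = 0;

          while (len > 1) {
              sum += *p++;
              len -= 2;
          }
          if (len == 1)                       /* odd trailing byte */
              sum += *(const uint8_t *)p;

          while (sum >> 16)                   /* fold carries back in */
              sum = (sum & 0xffff) + (sum >> 16);

          return (uint16_t)~sum;
      }

      /* IPv4 pseudo-header that precedes the TCP segment when checksumming */
      struct pseudo_header {
          uint32_t src_addr;      /* source IP, network byte order */
          uint32_t dst_addr;      /* destination IP, network byte order */
          uint8_t  zero;          /* always 0 */
          uint8_t  protocol;      /* 6 for TCP */
          uint16_t tcp_length;    /* TCP header + payload length, network order */
      };

      int main(void)
      {
          /* toy demonstration over a dummy buffer */
          const unsigned char sample[] = { 0x45, 0x00, 0x00, 0x28 };
          printf("checksum = 0x%04x\n", inet_checksum(sample, sizeof(sample)));
          return 0;
      }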

  • which ASP.NET hosting site allows listening on ports other than 80 and uses .NET 4?

    - by ijjo
    I'm trying to take advantage of HTML 5 web sockets in .NET, and the easiest way appears to be something like what this guy does. I've already tested this myself and it works great, but there are a few problems if I try to deploy this to my hosting site (discountasp.net). Basically, I am not allowed to open up port 8080 and listen on it. I then tried to figure out a way to listen on port 80 with IIS as well, but using HTTPListener I run into security issues. This doesn't seem like it will help, since I can't mess with this stuff on the hosting site server either. So to make my life easier, I think I need to find a hosting site that simply allows me to open up a socket on port 8080 and listen on it. Anyone know of one? Or does anyone know of a workaround (besides sniffing all the traffic on port 80)?

  • which ASP.NET hosting site allows listening on a port other than 80 and uses .NET 4?

    - by ijjo
    I'm trying to take advantage of HTML 5 web sockets in .NET, and the easiest way appears to be doing something like this guy does: http://www.codeproject.com/KB/webservices/c_sharp_web_socket_server.aspx?msg=3485900#xx3485900xx I've already tested this myself and it works great, but there are a few problems if I try to deploy this to my hosting site (discountasp.net). Basically, I am not allowed to open up port 8080 and listen on it. I then tried to figure out a way to listen on port 80 with IIS as well, but using HTTPListener runs into security issues, and that doesn't seem like it will help since I can't mess with this stuff on the hosting site server either: http://stackoverflow.com/questions/169904/can-i-listen-on-a-port-using-httplistener-or-other-net-code-on-vista-without-r So to make my life easier, I think I need to find a hosting site that simply allows me to open up a socket on port 8080 and listen on it. Anyone know of one? Or does anyone know of a workaround (besides sniffing ALL the traffic on port 80)?

  • How to download a file from a server using Java Socket?

    - by Ada
    Hi, I have an assignment about uploading and downloading a file to a server. I managed to do the uploading part using Java sockets; however, I am having a hard time with the downloading part. I should use the Range: header to download in parallel, so my request should carry a Range: header, but I don't understand how I will receive the file with that HTTP GET request. All the examples I have seen were about uploading a file, which I have already done: I can upload .exe, image, .pdf, anything, and when I download them back (with my browser) there are no errors. Can you help me with the downloading part? Can you give me an example, because I really didn't get it?
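
    Since the question is really about what the exchange looks like on the wire, here is a sketch of a ranged download at the HTTP level (the path, host and sizes are made up). The client writes an ordinary GET with a Range header to the socket; the server answers 206 Partial Content with a Content-Range header, a blank line, and then just those bytes.

      GET /files/report.pdf HTTP/1.1
      Host: www.example.com
      Range: bytes=0-499
      Connection: close

      HTTP/1.1 206 Partial Content
      Content-Range: bytes 0-499/2048
      Content-Length: 500

      ...the first 500 bytes of the file...

    The client reads the status line and headers up to the blank line, then reads exactly Content-Length bytes from the socket and writes them to the output file; the next request asks for bytes=500-999, and so on, until the total length reported after the slash in Content-Range has been fetched and the pieces concatenated.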

  • Problem reading hexadecimal buffer from C socket

    - by Olaseni
    I'm using the SDL_net sockets API to create a server and client. I can easily read a string buffer, but when I try to send hexadecimal data, recv gets the length but I cannot seem to be able to read the buffer contents. Here's a snippet:

      IPaddress ip;
      TCPsocket server, client;
      int bufSize = 1024;
      char message[bufSize];
      int len;

      server = SDLNet_TCP_Open(&ip);
      client = SDLNet_TCP_Accept(server);
      len = SDLNet_TCP_Recv(client, message, bufSize);

    The buffer length "len" is set (i.e. to the message length), but I can't get to the data contents in the message buffer. Some sample bind_transmitter PDU data was sent by a random client to the server at that port. I can't read the PDU (SMPP).
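
    If the payload is binary rather than text, printing the buffer with string functions will stop at the first zero byte and look like garbage even though the bytes arrived fine; the quickest sanity check is to dump the received bytes as hex. A minimal sketch, independent of SDL_net (the sample bytes below merely stand in for whatever the recv call placed in the buffer):

      #include <stdio.h>

      /* print len bytes of buf as a hex dump, 16 bytes per line */
      static void hex_dump(const unsigned char *buf, int len)
      {
          for (int i = 0; i < len; i++) {
              printf("%02x ", buf[i]);
              if ((i + 1) % 16 == 0)
                  putchar('\n');
          }
          if (len % 16 != 0)
              putchar('\n');
      }

      int main(void)
      {
          /* stand-in for data received into `message` by the recv call */
          unsigned char message[] = { 0x00, 0x00, 0x00, 0x2f, 0x00, 0x00,
                                      0x00, 0x02, 0x00, 0x00, 0x00, 0x00 };
          hex_dump(message, (int)sizeof(message));
          return 0;
      }

    In the question's code, calling hex_dump((const unsigned char *)message, len) right after the SDLNet_TCP_Recv() call should show the raw SMPP bind_transmitter PDU bytes that were received.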

  • How to recover gracefully from a C# udp socket exception

    - by Gearoid Murphy
    Context: I'm porting a linux perl app to C#, the server listens on a udp port and maintains multiple concurrent dialogs with remote clients via a single udp socket. During testing, I send out high volumes of packets to the udp server, randomly restarting the clients to observe the server registering the new connections. The problem is this: when I kill a udp client, there may still be data on the server destined for that client. When the server tries to send this data, it gets an icmp "no service available" message back and consequently an exception occurs on the socket. I cannot reuse this socket, when I try to associate a C# async handler with the socket, it complains about the exception, so I have to close and reopen the udp socket on the server port. Is this the only way around this problem?, surely there's some way of "fixing" the udp socket, as technically, UDP sockets shouldn't be aware of the status of a remote socket? Any help or pointers would be much appreciated. Thanks.

  • How to send and receive a file in a Silverlight application?

    - by Samvel Siradeghyan
    Hi. I am writing a Silverlight client-server application. The server part is a WinForms application and the client part is Silverlight. I use a TCP connection, with sockets for sending and receiving information. But now I need to send a file whose size may be greater than 1 MB, so I can't use the sockets to send that file as a byte stream. I want to send that file from the server and receive it in the client Silverlight application. How can I do that? Thanks.

  • custom MSSQL driver

    - by hoodoos
    I had a crazy thought about writing my own MSSQL driver that works something like a non-blocking HTTP client, so it won't be thread-thirsty and could handle lots of DB queries within one thread. I searched Google for guidelines on implementing the MSSQL client protocol but found none, really. Where do people get information about it when they write their own implementations for PHP or Python? I need the really low-level details to be documented so I can implement all phases of working with a connection through sockets. And it would be really nice to have an example in the C# language. :)

  • Raw socket sendto() failure in OS X

    - by user37278
    When I open a raw socket in OS X, construct my own UDP packet (headers and data), and call sendto(), I get the error "Invalid argument". Here is a sample program, "rawudp.c", from the web site http://www.tenouk.com/Module43a.html that demonstrates this problem. The program (after adding string and stdlib #includes) runs under Fedora 10 but fails with "Invalid argument" under OS X. Can anyone suggest why this fails in OS X? I have looked and looked and looked at the sendto() call, but all the parameters look good. I'm running the code as root, etc. Is there perhaps a kernel setting that prevents even uid 0 executables from sending packets through raw sockets in OS X Snow Leopard? Thanks.
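
    A frequent cause of exactly this Linux-works, OS X-fails symptom with IP_HDRINCL raw sockets is byte order: BSD-derived kernels (including OS X) expect the ip_len and ip_off fields of the supplied IP header in host byte order, and passing them through htons(), as most Linux-oriented tutorials do, makes sendto() fail with EINVAL. This is offered as a hedged guess rather than a diagnosis of rawudp.c specifically; a sketch of a header-filling helper written with that convention:

      #include <string.h>
      #include <netinet/in.h>
      #include <netinet/ip.h>

      /* Hypothetical helper: fill in an IPv4 header for an IP_HDRINCL raw
         socket on a BSD-derived system (OS X, FreeBSD).  ip_len and ip_off
         are deliberately left in HOST byte order; wrapping them in htons()
         is a common reason for sendto() returning EINVAL on OS X. */
      static void fill_ip_header(struct ip *iph, unsigned short total_len,
                                 struct in_addr src, struct in_addr dst)
      {
          memset(iph, 0, sizeof(*iph));
          iph->ip_hl  = 5;            /* header length in 32-bit words */
          iph->ip_v   = 4;            /* IPv4 */
          iph->ip_ttl = 64;
          iph->ip_p   = IPPROTO_UDP;
          iph->ip_len = total_len;    /* host byte order on BSD/OS X */
          iph->ip_off = 0;            /* host byte order on BSD/OS X */
          iph->ip_src = src;
          iph->ip_dst = dst;
          /* ip_sum left as 0; the kernel fills in the IP header checksum */
      }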

  • Multiple complete HTTP requests stuck in TCP CLOSE_WAIT state

    - by Sean Owen
    I have a Java and Tomcat-based server application which initiates many outbound HTTP requests to other web sites. We use Jakarta's HTTP Core/Client libraries, very latest versions. The server locks up at some point since all its worker threads are stuck trying to close completed HTTP connections. Using 'lsof' reveals a bunch of sockets stuck in TCP CLOSE_WAIT state. This doesn't happen for all, or even most connections. In fact, I saw it before and resolved it by making sure to set the Connection: Close response header. So that makes me think it may be bad behavior of remote servers. It may have come up again since I moved the app to a totally new service provider -- different OS, network situation. But, I am still at a loss as to what I could do, if anything, to work around this. Some poking around on the internet didn't turn up anything I'm not already doing. Just thought I'd ask if anyone has seen and solved this?

  • Java Socket fails to transmit data over the network

    - by Mark Griffin
    I'm experiencing a bizarre problem with sockets between a Java Knopflerfish client bundle and a PHP (CLI, not web) server. The client/server pair works fine when both are located on localhost, and all data is transmitted successfully. However, when the Java client is on a different machine, connections to the server are successful but no data is received by the PHP script. Packet analysis confirms that the data sent by the Java client is received by the server; PHP just seems to have problems getting its hands on it. As a further note, I've done some tests with telnet as the client: the PHP server script receives all data fine from any host. This leads me to believe that the problem has something to do with the way Java is setting up the socket, or that there is some networking issue that I'm not familiar with. Any thoughts would be appreciated. I can post code samples if desired.
