Search Results

Search found 25996 results on 1040 pages for 'memory address'.


  • Week in Geek: Forced Facebook E-mail Changes are Altering Address Books, Causing Lost Mail

    - by Asian Angel
    Our first edition of WIG for July is filled with news link goodness covering topics such as why Microsoft killed the Start button in Windows 8, how to outsmart websites trying to get you to pay top dollar, how OS X Mountain Lion will check daily for security updates, and more. Also featured: How to Banish Duplicate Photos with VisiPic; How to Make Your Laptop Choose a Wired Connection Instead of Wireless; HTG Explains: What Is Two-Factor Authentication and Should I Be Using It?

    Read the article

  • How to check use of userva boot option on Win 2K3 server

    - by Tim Sylvester
    I have some 32-bit Win2K3 servers running an application that fails now and then, apparently due to heap fragmentation (process virtual bytes grow, private bytes do not). I do not have access to the source code or build process of this application. I have modified the boot.ini file on one of these servers to include /userva=2560, halfway between the normal mode of operation and the /3GB option. Normally it takes weeks to reach the point of failure, but I'd like to see right away whether this has actually had any effect.

    As I understand it, this option limits the kernel to the remaining address space (1536MB instead of 2048MB), but does not necessarily give an application the extra address space, depending on the flags in the application's PE header. How can I determine whether the O/S is allowing a particular application, running in production, to access address space above 2GB? Additionally, what's the best way to monitor the system to ensure that the kernel is not starved for address space, and more generally how should I go about finding the optimal value for this setting?
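
    One way to check the application side is to inspect the PE header directly (a sketch; dumpbin ships with Visual Studio, and MyApp.exe is a placeholder for the real binary):

        dumpbin /headers MyApp.exe | findstr /i "large"

    If IMAGE_FILE_LARGE_ADDRESS_AWARE is set, the output includes "Application can handle large (>2GB) addresses"; without that flag, /userva buys the application nothing. For the kernel side, the Performance Monitor counter Memory\Free System Page Table Entries is the usual canary for a starved kernel address space.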

    Read the article

  • Copy to USB memory stick really slow?

    - by Eloff
    When I copy files to the USB device, it takes much longer than in Windows (same USB device, same port). It's faster than USB 1.0 speeds (1MB/s) but much slower than USB 2.0 speeds (12MB/s). To copy 1.8GB takes me over 10 minutes (it should be < 3 min.). I have two identical SanDisk Cruzer 8GB sticks, and I have the same problem with both. I have a Super Talent 32GB USB SSD in the neighboring port and it works at expected speeds.

    The problem I seem to see in the GUI is that the progress bar goes to 90% almost instantly, completes to 100% a little slower, and then hangs there for 10 minutes. Interrupting the copy at this point seems to result in corruption at the tail end of the file. If I wait for it to complete, the copy is successful. Any ideas? dmesg output below:

        [64059.432309] usb 2-1.2: new high-speed USB device number 5 using ehci_hcd
        [64059.526419] scsi8 : usb-storage 2-1.2:1.0
        [64060.529071] scsi 8:0:0:0: Direct-Access     SanDisk  Cruzer  1.14 PQ: 0 ANSI: 2
        [64060.530834] sd 8:0:0:0: Attached scsi generic sg4 type 0
        [64060.531925] sd 8:0:0:0: [sdd] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
        [64060.533419] sd 8:0:0:0: [sdd] Write Protect is off
        [64060.533428] sd 8:0:0:0: [sdd] Mode Sense: 03 00 00 00
        [64060.534319] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.534327] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.537988] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.537995] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.541290]  sdd: sdd1
        [64060.544617] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.544619] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.544621] sd 8:0:0:0: [sdd] Attached SCSI removable disk
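
    One way to see what the long "hang" is actually doing (a sketch; run these while a copy is stuck at 100%):

        watch -n1 'grep -e Dirty -e Writeback /proc/meminfo'   # watch unwritten pages drain
        sync                                                   # blocks until the flush finishes

    If the Dirty/Writeback counters fall steadily for minutes, the progress bar was only measuring the copy into the page cache, and the kernel is still flushing to the (slow) stick in the background -- which would also explain why interrupting at "100%" truncates the file.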

    Read the article

  • Memory allocation strategy for the vertex buffers (DirectX 10/11)

    - by Alex
    I have the following question. I am writing a CAD system, so I have a 3D scene with many different objects (walls, doors, windows and so on), and the user can add or delete objects at any time. The question is: how should I organize the storage of vertices for all my objects? I could create a vertex buffer for every object, but I think drawing/switching from one buffer to another would incur a performance penalty. Alternatively, I could create several big buffers, one per object type, but I don't understand how to update such buffers: they are too big to re-upload in their entirety (for example, a buffer holding all walls), and what do I do if I want to delete an object from the middle of the buffer? I have essentially the same question as this one: http://stackoverflow.com/questions/5515700/how-to-properly-update-vertex-buffers-in-directx-10 Most examples I've found work with very static models; they tend to create a single vertex buffer with their list of points and then manipulate it only through matrix transformations. I, on the other hand, will be updating the scene very often.

    Read the article

  • ISC-DHCP not providing address

    - by kiler129
    I just replaced my old router with a server running Ubuntu. Everything's fine except DHCP. When I tried connecting an iPhone, it worked: http://pastebin.com/NNEeiRLY Unfortunately, some of my other devices can't get an IP from the server, e.g. my computer: http://pastebin.com/N6LnsEWC Here's my ISC dhcpd configuration: http://pastebin.com/N5KQnhZV I've also tried running the DHCP server as root (because of some permission-denied errors on the lease file in the logs). What can I do?
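
    For comparison, a minimal known-good dhcpd.conf subnet block looks like this (a sketch; the 192.168.1.0/24 addresses are placeholders, and running as root is better replaced by fixing the lease-file permissions):

        subnet 192.168.1.0 netmask 255.255.255.0 {
          range 192.168.1.100 192.168.1.200;
          option routers 192.168.1.1;
          option domain-name-servers 192.168.1.1;
        }
        # on a stock Ubuntu install the lease file belongs to the dhcpd user:
        #   chown dhcpd:dhcpd /var/lib/dhcp/dhcpd.leases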

    Read the article

  • a mechanism to address WPF bindings beyond NameScope

    - by Siyamand Ayubi
    There are many situations in which a property should be bound to a DynamicResource, and many UI patterns, such as Composite UI applications, need a mechanism to support binding across modules. This article addresses these issues.

    Read the article

  • Strange IP address showing up with OS X ssh

    - by user50799
    I was futzing around with DTrace on Mac OS X and found the following script that prints out information about connections being established:

        $ cat script.d
        syscall::connect:entry
        {
            printf("execname: %s\n", execname);
            printf("pid: %d\n", pid);
            printf("sockfd: %d\n", arg0);
            socks = (struct sockaddr*)copyin(arg1, arg2);
            hport = (uint_t)socks->sa_data[0];
            lport = (uint_t)socks->sa_data[1];
            hport <<= 8;
            port = hport + lport;
            printf("Port number: %d\n", port);
            printf("IP address: %d.%d.%d.%d\n", socks->sa_data[2],
                   socks->sa_data[3], socks->sa_data[4], socks->sa_data[5]);
            printf("======\n");
        }

    I run it in one window:

        $ sudo dtrace -s ./script.d

    Then I ssh to another machine from another window. I get this output from my dtrace window:

        CPU     ID                    FUNCTION:NAME
          0  18696              connect:entry execname: ssh
        pid: 5446
        sockfd: 3
        Port number: 22
        IP address: 192.168.0.207
        ======
          0  18696              connect:entry execname: ssh
        pid: 5446
        sockfd: 5
        Port number: 12148
        IP address: 109.112.47.108
        ======
        ^C

    The first IP address I can explain (192.168.0.207); that's the machine I'm connecting to. But what's with the 109.112.47.108 machine? It doesn't show up in tcpdump or netstat -an. Is there something wrong with my dtrace code, or with my understanding of how the connect system call works?
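
    One sanity check worth doing before distrusting the script (a sketch; it assumes the second connect is to a Unix-domain socket, in which case sa_data holds a filesystem path rather than a port and IP): decode the printed numbers as ASCII.

        # port 12148 = 47*256 + 116 -> '/', 't'; "IP" 109.112.47.108 -> 'm', 'p', '/', 'l'
        python -c "print(''.join(chr(c) for c in (47, 116, 109, 112, 47, 108)))"   # prints /tmp/l

    A path beginning with /tmp/l is consistent with ssh contacting an agent socket under /tmp (on OS X, SSH_AUTH_SOCK typically points at /tmp/launch-*/Listeners), which would also explain why the second "connection" never appears in tcpdump or netstat -an.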

    Read the article

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/) and other production factors (http://docs.mongodb.org/manual/administration/production-notes/), they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system's virtual memory subsystem manages MongoDB's memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram).

    Running the same job -- high writes and high reads on about 10,000,000 records in a single collection -- on my 4-processor, 4GB RAM MacBook and on an 8-core Ubuntu box with 64GB RAM, I saw dramatically WORSE read performance on the Linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do.

    The Linux box's default settings were as follows:

        vm.swappiness = 60
        vm.dirty_background_ratio = 10
        vm.dirty_ratio = 20
        vm.dirty_expire_centisecs = 3000
        vm.dirty_writeback_centisecs = 500

    I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MySQL, etc.), experimented, and adjusted as below:

        vm.swappiness = 10
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 5
        vm.dirty_writeback_centisecs = 250
        vm.dirty_expire_centisecs = 500

    I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then I REBUILT the collection from an available data source -- and suddenly I could read at 1ms or less per record WHILE doing the write job!

    So the question is really two-fold: 1) What are appropriate VM settings for MongoDB on Linux? 2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road? Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated...thanks! -j
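
    For reference, a sketch of how settings like the above are applied and persisted (the values are the question's own, not recommendations, and /dev/sda is a placeholder for the data disk):

        sudo sysctl -w vm.swappiness=10 vm.dirty_ratio=5 vm.dirty_background_ratio=5
        echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf   # repeat per key to persist
        sudo blockdev --getra /dev/sda   # readahead is another knob MongoDB's production notes flag; --setra changes it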

    Read the article

  • Fail2ban memory usage

    - by ltsstar
    Since my server is under a sustained DNS amplification attack (DDoS), I configured fail2ban, and initially my outgoing traffic dropped markedly. However, after a few hours (usually 10+), fail2ban uses about 75% of RAM and seems to have crashed in some way, because the outgoing traffic rises again immediately afterwards. When I searched the web for the memory problem, I found some people complaining about high fail2ban memory usage as well, but the recommended solution, inserting an ulimit command into a fail2ban config file, did not change much for me.
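
    For the record, the commonly suggested tweak is a one-liner (a sketch, assuming a Debian-style layout; it shrinks the per-thread stack size, which is usually what balloons fail2ban's memory when many bans are active):

        # added to /etc/default/fail2ban
        ulimit -s 256

    If memory still climbs after that, keeping the set of tracked bans bounded (shorter findtime, or banning whole subnets instead of individual hosts) is worth trying during a sustained flood.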

    Read the article

  • NK2 file doesn't keep the email addresses in memory

    - by r0ca
    When I send an email to someone outside the firm and type only the first letters of the contact's name, I get auto-suggestions drawn from the addresses I have already sent to. For the past few days, though, newly used addresses are no longer being remembered by Outlook (the NK2 file). I see that the file is only 2KB, while on my old machine it is almost 200KB (so it holds many more email addresses). Should I rebuild just the Outlook profile, or the whole Windows profile? Would a simple Outlook reinstall do, or do I need to set up a new PC?
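
    Before rebuilding the whole Windows profile, the nickname cache itself can be swapped in from the old machine (a sketch, assuming Outlook 2003/2007; D:\backup\Outlook.NK2 is a placeholder, and the file name must match the mail profile's name):

        copy "D:\backup\Outlook.NK2" "%APPDATA%\Microsoft\Outlook\Outlook.NK2"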

    Read the article

  • Can I forward "referrer" information to another address?

    - by user5679
    I have two addresses for two servers:

        www.urlA.com
        www.urlB.com

    I have all my websites installed on www.urlB.com, but visitors know the site primarily as www.urlA.com. My www.urlA.com/index.php contains the following:

        <?php
        header('Location: http://www.urlB.com/');
        ?>

    But when I use this forwarding method, the tracking JavaScript on www.urlB.com cannot recognize where the visitors came from; I only get "NO REFERRING LINK". What should I do to accomplish the following two jobs: 1. forward urlA.com to urlB.com, and 2. receive the referrer information?

    Read the article

  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
    We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, so we cannot block on the actual client IP address at the connection level (it would be the IP address of the CDN, not the actual client). Instead, we have $http_x_forwarded_for, which contains the IP we need to block for a given request. Similarly, we cannot use iptables, as blocking the IP address of the proxied client would have no effect. We need nginx to block the request based on the value of $http_x_forwarded_for. Initially, we tried multiple simple if statements: http://pastie.org/5110910 However, this caused our nginx memory usage to jump considerably: we went from somewhere around a 40MB resident size to over a 200MB resident size. If we changed things up and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923 Keep in mind that we're trying to block many more than 3 or 4 IP addresses... more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and also in whether there are any better ways to achieve our goal.
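
    For a list this size, nginx's geo module is the usual answer: it builds its lookup table once at startup instead of evaluating a chain of per-request ifs (a sketch; the addresses are placeholders):

        # http context
        geo $http_x_forwarded_for $blocked {
            default            0;
            203.0.113.7        1;
            198.51.100.0/24    1;
        }

        # inside each server block that should enforce the list
        if ($blocked) { return 403; }

    If the CDN sends a multi-address X-Forwarded-For list, the realip module (set_real_ip_from / real_ip_header) is the cleaner way to recover the client address before matching.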

    Read the article

  • Can next hop address be same as destination address?

    - by Raj
    Suppose the host address is 100.0.0.1, the next hop address is 100.0.0.2, and the destination IP address is also 100.0.0.2. Is this a valid use case? Is there any real-life usage?

        ip route 100.0.0.2 255.255.255.255 100.0.0.2 weight 1 next-hop-vrf GlobalRouter
        #        <dest ip>                 <next hop>

    The above is the command on a router inside a VRF. 100.0.0.2 is pingable from the host. 100.0.0.1 and 100.0.0.2 are IP addresses assigned to a VLAN on the host and the destination respectively. On a Linux box, such a configuration is valid:

        [root]# netstat -r -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        55.55.55.55     55.55.55.55     255.255.255.255 UGH       0 0          0 eth0
        [root]# ip route show
        55.55.55.55 via 55.55.55.55 dev eth0

    As per my understanding, if a destination IP is reachable (i.e. in the same subnet as the host IP), we don't need a next hop. I came across one application of using a next hop for a destination IP in the same subnet (i.e. for VPN). See this: Will packets sent to the same subnet go through routers? If next hop != destination IP but both are in the same subnet as the host, that is a valid scenario for VPN; so I am wondering, what are the applications of next_hop == dest_ip with the subnet the same as the host's? This is my first post on Super User. Extremely happy with the quick and warm response.

    Read the article

  • Memory upgrade for Toshiba P20 S203?

    - by pjc50
    I've had an offer of a 256MB PC2700 SODIMM, apparently from an iBook, to upgrade a Toshiba laptop. Is that suitable? I've seen "DDR 266 SODIMM" on sale as the official upgrade memory. How in general should I work this out? I've long since lost track of what memory goes with what system.
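
    A general way to work it out is to ask the machine what is already installed (a sketch; the first line is for Linux, the second the Windows equivalent):

        sudo dmidecode --type memory           # per-slot module type, size and speed
        wmic memorychip get Capacity,Speed     # same idea from a Windows command prompt

    As for the specific offer: PC2700 is DDR-333, and DDR modules generally clock down, so a PC2700 SODIMM would normally run at DDR-266 speeds in a DDR-266 system -- a general rule, though, not a guarantee for this particular laptop.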

    Read the article

  • Website address hacked, emails created but not showing in manage your account

    - by ProfMJMcG
    I have a website, thebleudoor.com, hosted by Yahoo. It gets 2000-3000 hits a day and has for at least 5 years. A few months ago, as admin of the website, I started getting bounce messages for mail sent from newly created addresses like [email protected]. Yahoo only shows 2 email accounts for my domain and said they can't do anything about it. Now these spoofed addresses are themselves receiving spam. The spammers haven't altered or used my website, email, or bank info, just the good name of my website. Is there anything I can do? Do I need to be concerned? Changing my provider won't really help, will it? Thank you.
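
    Spoofed From addresses don't require any break-in, but an SPF record tells receiving servers which hosts may legitimately send for the domain, so forged mail gets rejected or flagged (a sketch of a DNS TXT record; the exact mechanism list depends on who really sends mail for the domain):

        thebleudoor.com.  IN  TXT  "v=spf1 mx -all"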

    Read the article

  • Enabling Squid delay pool eats up the entire memory

    - by Supratik
    I am using "squid-3.1.8-1.el5" in my CentOS 5 32 bit system. In normal condition Squid uses 85m - 90m, but when I enable the delay pool parameters the memory usage suddenly rise up 2GB. The memory keeps on increasing until the system is out of resource. The following are my delay pool settings: delay_pools 1 delay_class 1 1 delay_access 1 allow all delay_parameters 1 192000/192000 Is there anything I am missing here or is it a bug with Squid ?

    Read the article

  • How to hide website's real address

    - by Nick
    I'm building a website for public use. It's a sharing site: everyone is allowed to download specific content, but I want to make sure nobody knows where the files are actually kept. So I've decided to use URL forwarding, i.e. when someone visits fakesite.com, it serves realsite.com's content without revealing or redirecting to realsite.com. Question: I don't know how to make this work. Please help me by explaining how to set up this kind of URL forwarding! Thanks!
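
    What is described is a reverse proxy rather than a redirect: a redirect by definition sends the browser to realsite.com, so fakesite.com has to fetch and relay the content itself. A minimal nginx sketch (assuming nginx runs on fakesite.com's server):

        server {
            listen 80;
            server_name fakesite.com;
            location / {
                proxy_pass       http://realsite.com;
                proxy_set_header Host realsite.com;
            }
        }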

    Read the article

  • Shared memory multiprocesses

    - by poly
    I'm building a multi-process application and I need to allocate session IDs. The session ID is 32-bit, and of course it can't be used twice while a session is live. I'm currently using a DB that stores all the IDs in a table, where the table is (int key, char used(1)) // 1 is used, 0 is not. To claim an ID I do the following:

        1. lock table
        2. get one unused key for the session
        3. update its used field to 1
        4. unlock

    After the session is finished, the process frees the key like this:

        1. lock table
        2. update the key's used field to 0
        3. unlock

    I'm really wondering whether this is a good/fast implementation. Please note it's a multi-process application.

    Read the article

  • Apache with mod_php high memory utilization

    - by Raj
    We have a Magento application deployed on Apache with mod_php and MySQL. I have observed that sometimes the Apache server starts consuming a lot of memory, which causes swapping and results in high load on the servers. Whenever there is high load, the Apache processes causing it are in sleep mode on the MySQL end and in the CLOSE_WAIT state on the client side. Any help is appreciated to resolve this issue.
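
    A common first step while investigating is to bound and recycle the Apache children so leaked PHP memory cannot push the box into swap (a sketch, assuming the prefork MPM that mod_php typically runs under; the numbers are placeholders to size against available RAM):

        <IfModule mpm_prefork_module>
            MaxClients           100     # MaxRequestWorkers on Apache 2.4; cap concurrency below the swap line
            MaxRequestsPerChild 1000     # recycle children so leaked memory is returned to the OS
        </IfModule>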

    Read the article

  • Making Use of a Class C IP Address

    All search engines use a website's backlinks and PageRank to gauge the quality of its links. It is the value that such backlinks provide that really matters when it comes to staying ahead of the competition.

    Read the article

  • Is running programs by address common?

    - by dgood1
    I have read some of the things posted here, and I keep seeing people running stuff like /foldername/executable -cmd NAME (I was reading about a programmer using Eclipse, testing something he had made). I don't see things like that when I run programs here (Ubuntu 12.04), because of the launcher and the Ubuntu button at the very top; and Eclipse Indigo has a button for running and testing the things it builds. I'm just asking how and why this is common. (I assume it's done in the Terminal [Ctrl+Alt+T], but I'm not sure.)
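
    Running by explicit path is common, and it does happen in the Terminal: the shell only searches $PATH for bare command names, while a name containing a slash runs exactly that file (a sketch; firefox and ./myprogram are placeholders):

        which firefox           # where a bare name resolves, e.g. /usr/bin/firefox
        /usr/bin/firefox &      # the same program, launched by its full path
        ./myprogram -cmd NAME   # "./" runs a freshly built executable from the current directory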

    Read the article
