Search Results

Search found 1804 results on 73 pages for 'rich'.


  • The Power of AJAX and How to Use it

    Since the term was coined in 2005, AJAX (Asynchronous JavaScript and XML) has changed the web as we know it today. It has helped websites evolve into RIAs (rich internet applications) by allowing a page to make requests directly to a web server without reloading. This capability is vital to replicating the rich user experience of desktop client applications. The purpose of this post is to help newcomers understand the different ways AJAX can be used to create RIAs.

    Read the article

  • Another rsAccessDenied problem with SSRS

    - by Rich.Carpenter
    I've read through a lot of posts regarding this problem, but none of the proposed solutions have worked for me. I continue to get an error stating, "The permissions granted to user '\Rich' are insufficient for performing this operation. (rsAccessDenied)." If I am logged in as the local administrator account, entering the Reporting Services URL in IE doesn't give me that error, but it takes me to a blank page. I haven't been able to get to an SSRS home page at all. Order of operations: I installed and patched Windows 7 Ultimate 64-bit. I installed SQL Server Express 2008 with Advanced Services using the MS web installer. I downloaded and installed SP1 for SQL Server Express 2008. I've tried running IE as administrator, adding the local machine to trusted sites, and just about every other suggestion I've found. I even ran the entire installation logged in as the local administrator. Nothing seems to work. Could someone please tell me, considering the above installation process, what I should do next to make this work?

    Read the article

  • genStrAsCharArray optimisation benefits

    - by Rich
    Hi, I am looking into the options available to me for optimising the performance of JBoss 5.1.0. One of the options I am looking at is setting genStrAsCharArray to true in <JBOSS_HOME>/server/<PROFILE>/deployers/jbossweb.deployer/web.xml. This affects the generation of .java code from .JSPs. The comment describes this flag as: "Should text strings be generated as char arrays, to improve performance in some cases?" I have a few questions about this:
    1. Is this the generation of Strings in the dynamic parts of the JSP page (i.e. each time the page is called) or is it the generation of Strings in the static parts (i.e. when the .java is built from the JSP)?
    2. "In some cases" - which cases are these? What are the situations where the performance is worse?
    3. Does this speed up the generation of the .java, the compilation of the .class, or the execution of the .class?
    4. At a more technical level (and the answer to this will probably depend on the answer to part 1), why can the use of char arrays improve performance?
    Thanks in advance, Rich
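    For context, a hedged illustration (not actual JBossWeb/Jasper output - the class and chunk names are invented) of the two shapes of servlet code a Jasper-style JSP compiler can emit for the static template text of a page, which is the part this flag affects:
        // Illustrative sketch only: the shape of code a Jasper-style JSP compiler
        // can emit for static template text. Not actual JBossWeb output.
        import java.io.IOException;
        import java.io.Writer;

        public class GeneratedJspSketch {

            // With the flag off: template text stays as String literals.
            static void writeAsStrings(Writer out) throws IOException {
                out.write("<html><body><h1>Hello</h1>"); // Writer.write(String) copies chars out of the String on every call
                out.write("</body></html>");
            }

            // With the flag on: template text is converted to char[] once, when the class is initialised.
            private static final char[] CHUNK_1 = "<html><body><h1>Hello</h1>".toCharArray();
            private static final char[] CHUNK_2 = "</body></html>".toCharArray();

            static void writeAsCharArrays(Writer out) throws IOException {
                out.write(CHUNK_1); // Writer.write(char[]) can copy the array into the output buffer directly
                out.write(CHUNK_2);
            }
        }
    Under that assumption, the char[] conversion is paid once when the generated class is initialised (i.e. it concerns the static parts), and any saving shows up at execution time.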

    Read the article

  • How to get a debug flow of execution in C++

    - by Rich
    Hi, I work on a global trading system which supports many users. Each user can book, amend, edit and delete trades. The system is regulated by a central deal capture service, which informs all users of any updates that occur. The problem comes when we have crashes: the production environment is impossible to re-create on a test system, so I have to rely on crash dumps and log files. However, these don't tell me what the user had been doing. I'd like a system that would, at the time of crashing, dump out a history of what the user had been doing. Anything I add has to go into the live environment, so it can't impact performance too much. Idea-wise, I was thinking of a macro at the top of each function which acts like a stack trace (except that I could supply additional user information, like trade IDs, user dialog choices, etc.). The system would record these traces on a per-thread basis and keep a history in a cyclic buffer (varying in size, depending on how much history you wanted to capture). Then, on a crash, I could dump this history. I'd really like to hear if anyone has a better solution, or knows of an existing framework. Thanks, Rich
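    The question is about C++, but the mechanism it sketches - a per-thread cyclic buffer of breadcrumbs that gets dumped when something goes wrong - is language-neutral. A minimal, hypothetical sketch of that idea in Java (all names invented; a real C++ version would use a macro or RAII object at the top of each function instead of explicit calls):
        import java.util.ArrayDeque;

        // Per-thread breadcrumb trail kept in a bounded ring buffer and dumped on crash.
        public final class CrashTrail {
            private static final int CAPACITY = 256; // how much history to keep per thread

            private static final ThreadLocal<ArrayDeque<String>> TRAIL =
                    ThreadLocal.withInitial(ArrayDeque::new);

            // Called at the top of each traced function, with optional user context.
            public static void record(String functionName, String context) {
                ArrayDeque<String> trail = TRAIL.get();
                if (trail.size() == CAPACITY) {
                    trail.removeFirst(); // drop the oldest entry: cyclic-buffer behaviour
                }
                trail.addLast(functionName + " [" + context + "]");
            }

            // Called from a crash/uncaught-exception handler to dump recent history.
            public static void dump() {
                System.err.println("=== trail for " + Thread.currentThread().getName() + " ===");
                TRAIL.get().forEach(System.err::println);
            }

            public static void main(String[] args) {
                Thread.setDefaultUncaughtExceptionHandler((thread, error) -> dump());
                record("bookTrade", "tradeId=42");
                record("amendTrade", "tradeId=42, field=notional");
                throw new IllegalStateException("simulated crash"); // triggers the dump
            }
        }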

    Read the article

  • Long primitive or AtomicLong for a counter?

    - by Rich
    Hi, I need a counter of type long with the following requirements/facts:
    - Incrementing the counter should take as little time as possible.
    - The counter will only be written to by one thread.
    - Reading from the counter will be done in another thread.
    - The counter will be incremented regularly (as much as a few thousand times per second), but will only be read once every five seconds.
    - Precise accuracy isn't essential; a rough idea of the size of the counter is good enough.
    - The counter is never cleared or decremented.
    Based upon these requirements, how would you choose to implement your counter? As a simple long, as a volatile long, or using an AtomicLong? Why? At the moment I have a volatile long but was wondering whether another approach would be better. I am also incrementing my long by doing ++counter as opposed to counter++. Is this really any more efficient (as I have been led to believe elsewhere) because there is no assignment being done? Thanks in advance, Rich
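    For illustration, a minimal sketch of the two contenders under the single-writer, single-reader pattern described above (class and method names are invented):
        import java.util.concurrent.atomic.AtomicLong;

        // Sketch of a single-writer, single-reader counter implemented both ways.
        public class CounterSketch {

            // volatile long: each write is immediately visible to the reader thread.
            // ++volatileCounter is a non-atomic read-modify-write, but with exactly one
            // writer thread no increments can be lost.
            private volatile long volatileCounter = 0;

            void incrementVolatile() {   // call only from the single writer thread
                ++volatileCounter;
            }

            long readVolatile() {        // safe from the reader thread
                return volatileCounter;
            }

            // AtomicLong: increments stay correct even with multiple writers,
            // at the cost of a CAS/locked add on every increment.
            private final AtomicLong atomicCounter = new AtomicLong();

            void incrementAtomic() {
                atomicCounter.incrementAndGet();
            }

            long readAtomic() {
                return atomicCounter.get();
            }
        }
    Either satisfies the stated access pattern; the trade-off is per-increment cost versus safety if a second writer is ever added.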

    Read the article

  • MySQL connection attempt works fine in 5.2.9 but not in 5.3.0 - Help?

    - by Rich
    Hi, I'm having trouble making a secondary MySQL connection (to a separate, external DB) in my code. It works fine in PHP 5.2.9 but fails to connect in PHP 5.3.0. I'm aware of at least some of the changes needed to make successful MySQL connections in the newer version of PHP, and have succeeded before, so I'm not sure why it isn't working this time. I already have a db connection open to a local database. The function below is then used to make an additional connection to a separate, remote database. The included config file simply contains the external database details (host, user, pass and name); I have checked and it is being included correctly.
    function connectDP() {
        global $dpConnection;
        include("secondary_db_config.php");
        $dpConnection = mysql_connect($dp_dbHost, $dp_dbUser, $dp_dbPass, true)
            or die("ERROR: Unable to connect to Deployment Platform");
        mysql_select_db($dp_dbName, $dpConnection)
            or die("ERROR 006: Unable to select Deployment Platform Database");
    }
    I then attempt to make this new connection simply by calling the function: connectDP(); But when loading the page (in 5.3.0), I get the message: "ERROR: Unable to connect to Deployment Platform". I'm passing the optional new_link boolean flag as the fourth argument to mysql_connect() and it's still not working. I've been wracking my brain this morning trying to figure out why this connection doesn't work (while I've done something very similar elsewhere, to a separate second database, and that does work). Any help would be appreciated. Thanks! Rich

    Read the article

  • Is there a maximum number of input controls that can be used on an HTML form?

    - by Rich
    I have an ambitious requirement for an asp.net 2.0 web page that contains a table (gridview), and each row in the grid contains 6 select (dropdown) controls for data entry. The number of rows that will be displayed is dependent upon the user's search parameters, which are specified in another area of the page. Unfortunately, with the default (and even basic) search parameters specified, the grid could contain several hundred rows. I've noticed that the browser, in this case IE8, starts behaving rather erratically once I reach a large number of rows -- no documented evidence for the number of rows where this begins to be a problem. For example, trying to view the source of the page results in a message from IE stating that there was a problem with the page that forced the browser to reload it, and I never get the source. Obviously the page loads and renders rather slowly also. I know that my solution is probably going to involve paging the gridview such that it only displays 20 or so rows per page, and I'll have to write code to handle the saving of changes in the dropdown values when the user changes pages. I can probably turn off viewstate on the gridview also. However, the question I really want to pose is this -- has anyone seen a documented rule indicating the maximum number of input controls that an HTML browser form is supposed to be able to contain? I could not find anything on the Internet after doing a search, and I suspect the answer may be whatever the browser can handle based on the machine configuration it is running on. Any rules of thumb you use? Thanks for any suggestions. Rich

    Read the article

  • Perl IO modules possibly causing issues in Net::DNS module

    - by Rich
    Hi! I’m porting some software that I wrote for a White Russian OpenWRT system to a new Kamikaze 8.09.1 OpenWRT system, but I am having some serious issues that I’m hoping you can help me with.
    Old system: Linux kernel 2.4.34, MIPSEL arch, Perl 5.8.7, Net::DNS 0.48, IO 1.21, IO::Socket 1.28, IO::Socket::INET 1.28
    New system: Linux kernel 2.6.26.8, MIPS arch, Perl 5.10.0, Net::DNS 0.66, IO 1.23_01, IO::Socket 1.30_01, IO::Socket::INET 1.31
    First, let me provide some background information. I am trying to resolve my server (clearprobe.winbeam.com) from within my Perl program and see the following if I enable debugging in Net::DNS:
    resolve: Server 'clearprobe-ddns.winbeam.com'
    ;; query(clearprobe-ddns.winbeam.com)
    ;; setting up an AF_INET() family type UDP socket
    ;; send_udp(192.168.88.1:53)
    ;; send_udp(4.2.2.2:53)
    ;; send_udp(192.168.88.1:53)
    ;; send_udp(4.2.2.2:53)
    resolve: res->errorstring: query timed out
    Both of these servers resolve clearprobe.winbeam.com fine from the command line:
    root@cwb-2-11:~# echo "nameserver 192.168.88.1" > /etc/resolv.conf
    root@cwb-2-11:~# nslookup clearprobe-ddns.winbeam.com
    Server: 192.168.88.1
    Address 1: 192.168.88.1 router
    Name: clearprobe-ddns.winbeam.com
    Address 1: 64.13.48.40 64-13-48-40.war.clearwire-dns.net
    root@cwb-2-11:~# echo "nameserver 4.2.2.2" > /etc/resolv.conf
    root@cwb-2-11:~# nslookup clearprobe-ddns.winbeam.com
    Server: 4.2.2.2
    Address 1: 4.2.2.2 vnsc-bak.sys.gtei.net
    Name: clearprobe-ddns.winbeam.com
    Address 1: 64.13.48.40 64-13-48-40.war.clearwire-dns.net
    Using Perl’s call to the C gethostbyaddr() function works fine, but I need to do another lookup later in the software which requires that I specify the nameserver (clearprobe-ddns.winbeam.com is the authority for my internal DNS zone), hence my Net::DNS requirement.
    Now, here is the IO module-specific information. What I am seeing is that the reply is coming back from the nameserver (confirmed via tcpdump – I can send the captures if you’d like), but the UDP packets are sitting in the process’s UDP receive queue pending reception by Net::DNS (the approx. 1752 bytes per response stay queued waiting for $sel->can_read()):
    root@cwb-2-11:~# netstat -una
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address Foreign Address State
    udp 1752 0 0.0.0.0:52680 0.0.0.0:*
    root@cwb-2-11:~# netstat -una
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address Foreign Address State
    udp 5256 0 0.0.0.0:52680 0.0.0.0:*
    If I force $sock[AF_INET]->recv($buf, $self->_packetsz) around line 803 of /usr/lib/perl5/5.10/Net/DNS/Resolver/Base.pm, instead of waiting for IO::Select’s can_read() function (@ready = $sel->can_read($timeout)) to populate @ready, the response is received and processed. Any idea what could be causing this issue?
    In a possibly related matter, I noticed in another script that the following code fails in the same manner (network responses stay in the process’s TCP receive queue) on the new system:
    $sock = new IO::Socket::INET( PeerAddr => "$server", PeerPort => 37, Proto => 'tcp', Timeout => 5 );
    Whereas the following code works:
    $sock = new IO::Socket::INET( PeerAddr => "$server", PeerPort => 37, Proto => 'tcp' );
    I have looked through the Net::DNS code and don’t see a timeout passed for the UDP sockets, so I am not sure if this is related or not. Please let me know if I can provide you with any further information in order to help diagnose this issue. Thanks! -Rich

    Read the article

  • Is That REST API Really RPC? Roy Fielding Seems to Think So.

    - by Rich Apodaca
    A large amount of what I thought I knew about REST is apparently wrong - and I'm not alone. This question has a long lead-in, but it seems to be necessary because the information is a bit scattered. The actual question comes at the end if you're already familiar with this topic.
    From the first paragraph of Roy Fielding's "REST APIs must be hypertext-driven", it's pretty clear he believes his work is being widely misinterpreted: "I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating."
    Fielding goes on to list several attributes of a REST API. Some of them seem to go against both common practice and common advice on SO and other forums. For example:
    "A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API)." ...
    "A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server)." ...
    "A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types." ...
    The idea of "hypertext" plays a central role - much more so than URI structure or what HTTP verbs mean. "Hypertext" is defined in one of the comments: "When I [Fielding] say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction. Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types."
    I'm guessing at this point, but the first two points above seem to suggest that API documentation for a Foo resource that looks like the following leads to tight coupling between client and server and has no place in a RESTful system:
    GET /foos/{id}  # read a Foo
    POST /foos/{id} # create a Foo
    PUT /foos/{id}  # update a Foo
    Instead, an agent should be forced to discover the URIs for all Foos by, for example, issuing a GET request against /foos. (Those URIs may turn out to follow the pattern above, but that's beside the point.) The response uses a media type that is capable of conveying how to access each item and what can be done with it, giving rise to the third point above. For this reason, API documentation should focus on explaining how to interpret the hypertext contained in the response. Furthermore, every time a URI to a Foo resource is requested, the response contains all of the information needed for an agent to discover how to proceed by, for example, accessing associated and parent resources through their URIs, or by taking action after the creation/deletion of a resource. The key to the entire system is that the response consists of hypertext contained in a media type that itself conveys to the agent options for proceeding. It's not unlike the way a browser works for humans. But this is just my best guess at this particular moment.
    Fielding posted a follow-up in which he responded to criticism that his discussion was too abstract, lacking in examples, and jargon-rich: "Others will try to decipher what I have written in ways that are more direct or applicable to some practical concern of today. I probably won’t, because I am too busy grappling with the next topic, preparing for a conference, writing another standard, traveling to some distant place, or just doing the little things that let me feel I have earned my paycheck."
    So, two simple questions for the REST experts out there with a practical mindset: how do you interpret what Fielding is saying, and how do you put it into practice when documenting/implementing REST APIs?
    Edit: this question is an example of how hard it can be to learn something if you don't have a name for what you're talking about. The name in this case is "Hypermedia as the Engine of Application State" (HATEOAS).
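    For illustration, a hypothetical sketch of the "hypertext-driven" client style described above: the only URI the client knows in advance is the entry point, and everything else is discovered from link relations in the responses. The URL, the link-relation names and the extractLink() parsing step are all stand-ins, not any real API or standard media type (Java 11's java.net.http client is used here):
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        // Hypothetical hypermedia-driven client: URIs are followed, never constructed.
        public class HypermediaClientSketch {
            private static final HttpClient HTTP = HttpClient.newHttpClient();

            public static void main(String[] args) throws Exception {
                String entryPoint = "https://api.example.com/"; // the one bookmarked URI
                String root = get(entryPoint);
                String foosUri = extractLink(root, "foos");     // follow a named link relation
                String foos = get(foosUri);
                String firstFooUri = extractLink(foos, "item"); // discover item URIs, don't build them
                System.out.println(get(firstFooUri));
            }

            private static String get(String uri) throws Exception {
                HttpRequest request = HttpRequest.newBuilder(URI.create(uri)).GET().build();
                return HTTP.send(request, HttpResponse.BodyHandlers.ofString()).body();
            }

            // Stand-in for parsing whatever hypermedia format the API's media type defines.
            private static String extractLink(String representation, String rel) {
                throw new UnsupportedOperationException("depends on the media type in use");
            }
        }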

    Read the article

  • Move database from SQL Server 2012 to 2008

    - by Rich
    I have a database on a SQL Server 2012 instance which I would like to copy to a 2008 server. The 2008 server cannot restore backups created by a 2012 server (I have tried). I cannot find any options in 2012 to create a 2008-compatible backup. Am I missing something? Is there an easy way to export the schema and data to a version-agnostic format which I can then import into 2008? The database does not use any 2012-specific features. It contains tables, data and stored procedures. Here is what I have tried so far:
    I tried "Tasks" - "Generate Scripts" on the 2012 server, and I was able to generate the schema (including stored procedures) as a SQL script. This didn't include any of the data, though.
    After creating that schema on my 2008 machine, I was able to open the "Export Data" wizard on the 2012 machine, and after configuring the 2012 as source machine and the 2008 as target machine, I was presented with a list of tables which I could copy. I selected all my tables (300+) and clicked through the wizard. Unfortunately it spends ages generating its scripts, then fails with errors like "Failure inserting into the read-only column 'FOO_ID'".
    I also tried the "Copy Database Wizard", which claimed to be able to copy "from 2000 or later to 2005 or later". It has two modes:
    1) "Detach and attach", which failed with: Message: Index was outside the bounds of the array. StackTrace: at Microsoft.SqlServer.Management.Smo.PropertyBag.SetValue(Int32 index, Object value) ... at Microsoft.SqlServer.Management.Smo.DataFile.get_FileName()
    2) "SQL Management Object Method", which failed with error "Cannot read property IsFileStream. This property is not available on SQL Server 7.0."

    Read the article

  • Ubuntu server has slow performance

    - by Rich
    I have a custom-built Ubuntu 11.04 server with a 6-disk software RAID 10 primary drive. On it I'm primarily running PostgreSQL and a few other utilities that stream data from the web. I often find that after a few hours of uptime the server starts to lag in all kinds of processes. For example, it may take 10-15 seconds after log-in to get a shell prompt. It might take 5-10 seconds for top to come up. An ls might take a second or two. When I look at top there is almost no CPU usage. There's a fair amount of memory used by the PostgreSQL server, but not enough to bleed into swap. I have no idea where to go from here, other than to suspect the RAID 10 (I've only ever had software RAID 1s before).
    Edit: Output from top:
    top - 11:56:03 up 1:46, 3 users, load average: 0.89, 0.73, 0.72
    Tasks: 119 total, 1 running, 118 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.2%us, 0.0%sy, 0.0%ni, 93.5%id, 6.2%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 16325596k total, 3478248k used, 12847348k free, 20880k buffers
    Swap: 19534176k total, 0k used, 19534176k free, 3041992k cached
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    1747 woodsp 20 0 109m 10m 4888 S 1 0.1 0:42.70 python
    357 root 20 0 0 0 0 S 0 0.0 0:00.40 jbd2/sda3-8
    1 root 20 0 24324 2284 1344 S 0 0.0 0:00.84 init
    2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd
    3 root 20 0 0 0 0 S 0 0.0 0:00.24 ksoftirqd/0
    6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/0
    7 root RT 0 0 0 0 S 0 0.0 0:00.01 watchdog/0
    8 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1
    10 root 20 0 0 0 0 S 0 0.0 0:00.02 ksoftirqd/1
    12 root RT 0 0 0 0 S 0 0.0 0:00.01 watchdog/1
    13 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/2
    14 root 20 0 0 0 0 S 0 0.0 0:00.00 kworker/2:0
    15 root 20 0 0 0 0 S 0 0.0 0:00.00 ksoftirqd/2
    16 root RT 0 0 0 0 S 0 0.0 0:00.01 watchdog/2
    17 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/3
    18 root 20 0 0 0 0 S 0 0.0 0:00.00 kworker/3:0
    19 root 20 0 0 0 0 S 0 0.0 0:00.02 ksoftirqd/3
    20 root RT 0 0 0 0 S 0 0.0 0:00.01 watchdog/3
    21 root 0 -20 0 0 0 S 0 0.0 0:00.00 cpuset
    22 root 0 -20 0 0 0 S 0 0.0 0:00.00 khelper
    23 root 20 0 0 0 0 S 0 0.0 0:00.00 kdevtmpfs
    24 root 0 -20 0 0 0 S 0 0.0 0:00.00 netns
    26 root 20 0 0 0 0 S 0 0.0 0:00.00 sync_supers
    df -h
    rpsharp@ncp-skookum:~$ df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda3 1.8T 549G 1.2T 32% /
    udev 7.8G 4.0K 7.8G 1% /dev
    tmpfs 3.2G 492K 3.2G 1% /run
    none 5.0M 0 5.0M 0% /run/lock
    none 7.8G 0 7.8G 0% /run/shm
    /dev/sda2 952M 128K 952M 1% /boot/efi
    /dev/md0 5.5T 562G 4.7T 11% /usr/local
    free -m
    psharp@ncp-skookum:~$ free -m
    total used free shared buffers cached
    Mem: 15942 3409 12533 0 20 2983
    -/+ buffers/cache: 405 15537
    Swap: 19076 0 19076
    tail -50 /var/log/syslog
    Jul 3 06:31:32 ncp-skookum rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="1070" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
    Jul 3 06:39:01 ncp-skookum CRON[14211]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
    Jul 3 06:40:01 ncp-skookum CRON[14223]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
    Jul 3 07:00:01 ncp-skookum CRON[14328]: (woodsp) CMD (/home/woodsp/bin/mail_tweetupdate # email an update)
    Jul 3 07:00:01 ncp-skookum CRON[14327]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
    Jul 3 07:00:28 ncp-skookum sendmail[14356]: q63E0SoZ014356: from=woodsp, size=2328, class=0, nrcpts=2, msgid=<[email protected]>, relay=woodsp@localhost
    Jul 3 07:00:29 ncp-skookum sm-mta[14357]: q63E0Si6014357: from=<[email protected]>, size=2569, class=0, nrcpts=2, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
    Jul 3 07:00:29 ncp-skookum sendmail[14356]: q63E0SoZ014356: to=Spencer Wood <[email protected]>,Martin Lacayo <[email protected]>, ctladdr=woodsp (1004/1005), delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=62328, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (q63E0Si6014357 Message accepted for delivery)
    Jul 3 07:00:29 ncp-skookum sm-mta[14359]: STARTTLS=client, relay=mx3.stanford.edu., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-SHA, bits=256/256
    Jul 3 07:00:29 ncp-skookum sm-mta[14359]: q63E0Si6014357: to=<[email protected]>,<[email protected]>, ctladdr=<[email protected]> (1004/1005), delay=00:00:01, xdelay=00:00:00, mailer=esmtp, pri=152569, relay=mx3.stanford.edu. [171.67.219.73], dsn=2.0.0, stat=Sent (Ok: queued as 8F3505802AC)
    Jul 3 07:09:08 ncp-skookum CRON[14396]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
    Jul 3 07:17:01 ncp-skookum CRON[14438]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
    Jul 3 07:20:01 ncp-skookum CRON[14453]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
    Jul 3 07:39:01 ncp-skookum CRON[14551]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
    Jul 3 07:40:01 ncp-skookum CRON[14562]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
    Jul 3 08:00:01 ncp-skookum CRON[14668]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
    Jul 3 08:09:01 ncp-skookum CRON[14724]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
    Jul 3 08:17:01 ncp-skookum CRON[14766]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
    Jul 3 08:20:01 ncp-skookum CRON[14781]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
    Jul 3 08:39:01 ncp-skookum CRON[14881]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
    Jul 3 08:40:01 ncp-skookum CRON[14892]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
    Output of hdparm -t /dev/sd{a,b,c,d,e,f} - this looks suspicious?
    /dev/sda: Timing buffered disk reads: 2 MB in 4.84 seconds = 423.39 kB/sec
    /dev/sdb: Timing buffered disk reads: 420 MB in 3.01 seconds = 139.74 MB/sec
    /dev/sdc: Timing buffered disk reads: 390 MB in 3.00 seconds = 129.87 MB/sec
    /dev/sdd: Timing buffered disk reads: 416 MB in 3.00 seconds = 138.51 MB/sec
    /dev/sde: Timing buffered disk reads: 422 MB in 3.00 seconds = 140.50 MB/sec
    /dev/sdf: Timing buffered disk reads: 416 MB in 3.01 seconds = 138.26 MB/sec

    Read the article

  • directoryperdb issue

    - by Rich Blumer
    I installed MongoDB to run as a Windows Service on Win 7 and everything runs well. However, when I attempt to use the --directoryperdb option, it is not recognized. Does anyone know how to resolve this issue?

    Read the article

  • What are your favourite free fonts?

    - by Rich Bradshaw
    For super users, nicer fonts are often an important issue. Now that many sites use @font-face to embed fonts in the page, having the same fonts installed locally means less downloading, and documents produced with custom fonts often look nicer. What are your favourite fonts? Ideally fonts that have Unicode support and proper ligatures and kerning. Please add pictures if possible!

    Read the article

  • Unicast traffic between hosts on a switch leaving the switch by its uplink. Why?

    - by Rich Lafferty
    I have a weird thing happening on our network at my office which I can't quite get my head around. In particular I can't tell if it's a problem with a switch, or a problem with configuration. We have a Cisco SG300-52 switch (sw01) in the top of a rack in our server room, connected to another SG300-28 that acts as our core switch (core01). Both run layer 2 only; our firewalls do routing between VLANs. They have a dozen or so VLANs between them. Gi1 on sw01 is a trunk port connected to gi1 on core01. (Disclosure: There are other switches in our environment but I'm pretty sure I've isolated the problem down to these two. Happy to provide more info if necessary.)
    The behaviour I'm seeing is limited to one VLAN, vlan 12 -- or, at least, it's not happening on the other ones I checked (it's hard to guarantee the absence of packets), and it is: sw01 is forwarding, to core01, traffic which is between two hosts which are both plugged into sw01. (I noticed this because the IDS in our firewall gave a false positive on traffic which should not reach the firewall.) We noticed this mostly between our two dhcp/dns servers, net01 (10.12.0.10) and net02 (10.12.0.11). net01 is physical hardware and net02 is on a VMware ESX server. net01 is connected to gi44 on sw01 and net02's ESX server to gi11.
    [net01]----gi44-[sw01]-gi1----gi1-[core01]
    [net02]----gi11/
    Let's see some interfaces! Remember, vlan 12 is the problem vlan. Of the others I explicitly verified that vlan 27 was not affected. Here are the two hosts' ports (esx01 contains net02):
    sw01#sh run int gi11
    interface gigabitethernet11
    description esx01
    lldp med disable
    switchport trunk allowed vlan add 5-7,11-13,100
    switchport trunk native vlan 27
    !
    sw01#sh run int gi44
    interface gigabitethernet44
    description net01-1
    lldp med disable
    switchport mode access
    switchport access vlan 12
    !
    Here's the trunk on sw01:
    sw01#sh run int gi1
    interface gigabitethernet1
    description "trunk to core01"
    lldp med disable
    switchport trunk allowed vlan add 4-7,11-13,27,100
    !
    And the other end of the trunk on core01:
    interface gigabitethernet1
    description sw01
    macro description switch
    switchport trunk allowed vlan add 2-7,11-16,27,100
    !
    I have a monitor port on core01, thus:
    core01#sh run int gi12
    interface gigabitethernet12
    description "monitor port"
    port monitor GigabitEthernet 1
    !
    And the monitor port on core01 sees unicast traffic going between net01 and net02, both of which are on sw01! I've verified this with a monitor port on sw01 that sees the net01-net02 unicast traffic leaving via gi1 too. sw01 knows that both of those hosts are on ports that are not its trunk port:
    :) ratchet$ arp -a | grep net
    net02.2ndsiteinc.com (10.12.0.11) at 00:0C:29:1A:66:15 [ether] on eth0
    net01.2ndsiteinc.com (10.12.0.10) at 00:11:43:D8:9F:94 [ether] on eth0
    sw01#sh mac addr addr 00:0C:29:1A:66:15
    Aging time is 300 sec
    Vlan Mac Address Port Type
    -------- --------------------- ---------- ----------
    12 00:0c:29:1a:66:15 gi11 dynamic
    sw01#sh mac addr addr 00:11:43:D8:9F:94
    Aging time is 300 sec
    Vlan Mac Address Port Type
    -------- --------------------- ---------- ----------
    12 00:11:43:d8:9f:94 gi44 dynamic
    I also brought up an unused port on sw01 on vlan 12, but the unicast traffic was (as best as I could tell) not coming out that port. So it doesn't look like sw01 is pushing it out all its ports, just the right ports and also gi1! I've verified that sw01 is not filling up its address table:
    sw01#sh mac addr count
    This may take some time.
    Capacity : 8192
    Free : 7983
    Used : 208
    The full configs for both core01 and sw01 are available: core01, sw01. Finally, versions:
    sw01#sh ver
    SW version 1.1.2.0 ( date 12-Nov-2011 time 23:34:26 )
    Boot version 1.0.0.4 ( date 08-Apr-2010 time 16:37:57 )
    HW version V01
    core01#sh ver
    SW version 1.1.2.0 ( date 12-Nov-2011 time 23:34:26 )
    Boot version 1.1.0.6 ( date 11-May-2011 time 18:31:00 )
    HW version V01
    So my understanding is this: sw01 should take unicast traffic for net01 and send it only out net02's port, and vice versa; none of it should go out sw01's uplink. But core01, receiving traffic on gi1 for a host it knows is on gi1, is right in sending it out all of its ports. (That is: sw01 is misbehaving, but core01 is doing what it should given the circumstances.) My question is: why is sw01 sending that unicast traffic out its uplink, gi1? (And pre-emptively: yes, I know SG300s leave much to be desired, and yes, we should have spanning-tree enabled, but that's where I'm at right now.)

    Read the article

  • Disable/enable the proxy on a tab by tab basis in Firefox

    - by Rich
    Is it possible to disable/enable the proxy connection on a tab-by-tab basis in Firefox? I can access the internet with the proxy enabled, and I can access our internal servers with the proxy disabled, but neither configuration allows the other. I have a feeling that Firefox 4's per tab profiles may permit this, but was wondering whether there was a way to achieve this on Firefox 3.6.13, perhaps through the use of an extension. For the moment I'm making do with quickProxy (not QuickProxy which I've just discovered while trying to find quickProxy) which allows me to switch the proxy on and off for the whole browser, but would prefer something more fine-grained.

    Read the article

  • Port Forwarding(?) TD-W8961nd

    - by rich
    I have a bit of a weird internet setup. I am connected via a decent WiFi connection (from work) which I pick up using a Buffalo AirStation Wireless-G box. This simply picks up the signal and gives me 4 ethernet ports to connect to. That's all fine and works as it should. I also have a TP-Link TD-W8961ND router which used to be connected to the AirStation via an ethernet cable so I could essentially have WiFi access in my house. To cut a long story short, I can't remember how the hell I got it to work and I can't find the notes I scribbled down on how to do it. I'm pretty sure I need to tell the router what IP to pick up the internet connection from, and have the local WiFi as a separate network. How the hell I do that, I have no idea right now. Can anyone give me some advice on this? If you need more information, ask and I will provide it. Cheers in advance. Edit: I'm at work at the moment so I can't give 100% of the details, but I will be able to later on.

    Read the article

  • Spreadsheet functions to query route planner for travel time/distance

    - by Rich
    I would like to achieve something whereby I have a spreadsheet such that the columns are:
    Column A - place name
    Column B - place name
    Column C - distance by road between the places in columns A and B
    Column D - travel time by road between the places in columns A and B
    I thought it might be possible using Google Docs' spreadsheet and its 'Google' functions, but I've not found any that might do the trick. In the end I could knock up an app to do it using the Google Maps API, but would rather avoid it if I can.

    Read the article

  • Windows XP taskbar is stuck on "always on top of other windows"

    - by Rich
    My Windows XP taskbar is stuck on "always on top of other windows". It's always visible, even when a full-screen app is running. I've gone into the taskbar properties and unchecked the box for "Keep the taskbar on top of other windows", but it makes no difference. Checked or unchecked, it's always on top of other windows. Any ideas?

    Read the article

  • Redirect anything within /attachments/ directory up one level but also removing anything after /attachments/

    - by Rich Staats
    WordPress creates an attachment page for any image uploaded through the post editor and "attached" to that post. After migrating a site, those attachment pages no longer exist, and we now have around 1000 links pointing at 404s. So I'm looking for a way to redirect any URL that has /attachment/ in its path back up one level (which happens to be the post page). For instance: http://mysite.com/2012/news/blog-post-title/attachment/image-page/ (which doesn't exist) will go to http://mysite.com/2012/news/blog-post-title/ (which does exist). Along with redirecting up one level, I also need to remove anything after /attachment/ (in this case "image-page"). Any suggestions? Thanks in advance.
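    For illustration, a minimal sketch of one way to do this, assuming the site runs on Apache with mod_rewrite available (the question doesn't say) - the rule and the 301 status are a sketch, not a tested WordPress-specific recipe:
        # .htaccess sketch: send /anything/attachment/whatever back to /anything/
        RewriteEngine On
        RewriteRule ^(.+)/attachment(/.*)?$ /$1/ [R=301,L]
    If WordPress's own rewrite block is present in the same .htaccess file, a rule like this would normally be placed above it so it is evaluated first.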

    Read the article

  • PCMCIA Does Not Work on Dell Latitude under Windows XP

    - by Rich
    PCMCIA cards do not work on a Dell Latitude E5510 under Windows XP, though they work fine on Windows 7. The cards are recognized by Device Manager but do not work correctly. For example, a CF adapter does not recognize when a card is inserted, and a network card is unable to obtain an IP address. Other systems that may be affected: Latitude E4310, Latitude E5410, Latitude E5510, Latitude E6410, Latitude E6410 ATG, Latitude E6510, Dell Precision Mobile Workstation M4500.

    Read the article

  • View Centreon graph data

    - by Rich
    Is there a way to view the data that Centreon uses to build graphs, from within the Centreon web interface? We have some gaps in some of our graphs, and I would like to see if it is a problem with the data being returned by the NRPE plugins. I have seen the Monitoring > Event Logs section, but I can't get it to show the returned string and status for each call to a particular plugin, which is what I'd like. Is there a hidden function to do this? Thanks in advance.

    Read the article

  • Looking for a tool to expand shortened urls

    - by Rich Seller
    The interwebs seem to be infested with shortened URLs (Twitter, I'm looking at you). I'm always reluctant to click these, as it's a leap into the unknown. Are there any browser plugins or Greasemonkey scripts that will auto-expand a shortened URL, or give me a tooltip with the resolved target? I've seen LongUrl.org, which has an API I could use to roll my own, but I'd like to avoid the effort if this is a solved problem.

    Read the article

  • How can I initiate a Windows task from a shortcut?

    - by rich
    I've created 2 tasks in Task Scheduler on my Vista PC: start uTorrent at 2am, then close uTorrent (and shut down the PC) at 7am. However, I'd only like these tasks to run if I've clicked a shortcut - and ideally show something in the tray as well, if possible. I'm not sure how to do this, though.

    Read the article
