Search Results

Search found 13889 results on 556 pages for 'results'.

Page 261/556 | < Previous Page | 257 258 259 260 261 262 263 264 265 266 267 268  | Next Page >

  • File downloaded from IIS6/Win2003 server to a Mac (and not PCs) is incredibly slow

    - by Simon Swords
    We have a test zip file on our customer's server (which we host for him) that is incredibly slow to download on a Mac. On a Mac, trying the download via Safari 5.0.3 and Chrome 8.0.552.231 results in a quick burst of normal download speed, which then plummets to almost no speed at all after 1 or 2 MB (between 1 and 5 Kb/s - yes, kilobits per second! - according to the network monitor). Downloading via Windows was fine and speedy. Tested via IE7 7.0.5730.13 and Chrome Portable 8.0.552.224 on Windows XP Pro, and IE8 8.0.7600.16385 in a Windows 7 virtual machine running via VirtualBox 4.0.0 r69151 on the same Mac mentioned above. Google hasn't helped us out on this occasion, possibly because the search terms I'm having to use are quite generic. Has anybody ever experienced this, and if so, how do we fix it? Thanks in advance.

  • SQL Azure Security: DoS

    - by Herve Roggero
    Since I decided to understand in more depth how SQL Azure works, I started to dig into its performance characteristics. So I decided to write an application that allows me to put SQL Azure to the test and compare results with a local SQL Server database. One of the options I added is the ability to issue the same command on multiple threads to get certain performance metrics. That's when I stumbled on an interesting security feature of SQL Azure: its Denial of Service (DoS) detection engine. What this security feature does is perform a check on the number of connections being established, and if the rate of connections is too high, SQL Azure blocks all communication from that machine. I am still trying to learn more about this specific feature, but it appears that going to the SQL Azure portal and testing the connection from the portal "resets" the feature and you are allowed to connect again... until you reach the login threshold. In the specific test I was performing, all the logins were successful. I haven't tried to log in with an invalid account or password... that will be for next time. On my LinkedIn group (SQL Server and SQL Azure Security: http://www.linkedin.com/groups?gid=2569994&trk=hb_side_g) Chip Andrews (www.sqlsecurity.com) pointed out that this feature in itself could present an internal threat. In theory, a rogue application could be issuing many login requests from a NATed network, which could potentially prevent any production system from connecting to SQL Azure within the same network. My initial response was that this could indeed be the case. However, while the TCP protocol contains the latest NATed IP address of a machine (which masks the origin of the machine making the SQL request), the TDS protocol itself contains the IP address of the machine making the initial request; so technically there would be a way for SQL Azure to block only the internal IP address making the rogue requests. So this warrants further investigation... stay tuned...
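
    For anyone wanting to reproduce this kind of test, here is a minimal sketch of a multi-threaded login test, written in Python with pyodbc purely for illustration (the original tool was a custom application; the connection string, driver name, and thread count below are placeholder assumptions):

        import threading
        import pyodbc

        # Placeholder connection string - substitute your own server, database, and credentials.
        CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                    "SERVER=tcp:yourserver.database.windows.net;DATABASE=testdb;"
                    "UID=testuser;PWD=secret")

        results = {"ok": 0, "failed": 0}
        lock = threading.Lock()

        def try_login():
            # Open and immediately close a connection, recording success or failure.
            try:
                pyodbc.connect(CONN_STR, timeout=5).close()
                outcome = "ok"
            except pyodbc.Error:
                outcome = "failed"
            with lock:
                results[outcome] += 1

        threads = [threading.Thread(target=try_login) for _ in range(50)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(results)  # a sudden wave of failures suggests the DoS detection has kicked in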

  • Is there a pattern or best practice for passing a reference type to multiple classes vs a static class?

    - by Dave
    My .NET application creates HTML files, and as such, the structure looks like: variable myData = BuildHomePage(); variable graph = new BuildGraphPage(myData); variable table = BuildTablePage(myData). BuildGraphPage and BuildTablePage both require access to data, the myData object. In the above example, I've passed the myData object to 2 constructors. This is what I'm doing now, in my current project. The myData object and its properties are all readonly. The problem is, the number of pages which will require this object has grown. In the real project there are currently 4, but the new spec is to have about 20. Passing this object to the constructor of each new object and assigning it to a field is a little time consuming, but not a hardship! This poses the question of whether it's better practice to continue as I have, or to refactor and create a new static class for myData which can be referenced from anywhere in my project. I guess my ability to use Google is poor, because I did try to find an appropriate pattern - I am sure this type of design must be commonplace - but my searches returned nothing. Is there a pattern which is suited, or do best practices lean towards one implementation over another?
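
    To make the two options concrete, here is a small sketch (written in Python only for brevity - the original project is .NET, and the class names are hypothetical) of passing an immutable data object into each page builder versus parking it in a static holder:

        from dataclasses import dataclass

        @dataclass(frozen=True)          # frozen ~ readonly: builders can read but not mutate it
        class ReportData:
            title: str
            rows: tuple

        # Option 1: constructor injection - each page builder receives the shared data explicitly.
        class GraphPage:
            def __init__(self, data: ReportData):
                self.data = data

        class TablePage:
            def __init__(self, data: ReportData):
                self.data = data

        my_data = ReportData("Monthly report", (1, 2, 3))
        pages = [GraphPage(my_data), TablePage(my_data)]

        # Option 2: a static holder - less wiring, but hides the dependency.
        class DataStore:
            current: ReportData = my_data

        class SummaryPage:
            def render(self):
                return DataStore.current.title

    Constructor injection keeps the dependency explicit and easy to test; the static holder saves typing but couples every page to global state, which is usually the deciding factor.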

  • Can't configure Bluetooth drivers for mac mini 4,1

    - by t0dbld
    Upon following the instructions in a thread for the Mac mini, which led me to a thread with a fix for Bluetooth, I followed the instructions (even though they are for other Macs), got it to fire up, and combined with hcitool I managed to connect my Apple trackpad and keyboard... however, upon reboot I'm back to Bluetooth telling me there are no drivers and nothing works, yet all the files are still there. This is driving me nuts and I can't figure out what I could be doing wrong - do you have any ideas what would make this happen? lsusb results:
        Bus 004 Device 007: ID 05ac:8218 Apple, Inc.
        Bus 004 Device 006: ID 05ac:820b Apple, Inc.
        Bus 004 Device 005: ID 05ac:820a Apple, Inc.
        Bus 004 Device 004: ID 0a5c:4500 Broadcom Corp. BCM2046B1 USB 2.0 Hub (part of BCM2046 Bluetooth)
        Bus 004 Device 003: ID 05ac:8242 Apple, Inc. IR Receiver [built-in]
        Bus 004 Device 002: ID 03f0:1c24 Hewlett-Packard
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 002: ID 054c:031f Sony Corp.
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

  • XMLPad – a new tool in my developer utility belt

    - by jamiet
    Yesterday I was on the lookout for a free tool that would help me write XPath statements. I put a shout out on Twitter and Johan Barnard replied saying: Give XMLPad a try http://www.wmhelp.com/xmlpad3.htm I'm sure there are legions of developers out there that know all about XMLPad but I had never heard about it, so I suspect some of you reading haven't either. Today I downloaded it to give it a run out and I gotta say - I love it. I only used it to do one thing - constructing an XPath expression to point to a particular Configuration definition in a .dtsx file - and it allowed me to do that with consummate ease. The feature I particularly loved was that, similar to Google Suggest, it showed me results from my expression as I typed. Here is a screenshot of my XPath expression to find (and just try saying this in a hurry) the value of a property whose DTS:Name attribute equals 'ConfigurationString', of a Configuration definition where the value of that Configuration definition's property whose DTS:Name attribute equals 'ObjectName' equals 'BIConfig'. My XPath expression:
        /DTS:Executable/DTS:Configuration[DTS:Property[@DTS:Name='ObjectName']='BIConfig']/DTS:Property[@DTS:Name='ConfigurationString']
    and believe me, there was no way I would have been able to come up with that without a tool to help me! So, an easy tip for you - if you need to write XPath expressions, download XMLPad for free from http://www.wmhelp.com/xmlpad3.htm and see what it can do for you. That's all. It's now Friday evening and I'm shutting down and relaxing before heading to the big game at Twickenham tomorrow (yes, I have a ticket). Have a good one! @Jamiet
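
    If you would rather script that lookup than eyeball it, here is a small sketch using Python's lxml (the file name is a hypothetical placeholder, and the DTS namespace URI shown is the one normally declared at the root of a .dtsx package - check yours before relying on it):

        from lxml import etree

        # Namespace prefix used in .dtsx files; verify against your package's root element.
        ns = {"DTS": "www.microsoft.com/SqlServer/Dts"}

        tree = etree.parse("package.dtsx")  # hypothetical file name
        expr = ("/DTS:Executable/DTS:Configuration"
                "[DTS:Property[@DTS:Name='ObjectName']='BIConfig']"
                "/DTS:Property[@DTS:Name='ConfigurationString']")
        for node in tree.xpath(expr, namespaces=ns):
            print(node.text)  # the ConfigurationString value of the BIConfig configuration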

  • Call issue with Freeswitch

    - by gbraad
    I am testing the following with Freeswitch and different devices (Nokia N900, Nokia E60, Ekiga) and have similar results between them. On the Freeswitch server (1.0.4 in multi-tenant mode) I have several user profiles for a domain, e.g. 1000, 1001 for host.com. The users are authenticated correctly and calls can be placed. When I place a call from a device registered as [email protected] to [email protected], it will show up at the other end (1002) as [email protected]. I would expect this call to show up as [email protected]. The IP address is the one of the Freeswitch server. Because of this, the calls are not correctly recognized by the address book on certain devices. Can the domain FQDN of the caller's domain/account be used, instead of the IP address of the server, in the SIP URI?

  • SQL Server 2005 Agent running SSIS job can't find file path

    - by alimack
    Basically I'm trying to run a functioning SSIS job (created in BIDS) under the SQL Server Agent - it reads a set of Excel spreadsheets and dumps the results into a table. The problem I'm having is getting the SQL Server Agent to read the file path; the relevant part of the error is: "0x80004005 Description: "'N:\Assets Property & Facilities Management\Monthly Absence.xls' is not a valid path. Make sure that the path name is spelled correctly and that you are connected to the server on which the file resides." I've tried using UNC paths (\\servername\share) but BIDS rewrites the paths to standard file paths (c:\directory\filename), and I've also tried a proxy which runs this step under an Admin account. I've also tried changing the path to UNC in the SSIS job on the server. Also, I'm forcing it to use the 32-bit DTEXEC, so it's not that either. I always get the same error - do I need to re-create the job from scratch?

  • Distribute Sort Sample Service

    - by kaleidoscope
    How it works? Using the front-end of the service, a user can specify a size in MB for the input data set to sort. Algorithm:
    · CreateAndSplit - The CreateAndSplit task generates the input data and stores it as 10 blobs in the utility storage. The URLs to these blobs are packaged as separate work items and written to the queue.
    · Separate - The Separate task reads the blobs with the random numbers created in the CreateAndSplit task and places the random numbers into buckets. The interval of the numbers that go into one bucket is chosen so that the expected amount of numbers (assuming a uniform distribution of the numbers in the original data set) is around 100 kB. Each bucket is represented as a blob container in utility storage. Whenever there are 10 blobs in one bucket (i.e., the placement in this bucket is complete because we had 10 original splits), the Separate task will generate a new Sort task and write the task into the queue.
    · Sort - The Sort task merges all blobs in a single bucket and sorts them using a standard sort algorithm. The result is stored as a blob in utility storage.
    · Concat - The Concat task merges the results of all Sort tasks into a single blob. This blob can be downloaded as a text file using this Web page. As the resulting file is presented in text format, the size of the file is likely to be larger than the specified input file.
    Anish
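
    As a mental model, the whole pipeline boils down to a distributed bucket sort. A minimal local sketch in Python (in-memory lists stand in for blobs, containers, and queues; the counts and bucket boundaries are illustrative, not the sample's actual values):

        import random

        def create_and_split(total, splits=10):
            # Generate the input data and divide it into "blobs" (here, plain lists).
            data = [random.random() for _ in range(total)]
            size = len(data) // splits
            return [data[i*size:(i+1)*size] for i in range(splits)]

        def separate(blobs, buckets=100):
            # Place each number into a bucket chosen by its value range.
            out = [[] for _ in range(buckets)]
            for blob in blobs:
                for x in blob:
                    out[min(int(x * buckets), buckets - 1)].append(x)
            return out

        def sort_buckets(buckets):
            # Sort each bucket independently (the step that can run in parallel).
            return [sorted(b) for b in buckets]

        def concat(sorted_buckets):
            # Buckets cover increasing value ranges, so concatenation yields a sorted list.
            return [x for b in sorted_buckets for x in b]

        result = concat(sort_buckets(separate(create_and_split(10000))))
        assert result == sorted(result)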

  • Tool to monitor file size, file existence, parse xml, etc

    - by Artur Carvalho
    I'm trying to find a tool that helps me monitor several things. Here are some requirements:
    - Shows results on a web page
    - Checks existence of files/folders
    - Checks sizes of files/folders
    - Can parse XML files
    - Can have several statuses depending on whether it is, for instance, after 9pm
    - Pings workstations/servers to ensure they are on or off
    - Creates daily/weekly/monthly reports (PDF, HTML, CSV)
    - Shows daily/weekly/monthly scheduled tasks
    - Checks if specific users are logged in on a machine
    - Checks which users are logged in on a machine
    I've looked into some solutions but could not find what I wanted. Usually tools like Nagios are more focused on servers, and Spiceworks is not so specific. At this point I'm using a little PowerShell script that does several of these items, but before losing more time, probably reinventing the wheel: what tools are out there? Thank you in advance.
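
    For comparison, the home-grown route is only a few lines per check - here is a tiny Python sketch covering three of the checks above (the path is a hypothetical placeholder), which also shows why it quickly turns into wheel-reinventing once reports and scheduling are added:

        import os
        import xml.etree.ElementTree as ET
        from datetime import datetime

        path = "/data/export.xml"        # hypothetical file to watch
        checks = {
            "exists": os.path.exists(path),
            "size_kb": os.path.getsize(path) // 1024 if os.path.exists(path) else None,
            "after_9pm": datetime.now().hour >= 21,
        }
        if checks["exists"]:
            # Parse the XML just far enough to prove it is well formed.
            checks["xml_root"] = ET.parse(path).getroot().tag
        print(checks)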

  • Checking if your SIMPLE databases need a log backup

    - by Fatherjack
    Hopefully you have read the blog by William Durkin explaining why your SIMPLE databases need a log backup in some cases. There is a SQL Server bug that means in some cases databases are marked as being in SIMPLE RECOVERY but have a log wait type that shows they are not properly configured. Please read his blog for the full explanation and a great description of how to reproduce the issue. As part of our (William happens to be my Boss) work to recover our affected databases I wrote this small PowerShell script to quickly check our servers for databases that needed the attention that William details.
        cls
        $Servers = "Server01","Server02","etc","etc"
        foreach($Server in $Servers){
            write-host "************" $Server "****************"
            $Server = New-Object Microsoft.SqlServer.Management.Smo.Server $Server
            foreach($db in $Server.Databases){
                $db | where {$_.RecoveryModel -eq "Simple" -and $_.LogReuseWaitStatus -ne "Nothing"} | select Name, LogReuseWaitStatus
            }
        }
    If you get any results from this query then you should consult William's blog for the details on what action you should take. This script can give false positives in some circumstances, depending on how busy your databases are. Hopefully this will let you check your servers quickly, and if you find any problems you can reference William's blog to understand what you need to do.

  • What should be the architecture of an urban game system?

    - by pmichna
    I'm going to develop an urban game using a telco API for phone geolocation and sending/receiving messages. A player would pick up one of the scenarios, move around the city, and when he hits a given location he gets a message and possibly has to answer it. I'm wondering what approach would be best in my case. I came up with this general idea: a web application as the user interface (user registration, player rankings, scenario editing) written in Ruby on Rails; a game server (hosting games, game logic like checking player locations, sending and receiving messages) written in Ruby; a database (users, scores, scenarios etc.), probably MySQL or some other open source DB. I want to learn Ruby and RoR, that's why I chose this language and framework. Do you think it's a good choice for a game server? Another question: is this project division good? I mean, I have little experience with Ruby and Rails - that's why I'm asking. Maybe it's better to have the web application merged with the game server and somehow have the server hosting the RoR application do tasks like mobile phone pinging and message sending? How would that be performed? Maybe this is worth mentioning: the API is RESTful, most results are JSON, a few are XML.
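
    Whichever split is chosen, the core of the game loop is just "poll the player's position and fire a message when it falls inside a scenario point's radius". A rough sketch of that check, in Python for illustration only (the real project would do this in Ruby; coordinates, radius, and the scenario structure are made-up placeholders):

        from math import radians, sin, cos, asin, sqrt

        def distance_m(lat1, lon1, lat2, lon2):
            # Haversine great-circle distance in metres.
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 2 * 6371000 * asin(sqrt(a))

        def check_player(player_pos, scenario_points, radius_m=50):
            # Return the scenario points the player has "hit" on this poll.
            return [p for p in scenario_points
                    if distance_m(player_pos[0], player_pos[1], p["lat"], p["lon"]) <= radius_m]

        hits = check_player((52.2297, 21.0122),
                            [{"lat": 52.2299, "lon": 21.0120, "msg": "Clue #1"}])
        print(hits)  # points close enough to trigger a message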

  • SQL Query for Determining SharePoint ACL Sizes

    - by Damon Armstrong
    When a SharePoint Access Control List (ACL) size exceeds 64 KB for a particular URL, the contents under that URL become unsearchable due to limitations in the SharePoint search engine. The error most often seen is The Parameter is Incorrect, which really helps to pinpoint the problem (it's difficult to convey extreme sarcasm here, please note that it is intended). Exceeding this limit is not unheard of - it can happen when users brute-force security into working by continually overriding inherited permissions and assigning user-level access to securable objects. Once you have this issue, determining where you need to focus to fix the problem can be difficult. Fortunately, there is a query that you can run on a content database that can help identify the issue:
        SELECT [SiteId],
               MIN([ScopeUrl]) AS URL,
               SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
               COUNT(*) AS AclEntries
        FROM [Perms] (NOLOCK)
        GROUP BY SiteId
        ORDER BY AclSizeKB DESC
    This query results in a list of ACL sizes and entry counts on a site-by-site basis. You can also change the grouping to see a more granular, URL-by-URL breakdown:
        SELECT [ScopeUrl] AS URL,
               SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
               COUNT(*) AS AclEntries
        FROM [Perms] (NOLOCK)
        GROUP BY ScopeUrl
        ORDER BY AclSizeKB DESC

  • Dialog box keeps asking for password

    - by hossam-khalili
    I'm having the exact same problem. I am running Windows 7 and Outlook 2007 (Office 2007 Pro), and I'm connecting to our Exchange Server 2007, which is part of Small Business Server 2008. Outlook 2007 on a client keeps asking for the password to the remote access URL. If I simply click cancel it's OK for a few minutes. Entering the password and clicking the save PW box does no good. Sometimes clicking cancel results in another dialog box asking the same thing, and I may have to click cancel several times to get it to go away for a while. Occasionally Outlook may actually go into a mode where it says it needs the password typed, so I click the link, which brings the dialog back, but simply clicking cancel will make Outlook connect again. Can anyone help me? Thanks.

  • Can't boot Ubuntu 12.04 from external Hard Drive using Mac

    - by Catgirl the Crazy
    Recently, I upgraded the RAM and hard drive on my Early 2008 MacBook to improve performance. Rather than throw away the old hard drive, I bought an enclosure for it to turn it into an external hard drive, and, since all the data was migrated to my new drive, I decided to install Ubuntu on it for funsies (note: I am a near-total Ubuntu n00b). My first attempt to install Ubuntu didn't work (it gave me errors about not being able to find the BIOS or something), but my second attempt finished successfully (I can't remember what, if anything, I did differently). However, when I plug the external drive into my MacBook, it gives me a message saying it can't read the disk. Moreover, when I go into the Startup Manager (i.e., what you get when you turn on the MacBook while holding the Option key), the external drive is not one of the available startup disks. I thought this might be because I have an older MacBook, so I tried booting it with my mom's Late 2011 MacBook, and got the same results. Then I tried booting it through my dad's Dell laptop that runs Windows 7, and that time it worked. This is really counterintuitive to me, since the hard drive originally came from a MacBook, so if anything you'd think it would be less compatible with the Windows laptop than the MacBook. In case it helps, here's a link to a picture of how I set up the partition table while doing the install (not shown there is the fact that I checked the "Format?" box next to the /boot partition, since it gave me a warning when I tried to continue the installation without doing so). Anyone have any clue at all? If it helps, the hard drive I'm using is a 120GB 5400-rpm Serial ATA hard disk drive.

  • Guest Blog: Secure your applications based on your business model, not your application architecture, by Yaldah Hakim

    - by Darin Pendergraft
    Today's businesses are looking for new ways to engage their customers and embrace mobile applications, while staying in compliance, improving security and driving down costs. For many, the solution to that problem is to host their applications with a Cloud Services provider, but concerns that a hosted application will be less secure continue to cause doubt. Oracle is recognized by Gartner as a leader in the User Provisioning and Identity and Access Governance magic quadrants, and has helped thousands of companies worldwide to secure their enterprise applications and identities. Now those same world-class IDM capabilities are available as a managed service, both for enterprise applications as well as Oracle hosted applications.
    --- Listen to our IDM in the cloud podcast to hear Yvonne Wilson, Director of the IDM Practice in Cloud Service, explain how Oracle Managed Services provides IDM as a service ---
    Selecting Oracle Managed Cloud Services to deploy and manage Oracle Identity Management Services is a smart business decision for a variety of reasons. Oracle hosted Identity Management infrastructure is deployed securely, resilient to failures, and supported by Oracle experts. In addition, Oracle Managed Cloud Services monitors customer solutions from several perspectives to ensure they continue to work smoothly over time. Customers gain the benefit of Oracle Identity Management expertise to achieve predictable and effective results for their organization. Customers can select Oracle to host and manage any number of Oracle IDM products as a service, as well as other Oracle security products, providing a flexible, cost-effective alternative to onsite hardware and software costs. Security is a major concern for all organizations, making it increasingly important to partner with a company like Oracle to ensure consistency and a layered approach to security and compliance when selecting a cloud provider. Oracle Cloud Service makes this possible for our customers by taking away the headache and complexity of managing Identity Management infrastructure and other security solutions.
    For more information: http://www.oracle.com/us/solutions/cloud/managed-cloud-services/overview/index.html
    Twitter - https://twitter.com/OracleCloudZone
    Facebook - http://www.facebook.com/OracleCloudComputing

  • How does iperf calculate throughput and jitter?

    - by Someone
    I've read that iperf basically tries to send as much information down a connection as quickly as possible, reporting on the throughput achieved. This tool is especially useful in determining the volume of data that links between two machines can supply. Is it possible to gather the same results by sending regular data, as in non-test data? What I'm trying to do is this: send data in the foreground while gathering statistics (throughput and jitter) in the background. So can anyone tell me how iperf calculates these two values?
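
    As a rough sketch of how the two values are usually described for iperf's UDP mode (this illustrates the idea, not iperf's actual source): throughput is simply payload delivered per reporting interval, and jitter is the RFC 1889/3550 smoothed difference between consecutive packet transit times. In Python:

        def throughput_bps(bytes_received, interval_seconds):
            # Throughput: payload delivered per reporting interval, in bits per second.
            return bytes_received * 8 / interval_seconds

        def update_jitter(jitter, send_ts, recv_ts, prev_send_ts, prev_recv_ts):
            # RFC 1889-style jitter: smoothed difference of consecutive transit times.
            transit = recv_ts - send_ts
            prev_transit = prev_recv_ts - prev_send_ts
            d = abs(transit - prev_transit)
            return jitter + (d - jitter) / 16.0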

  • Functional Methods on Collections

    - by GlenPeterson
    I'm learning Scala and am a little bewildered by all the methods (higher-order functions) available on the collections. Which ones produce more results than the original collection, which ones produce less, and which are most appropriate for a given problem? Though I'm studying Scala, I think this would pertain to most modern functional languages (Clojure, Haskell) and also to Java 8 which introduces these methods on Java collections. Specifically, right now I'm wondering about map with filter vs. fold/reduce. I was delighted that using foldRight() can yield the same result as a map(...).filter(...) with only one traversal of the underlying collection. But a friend pointed out that foldRight() may force sequential processing while map() is friendlier to being processed by multiple processors in parallel. Maybe this is why mapReduce() is so popular? More generally, I'm still sometimes surprised when I chain several of these methods together to get back a List(List()) or to pass a List(List()) and get back just a List(). For instance, when would I use: collection.map(a => a.map(b => ...)) vs. collection.map(a => ...).map(b => ...) The for/yield command does nothing to help this confusion. Am I asking about the difference between a "fold" and "unfold" operation? Am I trying to jam too many questions into one? I think there may be an underlying concept that, if I understood it, might answer all these questions, or at least tie the answers together.
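
    To make the traversal-count point concrete, here is the comparison sketched in Python (functools.reduce over a reversed list standing in for Scala's foldRight): a map followed by a filter walks the collection twice, while one fold squares and filters in a single pass.

        from functools import reduce

        nums = list(range(10))

        # Two traversals: one for map, one for filter.
        two_pass = [x for x in (n * n for n in nums) if x % 2 == 0]

        # One traversal: a right fold that squares and filters in a single step,
        # mirroring what foldRight(...) does on a Scala List.
        one_pass = reduce(lambda acc, n: ([n * n] + acc) if (n * n) % 2 == 0 else acc,
                          reversed(nums), [])

        assert two_pass == one_pass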

  • Stereo images rectification and disparity: which algorithms?

    - by alessandro.francesconi
    I'm trying to figure out what are currently the two most efficient algorithms that, starting from an L/R pair of stereo images created using a traditional camera (and therefore affected by some epipolar-line misalignment), produce a pair of adjusted images plus their depth information by looking at their disparity. Actually I've found lots of papers about these two methods, like:
    "Computing Rectifying Homographies for Stereo Vision" (Zhang - seems one of the best for rectification only)
    "Three-step image rectification" (Monasse)
    "Rectification and Disparity" (slideshow by Navab)
    "A fast area-based stereo matching algorithm" (Di Stefano - seems a bit inaccurate)
    "Computing Visual Correspondence with Occlusions via Graph Cuts" (Kolmogorov - this one produces a very good disparity map, with occlusion information too, but is it efficient?)
    "Dense Disparity Map Estimation Respecting Image Discontinuities" (Alvarez - toooo long for a first review)
    Could anyone please give me some advice for orienting myself in this wide topic? What kind of algorithm/method should I treat first, considering that I'll work on a very simple input: a pair of left and right images and nothing else, no more information (some papers are based on additional, pre-taken calibration info)? Speaking of working implementations, the only interesting results I've seen so far belong to this piece of software, but only for automatic rectification, not disparity: http://stereo.jpn.org/eng/stphmkr/index.html I tried the "auto-adjustment" feature and it seems really effective. Too bad there is no source code...
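
    For the disparity side, the "area-based" family that the Di Stefano paper belongs to is easy to prototype: for each pixel of the rectified left image, slide a window along the same row of the right image and keep the offset with the smallest sum of absolute differences (SAD). A deliberately naive Python/NumPy sketch (window size and disparity range are illustrative; real implementations are far more optimised and handle occlusions):

        import numpy as np

        def sad_disparity(left, right, max_disp=64, win=5):
            # left, right: rectified greyscale images as float arrays of equal shape.
            h, w = left.shape
            half = win // 2
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(half, h - half):
                for x in range(half + max_disp, w - half):
                    patch = left[y-half:y+half+1, x-half:x+half+1]
                    costs = [np.abs(patch - right[y-half:y+half+1, x-d-half:x-d+half+1]).sum()
                             for d in range(max_disp)]
                    disp[y, x] = int(np.argmin(costs))  # offset with the lowest SAD cost
            return disp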

  • Kubuntu 12.10 will not boot on Mac 2.93GHz Intel Core 2 Duo

    - by Jake Sweet
    I feel like I've tried it all and nothing is changing. I've tried booting from a liveUSB and a liveDVD, and I've checked the md5 - everything matches up. I've even tried different distros, with the same result on all of them. Just for reference: Linux Mint 13 KDE and Fedora 17. I've also tried changing my liveUSB building software just in case - I've tried UNetbootin and Linux USB builder. Both have the same results, so my opinion is that it is a hardware issue, since I'm having nearly the same result with all of these variables. So now what is actually happening? I can boot up to a screen. I say A screen because some of the ways that DVDs and USBs boot differ. Now on the liveUSB I'm reaching a black screen with white text. It says booting: done, then below it says loading ramdrive: done, then below that it says preparing to boot kernel, this may take a while, and buckle in, or something to that effect. Then nothing. That's it, the computer freezes. I've waited up to 8 hrs and still nothing. OK, for the liveDVD: everything goes according to the instructions in the PDF files for every distro, until Linux starts. I can only run in compatibility mode. When any other option is tried the computer seems to freeze/stall/be a pain in my butt... OK, well that seems to wrap it up. Also, if I'm not explaining something well, I'm sorry - I can try to clear anything up; I'm not the best at descriptions. I'm leaving you with the tech specs of my Mac: 2.93GHz Intel Core 2 Duo, 4 GB RAM, NVIDIA GeForce GT 120 graphics, bought in late '09, it's the 24" model. Let me know if any more information will help. Also, thanks in advance.

  • How to change Linux hot key [migrated]

    - by Willie Wright Jr
    I would like to know if it is possible to change the hot key on a Linux distribution. I have already tried to re-configure the hot keys in the VirtualBox preferences but I have not been able to accomplish the results I am looking for. I am currently running Antergos Linux in a VirtualBox and I would like to change the hot key to match the ⌘ key of a MacBook Pro to improve workflow, and swap the Workstation hot key to Ctrl. Overall I would like to be able to save files in the virtual machine using ⌘+S, ⌘+R, etc.

  • How do I get grep color in the file names before each match?

    - by chimerical
    If I run grep -ir "somethingtomatch" . from the current directory, I typically get results like this: ./some/path/file1.html: filecontent filecontent keyword filecontent ./some/path/file2.html: filecontent filecontent filecontent keyword ./some/path/file3.html: filecontent keyword filecontent filecontent ./some/path/file4.html: keyword filecontent filecontent filecontent I used grep --color=auto -ir 'somethingtomatch" . but it only highlights the keywords in white on a red highlight. I'm trying to get file names on the left color-coded too. How do I do that? I'm using Terminal.app in OS X with bash and xterm (and I tried xterm-color too).

  • LEMP Stack on Ubuntu Server 13.04 not parsing PHP Switch Statement Properly

    - by schester
    On my Ubuntu 12.04 Server LTS on nginx 1.1.19, the following PHP code works properly:
        switch($_SESSION['user']['permissions']) {
            case 9: echo "Super Admin Privileges"; break;
            case 0: echo "Operator Privileges"; break;
            case 1: echo "Line Leader Privileges"; break;
            case 2: echo "Supervisor Privileges"; break;
            case 3: echo "Engineer Privileges"; break;
            case 4: echo "Manager Privileges"; break;
            case 5: echo "Administrator Privileges"; break;
            default: echo "Operator Privileges";
        }
    However, I have a backup server running Ubuntu Server 13.04 on nginx 1.4.1 which has the exact same copy of the script (synced), but instead of breaking on the break; command, it echoes the whole PHP script. The output on the 12.04 box is similar to this:
        You are logged in with Super Admin Privileges
    But on the 13.04 box, the output is like this:
        You are logged in logged in with Super Admin Privileges"; break; case 0: echo "Operator Privileges"; break; case 1: echo "Line Leader Privileges"; break; case 2: echo "Supervisor Privileges"; break; case 3: echo "Engineer Privileges"; break; case 4: echo "Manager Privileges"; break; case 5: echo "Administrator Privileges"; break; default: echo "Operator Privileges"; } ?>
    I have also tried changing the script from a switch statement to if statements, but with the same results. Any idea what is wrong?

  • Apache, modifying response codes from 404 to 301

    - by user72539
    Hi, I'm running a Magento installation on an Apache server. There are many pages indexed in Google and linked to from external sites. I can't use 301 redirects in a .htaccess file as I can't be sure I will catch all the links. At the moment all requests are rewritten through Magento, and if a request isn't found Magento returns a 404 File Not Found. Is there a way of using one of the Apache modules to filter the response* from Magento and, if a 404 Not Found is being sent back, replace the response with a standard 301 Redirect to the home page? E.g.: request to Magento -- Apache -- rewrite to the Magento index.php page -- page processed. Response: if the request exists -- return results (200); if the request doesn't exist -- return 404 -- Apache filter changes the response -- return a 301 redirect to /. I appreciate any help. Thanks, Jon. * As far as I am aware, mod_rewrite is only used to rewrite requests and doesn't allow the modification of responses.

  • Running evrouter at boot with init.d, or after xserver starts

    - by J V
    I'm using evrouter to set up mouse button binds, and init.d to start it. My init.d file:
        #!/bin/bash
        #Simple init.d script to run evrouter
        ### BEGIN INIT INFO
        # Provides:          evrouter
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Set evrouter bindings
        # Description:       Set evrouter bindings at boot time.
        ### END INIT INFO
        config="/opt/hacks/evrouterrc"
        case "$1" in
            start|restart|reload|force-reload)
                evrouter -c "$config" /dev/input/event*
                ;;
            stop)
                echo "Evrouter is not a daemon, change settings file at '$config' and restart"
                ;;
            *)
                echo "Usage: $0 start" >&2
                exit 3
                ;;
        esac
    evrouter however complains that: evrouter: could not open display "". If evrouter requires xserver to be up, how do I get init to wait until after xserver starts to run this script? If xserver restarts, will this script run automatically? Running this with sudo services evrouter start still results in this error - can init.d scripts not tell where my display is? (Not exactly familiar with init, runlevels, etc.)

  • The Hot-Add Memory Hogs

    - by Andrew Clarke
    One of the more difficult tasks when virtualizing a server is to determine the amount of memory that the hypervisor should assign to the virtual machine. This requires accurate monitoring and, because of the consequences of setting the value too low, there is a great temptation to err on the side of over-provisioning. This results in fewer guest VMs and, in fact, with more accurate memory provisioning, many virtual environments could support 30% more VMs. In order to achieve a better consolidation (aka VM density) ratio, Windows Server 2008 R2 SP1 has introduced what Microsoft calls ‘Dynamic Memory’. This means that the start-up RAM assigned to guest virtual machines can be allowed to vary according to demand, changing dynamically while the VM is running, based on the workload of the applications running inside. If demand outstrips supply, then memory can be rationed according to the ‘memory weight’ assigned to the guest VM. By this mechanism, memory becomes a shared resource that can be reallocated automatically as demand patterns vary. Unlike VMware’s Memory Overcommit technology, the sum of all the memory allocations to each virtual machine will not exceed the total memory of the host computer. This is fine for applications that are self-regulating in their demands for memory, releasing memory back into the ‘pool’ when not under peak load. Other applications, however, such as SQL Server Standard and Enterprise, are by nature memory hogs under high workload; they can grab hot-add memory whilst running under load and then never release it. This requires more careful setting-up, and the SQLOS team have provided some guidelines for configuring SQL Server in virtual environments. Whereas VMware’s Memory Overcommit is well proven in a number of different configurations, Hyper-V’s ‘Dynamic Memory’ is new. So far, the indications are that it will improve the business case for virtualizing, and it is probably a far more intuitive technology for the average IT professional to grasp. It is certainly worth testing to see whether it works for you.
