Search Results

Search found 7898 results on 316 pages for 'canada dev'.


  • make IIS 7.5 cache static content files over different pages

    - by Achilles
    On Windows Server 2008 R2, using DNS and IIS, I've set up my development test server; i.e. I have a web application that I can browse at http://test.dev. I've moved all the static content files (images, js files and css files) into another application, which is visible at http://cdn.test.dev. test.dev uses cdn.test.dev URLs like http://cdn.test.dev/js/jquery.js to load js, css and images. When I first load "~/" of test.dev, all files load with a response code of 200; when I press F5 in Firefox, all files except "~/default.aspx" load with a 304 response code; but pressing Ctrl+F5 loads them again with a 200; and if I browse another URL like "~/pages/" on test.dev, all of those static files reload with a 200... Is this normal, or am I doing something wrong?

    Actually, I'm looking for behavior like this: I want the client to load http://cdn.test.dev/js/jquery.js only once, and I want the client's browser to use this jquery.js file, from cache, on all other pages of test.dev. Is this possible? This is the web.config file I have in the root directory of cdn.test.dev:

        <configuration>
          <system.webServer>
            <caching>
              <profiles>
                <add extension=".png" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".gif" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".jpg" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".js" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".css" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".axd" kernelCachePolicy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
              </profiles>
            </caching>
            <httpProtocol allowKeepAlive="true">
              <customHeaders>
                <add name="Cache-Control" value="public, max-age=31536000" />
              </customHeaders>
            </httpProtocol>
            <validation validateIntegratedModeConfiguration="false" />
            <modules runAllManagedModulesForAllRequests="true">
              <remove name="RadUploadModule" />
              <remove name="RadCompression" />
              <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" preCondition="integratedMode" />
              <add name="RadCompression" type="Telerik.Web.UI.RadCompression" preCondition="integratedMode" />
            </modules>
            <handlers>
              <remove name="ChartImage_axd" />
              <remove name="Telerik_Web_UI_SpellCheckHandler_axd" />
              <remove name="Telerik_Web_UI_DialogHandler_aspx" />
              <remove name="Telerik_RadUploadProgressHandler_ashx" />
              <remove name="Telerik_Web_UI_WebResource_axd" />
              <add name="ChartImage_axd" path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_SpellCheckHandler_axd" path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_DialogHandler_aspx" path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_RadUploadProgressHandler_ashx" path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_WebResource_axd" path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" preCondition="integratedMode" />
            </handlers>
            <security>
              <requestFiltering>
                <requestLimits maxAllowedContentLength="10485760" />
              </requestFiltering>
            </security>
            <staticContent>
              <clientCache cacheControlMode="UseExpires" httpExpires="Wed, 01 Jan 2020 00:00:00 GMT" />
            </staticContent>
          </system.webServer>
          <appSettings />
          <system.web>
            <compilation debug="false" targetFramework="4.0" />
            <pages>
              <controls>
                <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI" />
              </controls>
            </pages>
            <httpHandlers>
              <add path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" validate="false" />
              <add path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" validate="false" />
            </httpHandlers>
            <httpModules>
              <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" />
              <add name="RadCompression" type="Telerik.Web.UI.RadCompression" />
            </httpModules>
            <httpRuntime maxRequestLength="10240" />
          </system.web>
        </configuration>

    and these are the resulting response headers for http://cdn.test.dev/css/global.css:

        Cache-Control: private,public, max-age=31536000
        Content-Type: text/css
        Content-Encoding: gzip
        Expires: Wed, 01 Jan 2020 00:00:00 GMT
        Last-Modified: Mon, 06 Sep 2010 08:53:06 GMT
        Accept-Ranges: bytes
        Etag: "0454eca04dcb1:0"
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 06 Sep 2010 14:57:08 GMT
        Content-Length: 4495
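    A hedged observation rather than a confirmed fix: the <customHeaders> entry appends a second Cache-Control header on top of the one the server already emits, which is where the contradictory "private,public" value in the response comes from, and that can defeat client caching. A minimal sketch of the usual approach is to let <clientCache> set max-age itself and drop the custom header (the one-year value below is illustrative):

        <system.webServer>
          <staticContent>
            <!-- one year, expressed as d.hh:mm:ss; replaces the UseExpires setting -->
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
          </staticContent>
          <httpProtocol>
            <customHeaders>
              <!-- drop the manually added Cache-Control header -->
              <remove name="Cache-Control" />
            </customHeaders>
          </httpProtocol>
        </system.webServer>

    Note also that the F5 behavior described is normal: F5 forces revalidation (hence the 304s) regardless of max-age, while plain navigation between pages is what serves files straight from cache.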

    Read the article

  • It's About You: Tell Microsoft How They're Doing!

    - by juanlarios
    Every fall and spring, a survey goes out to a few hundred thousand IT folk in Canada asking what they think of Microsoft as a company. The information they get from this survey helps them understand what problems and issues you're facing and how they can do better. The team at Microsoft Canada takes the input they get from this survey very seriously. Now, I don't know which of you will get the survey and who won't, but if you do find an email in your inbox from "Microsoft Feedback" with an email address of "[email protected]" and a subject line of "Help Microsoft Focus on Customers and Partners" between now and April 13th, it's not a hoax or a phishing email. Please open it and take a few minutes to tell them what you think. This is your chance to have your voice heard: if they're doing well, feel free to pile on the kudos (they love positive feedback!), and if you see areas they can improve, please point them out so they can make adjustments (they also love constructive criticism!). The Microsoft team would like to thank you for all your feedback in the past, both from those of you who have filled out the survey and those who have sent them emails. Thank you to all who engage with them in so many different ways through events, the blogs, online and in person. You are why they do what they do, and they feel lucky to work with such a great community! One last thing: even if you don't get the survey, you can always give the team feedback by emailing them directly through the Microsoft Canada IT Pro Feedback email address. They want to make sure they are serving you in the best possible way. Tell them what you want more of. What should they do less of or stop altogether? How can they help? Do you want more cowbell? Let them know through the survey or the email alias. They love hearing from you!

    Read the article

  • All New MySQL For Beginners Training on Demand Offering

    - by Antoinette O'Sullivan
    Get started on MySQL for Beginners training within 24 hours with the newly released MySQL for Beginners Training on Demand. With Training on Demand, you get:

    - Training by top MySQL instructors
    - Access to a hands-on practice environment
    - Full classroom content available 24/7
    - And no travel expenses to worry about

    The MySQL for Beginners course covers all the basics and gets you on your way with a solid foundation. This hands-on class covers the fundamentals of SQL and relational databases, using MySQL as a teaching tool. In addition to the Training on Demand option, you have the choice of taking the MySQL for Beginners course as:

    Live Virtual Training: live, interactive, online training delivered by a MySQL instructor to you anywhere you have an internet connection, with hundreds of events on the schedule across different timezones.

    In-Classroom Training: scheduled events include those listed below:

        Location                   Date               Delivery Language
        Warsaw, Poland             16 July 2012       Polish
        Dublin, Ireland            15 October 2012    English
        Belfast, Ireland           28 August 2012     English
        Rome, Italy                5 November 2012    Italian
        Hamburg, Germany           3 December 2012    German
        Lisbon, Portugal           5 November 2012    European Portuguese
        Amsterdam, Netherlands     10 December 2012   Dutch
        Nieuwegein, Netherlands    17 September 2012  Dutch
        Barcelona, Spain           5 November 2012    Spanish
        Riga, Latvia               15 July 2012       Latvian
        Petaling Jaya, Malaysia    7 August 2012      English
        Ottawa, Canada             7 August 2012      English
        Toronto, Canada            7 August 2012      English
        Montreal, Canada           7 August 2012      English
        Sao Paulo, Brazil          10 July 2012       Brazilian Portuguese

    For more information on any of the MySQL for Beginners training options, or to learn more about the authorized MySQL curriculum, go to the Oracle University portal and click on MySQL.

    Read the article

  • OSX: How to get a volume name (or BSD name) from an IOUSBDeviceInterface or location id

    - by LG
    Hi, I'm trying to write an app that associates a particular USB string descriptor (of a USB mass storage device) with its volume or BSD name. So the code goes through all the connected USB devices, gets the string descriptors and extracts information from one of them. I would like to get the volume name of those USB devices, but I can't find the right API to do that. I have tried this:

        DASessionRef session = DASessionCreate(kCFAllocatorDefault);
        DADiskRef disk_ref = DADiskCreateFromIOMedia(kCFAllocatorDefault, session, usb_device_ref);
        const char* name = DADiskGetBSDName(disk_ref);

    But the DADiskCreateFromIOMedia function returned nil; I assume the usb_device_ref I passed was not compatible with the io_service_t the function expects. So how can I get the volume name of a USB device? Could I use the location id to do that? Thanks for reading. -L

        FOO_Result result = FOO_SUCCESS;
        mach_port_t master_port;
        kern_return_t k_result;
        io_iterator_t iterator = 0;
        io_service_t usb_device_ref;
        CFMutableDictionaryRef matching_dictionary = NULL;

        // first create a master_port
        if (FOO_FAILED(k_result = IOMasterPort(MACH_PORT_NULL, &master_port))) {
            fprintf(stderr, "could not create master port, err = %d\n", k_result);
            goto cleanup;
        }
        if ((matching_dictionary = IOServiceMatching(kIOUSBDeviceClassName)) == NULL) {
            fprintf(stderr, "could not create matching dictionary, err = %d\n", k_result);
            goto cleanup;
        }
        if (FOO_FAILED(k_result = IOServiceGetMatchingServices(master_port, matching_dictionary, &iterator))) {
            fprintf(stderr, "could not find any matching services, err = %d\n", k_result);
            goto cleanup;
        }
        while ((usb_device_ref = IOIteratorNext(iterator))) {
            IOReturn err;
            IOCFPlugInInterface **iodev;    // requires <IOKit/IOCFPlugIn.h>
            IOUSBDeviceInterface **dev;
            SInt32 score;

            err = IOCreatePlugInInterfaceForService(usb_device_ref, kIOUSBDeviceUserClientTypeID,
                                                    kIOCFPlugInInterfaceID, &iodev, &score);
            if (err || !iodev) {
                printf("dealWithDevice: unable to create plugin. ret = %08x, iodev = %p\n", err, iodev);
                return;
            }
            err = (*iodev)->QueryInterface(iodev, CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID), (LPVOID*)&dev);
            (*iodev)->Release(iodev); // done with this

            FOO_String string_value;
            UInt8 string_index = 0x1;
            FOO_Result result = FOO_SUCCESS;
            CFStringRef device_name_as_cf_string;
            do {
                if (FOO_SUCCEEDED(result = FOO_GetStringDescriptor(dev, string_index, 0, string_value))) {
                    printf("String at index %i is %s\n", string_index, string_value.GetChars());
                    // extract the command code if it is the FOO string
                    if (string_value.CompareN("FOO:", 4) == 0) {
                        FOO_Byte code = 0;
                        FOO_HexToByte(string_value.GetChars() + 4, code);

                        // Get other relevant information from the device
                        io_name_t device_name;
                        UInt32 location_id = 0;

                        // Get the USB device's name.
                        err = IORegistryEntryGetName(usb_device_ref, device_name);
                        device_name_as_cf_string = CFStringCreateWithCString(kCFAllocatorDefault, device_name,
                                                                             kCFStringEncodingASCII);
                        err = (*dev)->GetLocationID(dev, &location_id);
                        // TODO: get volume or BSD name
                        // add the device to the list
                        break;
                    }
                }
                string_index++;
            } while (FOO_SUCCEEDED(result));

            err = (*dev)->USBDeviceClose(dev);
            if (err) {
                printf("dealWithDevice: error closing device - %08x\n", err);
                (*dev)->Release(dev);
                return;
            }
            err = (*dev)->Release(dev);
            if (err) {
                printf("dealWithDevice: error releasing device - %08x\n", err);
                return;
            }
            IOObjectRelease(usb_device_ref); // no longer need this reference
        }
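    One way to bridge the gap, as a hedged sketch rather than a confirmed answer from the original thread: DADiskCreateFromIOMedia expects an io_service_t of class IOMedia, not the IOUSBDevice itself, so you can walk the I/O Registry down from the USB device until an IOMedia node appears and read its "BSD Name" property (or hand that node to DADiskCreateFromIOMedia). The helper name below is hypothetical:

        #include <IOKit/IOKitLib.h>
        #include <IOKit/IOBSD.h>              // kIOBSDNameKey
        #include <IOKit/storage/IOMedia.h>    // kIOMediaClass

        // Hypothetical helper: find the IOMedia child of a USB device and
        // copy its BSD name (e.g. "disk2s1"). Caller releases the result.
        static CFStringRef CopyBSDNameForUSBDevice(io_service_t usb_device_ref)
        {
            CFStringRef bsd_name = NULL;
            io_iterator_t children = 0;
            // Iterate recursively: the IOMedia node sits several levels below
            // the IOUSBDevice (interface -> SCSI/storage drivers -> IOMedia).
            kern_return_t kr = IORegistryEntryCreateIterator(usb_device_ref,
                                                             kIOServicePlane,
                                                             kIORegistryIterateRecursively,
                                                             &children);
            if (kr != KERN_SUCCESS)
                return NULL;

            io_object_t child;
            while ((child = IOIteratorNext(children))) {
                if (IOObjectConformsTo(child, kIOMediaClass)) {
                    bsd_name = IORegistryEntryCreateCFProperty(child,
                                                               CFSTR(kIOBSDNameKey),
                                                               kCFAllocatorDefault, 0);
                    IOObjectRelease(child);
                    break;
                }
                IOObjectRelease(child);
            }
            IOObjectRelease(children);
            return bsd_name;
        }

    With the BSD name (or the IOMedia service itself) in hand, the Disk Arbitration calls from the first snippet should then succeed.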

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it, and certainly a lot of misinformation out there, for no good reason. Let me try to give a bit of history around Oracle ASMLib.

    Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool, important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc., etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices just as it would open and operate on regular datafiles or raw devices. So by default, from 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.

    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device, with datafile '/dev/sdf'), and the next time you boot up that device is now /dev/sdg, you end up with an error. Just as you can't simply change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):

    - Linux async IO wasn't pretty
    - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
    - system resource usage in terms of open file descriptors was a concern

    So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux? A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library, using a specific Oracle-defined interface, that gets used by the ASM instance and by the database instance when available. This interface offers two components:

    - an IO interface: allow any IO to the devices to go through ASMLib
    - device discovery: implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager). ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines, so that the database would use this library, if installed, and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution; Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access; you have libasm for ASM/database access.

    Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are two components:

    - Oracle ASMLib: the userspace library with config tools (a shared object and some scripts)
    - oracleasm.ko: a kernel module that implements the ASM device for /dev/oracleasm/*

    The userspace library is a binary-only module, since it links with and contains Oracle header files, but it is generic; we have only one ASM library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the ASM device (/dev/asm). It can install on Oracle Linux, on SuSE SLES, on Red Hat RHEL... The library itself doesn't actually care much about the OS version; the kernel module and device do. The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can create an ASM disk labeled foo on what is currently /dev/sdf... so if /dev/sdf disappears and next time comes up as /dev/sdg, we just scan for the label foo, discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it will be taken care of. So it's a convenience thing.

    The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device, /dev/oracleasm/*, and any and all IO goes through ASMLib via /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2, so anyone could easily build it for their Linux distribution kernels. Advantages of using ASMLib:

    - A good async IO interface for the database; the entire IO interface is based on an optimal async model for performance
    - A single file descriptor per Oracle process, not one per device or datafile per process, reducing the overhead of open filehandles
    - Device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions or the like, which can be very complex and error prone

    Just as with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact, in the early days, even for this thing called United Linux) and RHEL. The driver didn't make sense to push into upstream Linux, because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA and release management to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a 60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL, and of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us, because we just added the code to our build system, and as we churned out Oracle Linux kernels, whether for a public release or for customers that needed a one-off fix and also used ASMLib, we didn't have to do any extra work; it was all nicely integrated.

    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel; note, to those that wonder: yes, it's all in mainline Linux and under the GPL. This basically gave us all the features in the Linux kernel to checksum a data block, send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. So what was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block which then has a checksum and validation all the way down to the disk: application to disk. Because we have ASMLib, we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it so we can make use of it. Thanks to UEK, and to our ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the asm kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and providing it to our customers.

    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but as of RHEL6 decided that it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity features, and we didn't want to end up with multiple versions of ASMLib as maintained by us. SuSE SLES still builds and ships the oracleasm module, and they do all the work; Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.

    And finally, to reiterate a few important things:

    - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Customers often confuse ASMLib with ASM; again, ASM exists on every Oracle-supported OS and on every supported Linux OS (SLES, RHEL, OL) without ASMLib.
    - The Oracle ASMLib userspace library is available on OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
    - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel; only for UEK.
    - ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    - ASMLib as provided by Oracle for Linux continues to be enhanced and to evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.
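    For concreteness, the device-labeling workflow described above looks roughly like this on a configured system (a hedged sketch; the disk name DATA1 and device path are illustrative):

        # one-time setup of the driver (interactive; sets owner/group for the devices)
        /usr/sbin/oracleasm configure -i

        # label a partition so ASM can find it regardless of its /dev/sd* name
        /usr/sbin/oracleasm createdisk DATA1 /dev/sdf1

        # after a reboot, or on other cluster nodes: rediscover labeled disks
        /usr/sbin/oracleasm scandisks
        /usr/sbin/oracleasm listdisks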

    Read the article

  • Visual Studio Talk Show #119 is now online - When and in what context is "adequate" adequate? (in French)

    - by guybarrette
    http://www.visualstudiotalkshow.com - Joel Quimper: When and in what context is "adequate" adequate? (Interview conducted in French.) We talk with Joel Quimper about development practices and the bad habit of wanting to abstract and generalize everything. An application only delivers real value once it is used by actual users. So where do you draw the line between over-design, extensibility and reusability? Joel Quimper is an architecture advisor at Microsoft Canada. He works primarily with architects at large enterprises in Eastern Canada to help their organizations reach their full potential. Joel has extensive experience designing service-oriented solutions using web services. He is passionate about interoperability with the .NET platform. Before joining Microsoft, he worked for 10 years at IBM Canada in several roles, most recently as a WebSphere integration architect. He has worked with many clients on successful SOA implementations.

    Read the article

  • UNIX - mount: only root can do that

    - by Travesty3
    I need to allow a non-root user to mount/unmount a device. I am a total noob when it comes to UNIX, so please dumb it down for me. I've been looking all over teh interwebz to find an answer, and it seems everyone is giving the same one, which is to modify /etc/fstab to include that device with the 'user' option (or 'users'; tried both). Cool, well I did that and it still says "mount: only root can do that". Here are the contents of my fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'vol_id --uuid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # proc  /proc  proc  defaults  0  0
        # / was on /dev/mapper/minicc-root during installation
        UUID=1a69f02a-a049-4411-8c57-ff4ebd8bb933  /      ext3  relatime,errors=remount-ro  0  1
        # /boot was on /dev/sda5 during installation
        UUID=038498fe-1267-44c4-8788-e1354d71faf5  /boot  ext2  relatime                    0  2
        # swap was on /dev/mapper/minicc-swap_1 during installation
        UUID=0bb583aa-84a8-43ef-98c4-c6cb25d20715  none   swap  sw                          0  0
        /dev/scd0  /media/cdrom0   udf,iso9660  user,noauto,exec,utf8     0  0
        /dev/scd0  /media/floppy0  auto         rw,user,noauto,exec,utf8  0  0
        /dev/sdb1  /mnt/sdcard     auto         auto,user,rw,exec         0  0

    My thumb drive partition shows up as /dev/sdb1. I'm pretty sure my fstab is set up OK, but everyone on the other posts seems to fail to mention how they actually call the 'mount' command once this entry is in the fstab file. I think this is where my problem may be. The command I use to mount the drive is:

        $ mount /dev/sdb1 /mnt/sdcard

    /bin/mount is owned by root, is in the root group, and has 4755 permissions. /bin/umount is owned by root, is in the root group, and has 4755 permissions. /mnt/sdcard is owned by me, is in one of my groups, and has 0755 permissions. My mount command works fine if I use sudo, but I need to be able to do this without sudo (I need to be able to do it from a PHP script using shell_exec). Any suggestions? Sorry for making you read so much... just trying to get as much info into the initial post as possible to preemptively answer questions about configuration stuff. If I missed anything tho, ask away. Thanks! -Travis
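    The asker's own hunch about the mount invocation is likely right: mount(8) only honors the 'user' option when the command names just the mount point or just the device, exactly as listed in fstab. Supplying both arguments is treated as an explicit mount that bypasses the fstab lookup and therefore requires root. A minimal sketch:

        # either of these picks up the fstab entry and its 'user' flag:
        $ mount /mnt/sdcard
        $ mount /dev/sdb1

        # this form bypasses the fstab entry and fails for non-root users:
        $ mount /dev/sdb1 /mnt/sdcard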

    Read the article

  • Ubuntu hard drive repartition without uninstalling Ubuntu or Windows 7 or losing data on the hard drive

    - by user141692
    I have an Asus R500V with a 750 GB GPT disk, a UEFI motherboard, a Core i7 3610QM and an NVIDIA GeForce GT, set up to dual-boot Ubuntu and Windows 7. I had problems installing Ubuntu because of GRUB, but I fixed that with https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/807801. However, I still get the warning "the partition is misaligned by 3072 bytes. This may result in very poor performance. Repartitioning is suggested" on every Linux partition I made, and my 750 GB disk is not being used to maximum capacity; only 698 GB is usable. I want to repartition so that the warning goes away and I can use the full capacity of the HDD, as I did on another dual-boot laptop (a Compaq Presario CQ40). I have the following partitions:

    - unknown, 1.0 MB: partition type: Linux Basic Data Partition; device: /dev/sda2. Warning: misaligned by 3072 bytes; this may result in very poor performance; repartitioning is suggested.
    - SYSTEM, 210 MB FAT: usage: filesystem; partition type: EFI System Partition; label: SYSTEM; device: /dev/sda1; partition label: EFI System Partition; mounted at /boot/efi.
    - 134 MB NTFS: usage: filesystem; partition type: Linux Basic Data Partition; device: /dev/sda7; not mounted.
    - OS, 250 GB NTFS: usage: filesystem; partition type: Linux Basic Data Partition; label: OS; device: /dev/sda3; partition label: Basic Data Partition; not mounted.
    - 10 GB FAT32: usage: filesystem; partition type: EFI System Partition; device: /dev/sda4; not mounted. Warning: misaligned by 3072 bytes.
    - 10 GB ext4: usage: filesystem; partition type: Linux Basic Data Partition; type: EXT4 (version 1); device: /dev/sda9; mounted at /. Warning: misaligned by 1536 bytes.
    - 478 GB ext4: usage: filesystem; partition type: Linux Basic Data Partition; device: /dev/sda5; mounted at /home. Warning: misaligned by 512 bytes.
    - 2.0 GB swap: usage: swap space; partition type: Linux Swap Partition; device: /dev/sda6. Warning: misaligned by 512 bytes.

    As you can see, it is not well organized, so please help me organize the partitions without uninstalling Windows 7 and, if possible, without breaking GRUB 2.
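    As a hedged starting point before moving anything: parted can report exactly which partitions are misaligned, which helps decide what actually needs to move (partition numbers below are examples):

        $ sudo parted /dev/sda
        (parted) align-check optimal 1     # repeat for each partition number
        1 aligned
        (parted) quit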

    Read the article

  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions mis-aligned. I can't find anything that tells me how to actually quantify the effects of mis-aligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%), and others say it's negligible. What I want to do is determine if partition alignment is a factor in this server's performance problems or not; and if so, to what degree? So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed mis-aligned, and if so, help me put a number to what the performance cost is.

    Server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5" 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. The RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as an LSI MegaSAS 9260). The OS is CentOS 5.6; the home directory partition is ext3, with options "rw,data=journal,usrquota". I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share:

        [root@lnxutil1 ~]# parted -s /dev/sda unit s print
        Model: DELL PERC H700 (scsi)
        Disk /dev/sda: 134217599s
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start    End         Size        Type     File system  Flags
         1      63s      465884s     465822s     primary  ext2         boot
         2      465885s  134207009s  133741125s  primary               lvm

        [root@lnxutil1 ~]# parted -s /dev/sdb unit s print
        Model: DELL PERC H700 (scsi)
        Disk /dev/sdb: 5720768639s
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start  End          Size         File system  Name  Flags
         1      34s    5720768606s  5720768573s               lvm

    Edit 1: Using the cfq IO scheduler (default for CentOS 5.6):

        # cat /sys/block/sd{a,b}/queue/scheduler
        noop anticipatory deadline [cfq]
        noop anticipatory deadline [cfq]

    Chunk size is the same as strip size, right? If so, then 64kB:

        # /opt/MegaCli -LDInfo -Lall -aALL -NoLog
        Adapter #0

        Number of Virtual Disks: 2
        Virtual Disk: 0 (target id: 0)
        Name:os
        RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
        Size:65535MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:7
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default
        Number of Spans: 1
        Span: 0 - Number of PDs: 7
        ... physical disk info removed for brevity ...
        Virtual Disk: 1 (target id: 1)
        Name:share
        RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
        Size:2793344MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:7
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default
        Number of Spans: 1
        Span: 0 - Number of PDs: 7

    If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).
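    For what it's worth, the alignment question can be answered from the output above by arithmetic alone (a back-of-the-envelope check): a partition lines up with the 64 kB strip when its start offset in bytes is a multiple of 65,536.

        /dev/sda1: 63 s      x 512 B = 32,256 B       not a multiple of 65,536  ->  misaligned
        /dev/sda2: 465,885 s x 512 B = 238,533,120 B  not a multiple of 65,536  ->  misaligned
        /dev/sdb1: 34 s      x 512 B = 17,408 B       not a multiple of 65,536  ->  misaligned

    So every partition here starts mid-strip, which means single writes near strip boundaries can touch two strips instead of one.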

    Read the article

  • ColdFusion Server crash after thousands of HTTP requests

    - by Jason Bristol
    We are running ColdFusion 8 on a Windows Server 2003 VPS, with an API that exposes student records to a partner API through a connector. Our API returns around 50k student records serialized in XML format pretty seamlessly. My question originates from something very frightening that happened today when we tested our connector against our partner's API: our entire website and web host went down. We assumed that our host was just having some issues, and after 4 hours with no resolution and no response from their customer service, we finally got a response from them claiming that they had an "unauthorized user" in their network. After our server was back up, we were unable to connect to our website, as if the web service or ColdFusion itself had frozen. This is really where my concern comes from, as I fear we may have overloaded the web service. As I mentioned before, we tried sending over 50k HTTP POST requests to our partner's API; however, everything stopped after around 1.6k. Is this bad practice, or is there some sort of rate limiting I can relax somewhere in the server configuration? We managed to find a workaround, but it bypasses our connector, which is critical to our design. This would have been a one-time deal, as the purpose of so many requests was to populate our partner's website with current data; after that, hourly syncs will keep requests down to around 100 per hour.

    UPDATE: Our partner API is owned and operated by Pardot. We are converting students to prospects by passing student data to their API, which unfortunately seems to accept only one student at a time. For that reason we have to do all 50k requests individually. Our server has 4GB of RAM and an Intel Core 2 Duo @ 2.8GHz, running Windows Server 2003 SP2. I monitored the server during a 100-student sync, a 400-student sync, and a 1.4k-student sync, with the following results:

    - 100 students: 2.25GB of memory, 30-40% CPU utilization, 0.2-0.3% network bandwidth
    - 400 students: 2.30GB of memory, 30-50% CPU utilization, 0.2-1.0% network bandwidth
    - 1.4k students: 2.30GB of memory, 30-70% CPU utilization, 0.2-1.0% network bandwidth

    I know this is a far cry from 50k students, but I don't want to risk taking down our CMS system again, assuming that was the cause. To give you a look at our code:

        <cfif (#getStudents.statusCode# eq "200 OK")>
            <cftry>
                <cfloop index="StudentXML" array="#XmlSearch(responseSTUD,'/students/student')#">
                    <cfset StudentXML = XmlParse(StudentXML)>
                    <cfhttp url="#PARDOT_CMS_UPSERT#" method="post" timeout="10000">
                        <cfhttpparam type="url" name="user_key" value="#PARDOT_CMS_USERKEY#">
                        <cfhttpparam type="url" name="api_key" value="#api_key#">
                        <cfhttpparam type="url" name="email" value="#StudentXML.student.email.XmlText#">
                        <cfhttpparam type="url" name="first_name" value="#StudentXML.student.first.XmlText#">
                        <cfhttpparam type="url" name="last_name" value="#StudentXML.student.last.XmlText#">
                        <cfhttpparam type="url" name="in_cms" value="#StudentXML.student.studentid.XmlText#">
                        <cfhttpparam type="url" name="company" value="#StudentXML.student.agencyname.XmlText#">
                        <cfhttpparam type="url" name="country" value="#StudentXML.student.countryname.XmlText#">
                        <cfhttpparam type="url" name="address_one" value="#StudentXML.student.address.XmlText#">
                        <cfhttpparam type="url" name="address_two" value="#StudentXML.student.address2.XmlText#">
                        <cfhttpparam type="url" name="city" value="#StudentXML.student.city.XmlText#">
                        <cfhttpparam type="url" name="state" value="#StudentXML.student.state_province.XmlText#">
                        <cfhttpparam type="url" name="zip" value="#StudentXML.student.postalcode.XmlText#">
                        <cfhttpparam type="url" name="phone" value="#StudentXML.student.phone.XmlText#">
                        <cfhttpparam type="url" name="fax" value="#StudentXML.student.fax.XmlText#">
                        <cfhttpparam type="url" name="output" value="simple">
                    </cfhttp>
                </cfloop>
                <cfcatch type="any">
                    <cfdump var="#cfcatch.Message#">
                </cfcatch>
            </cftry>
        </cfif>

    UPDATE 2: I checked the CF logs and found a couple of these:

        "Error","jrpp-8","06/06/13","16:10:18","CMS-API","Java heap space The specific sequence of
        files included or processed is: D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm, line: 675 "
        java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOf(Arrays.java:2882)
            at java.io.CharArrayWriter.write(CharArrayWriter.java:105)
            at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:37)
            at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:50)
            at coldfusion.runtime.NeoBodyContent.write(NeoBodyContent.java:254)
            at cfapi2ecfm292155732._factor30(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:675)
            at cfapi2ecfm292155732._factor31(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:662)
            at cfapi2ecfm292155732._factor36(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:659)
            at cfapi2ecfm292155732._factor42(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:657)
            at cfapi2ecfm292155732._factor37(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor44(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:456)
            at cfapi2ecfm292155732._factor38(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor46(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:455)
            at cfapi2ecfm292155732._factor39(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor47(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:453)
            at cfapi2ecfm292155732.runPage(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:1)
            at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:192)
            at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:366)
            at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
            at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279)
            at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
            at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
            at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
            at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70)
            at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
            at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
            at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
            at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
            at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
            at coldfusion.CfmServlet.service(CfmServlet.java:175)
            at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
            at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)

    Looks like I might have crashed the JVM in CF. Is there a better way to do this? We are thinking of just exporting all records initially as a CSV file and importing it into Pardot, seeing as we will never have to do a request this large again.
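    One knob worth knowing about, as a hedged aside rather than a fix for the underlying loop: ColdFusion 8's maximum JVM heap is set in jvm.config, and the default ceiling is modest for a 4GB box. The path below is the typical server-configuration location and the value is illustrative:

        # {cf_root}/runtime/bin/jvm.config
        # raise -Xmx in the java.args line, for example:
        java.args=-server -Xmx1024m ... (keep the existing flags unchanged)

    That said, the OutOfMemoryError frames (CharArrayWriter / NeoBodyContent.write) point at page output accumulating across tens of thousands of loop iterations, so batching the job into smaller requests, or the CSV export mentioned at the end, is probably the more durable direction.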

    Read the article

  • Moving new drive volume out of /media

    - by nomoreflash
    I have the following filesystem:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1              33G  2.7G   29G   9% /
        udev                 1007M  232K 1007M   1% /dev
        none                 1007M  244K 1007M   1% /dev/shm
        none                 1007M  292K 1007M   1% /var/run
        none                 1007M     0 1007M   0% /var/lock
        none                 1007M     0 1007M   0% /lib/init/rw
        none                   33G  2.7G   29G   9% /var/lib/ureadahead/debugfs
        /dev/sdb1             137G   69M  137G   1% /media/New Volume

    I want the /media/New Volume to become part of /. Does anyone know how I can do that without using gparted etc.?
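    A hedged sketch of the usual approach: a second disk can't literally be merged into the root filesystem without LVM or similar, but it can be mounted permanently at any directory under / via /etc/fstab. The mount point is illustrative and the filesystem type is left as auto since it isn't stated above:

        $ sudo mkdir /data
        $ echo '/dev/sdb1  /data  auto  defaults  0  2' | sudo tee -a /etc/fstab
        $ sudo mount /data

    After that, data trees (e.g. /home or a media library) can be moved onto /data and the old paths pointed at it with bind mounts or symlinks.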

    Read the article

  • Why are the analogue stereo input and output of my M-Audio 24/96 soundcard not available to me in Ubuntu Lucid

    - by user37968
    I have installed Lucid on an old Mac PowerPC G4 desktop with an M-Audio Audiophile 24/96 soundcard. The only inputs and outputs I can select in the audio preferences are digital ones, for the digital input and output. "lspci -v" shows the card as so:

        0001:10:13.0 Multimedia audio controller: VIA Technologies Inc. ICE1712 [Envy24] PCI Multi-Channel I/O Controller (rev 02)
            Subsystem: VIA Technologies Inc. Device d634
            Flags: bus master, medium devsel, latency 16, IRQ 53
            I/O ports at 0440 [size=32]
            I/O ports at 04b0 [size=16]
            I/O ports at 04a0 [size=16]
            I/O ports at 0400 [size=64]
            Capabilities: <access denied>
            Kernel driver in use: ICE1712
            Kernel modules: snd-ice1712

    "cat /proc/asound/cards" as so:

        0 [Tumbler ]: PMac Tumbler - PowerMac Tumbler
                      PowerMac Tumbler (Dev 21) Sub-frame 0
        1 [M2496   ]: ICE1712 - M Audio Audiophile 24/96
                      M Audio Audiophile 24/96 at 0x440, irq 53

    "aplay -L" shows these as listed:

        pulse
            Playback/recording through the PulseAudio sound server
        front:CARD=Tumbler,DEV=0
            PowerMac Tumbler, PowerMac Tumbler
            Front speakers
        front:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            Front speakers
        surround40:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        iec958:CARD=M2496,DEV=0
            M Audio Audiophile 24/96, ICE1712 multi
            IEC958 (S/PDIF) Digital Audio Output

    I believe it is a problem with detecting the analogue input/output. Sometimes I can get sound from the device, but it is a sheet of white noise, and tinkering makes it go away again. I don't know if that is a separate problem or if it is linked to not being able to see the analogue inputs/outputs in the sound preferences. Any help would be greatly appreciated. As for the white noise: I have installed the Envy24 control panel and spent lots of time playing with the settings, but when I can get the white noise I can never get it to a quality where I can actually hear what is being played. The internal speaker plays audio fine, and plugging in an NI Audio 4DJ via USB also plays sound, although with some static; but I believe that is due to an underpowered USB2 PCI expansion card not being able to get enough electricity to the device. Alternatively, I have seen other people with problems with this device, so it may be a bug in the driver, but that is another matter. I would like to get the M-Audio card working so I can begin to enjoy my music once again. As a note, I do not currently have any audio equipment capable of sending or receiving audio via the digital inputs and outputs, so I cannot check if they are working. The sound preferences show a wide range of digital in and out options with various surround sound options, but no analogue ins and outs.
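    One low-level check worth trying, as a hedged suggestion: bypass PulseAudio and address the card's analogue front output directly, using the device name straight from the aplay -L listing above. If this plays cleanly, the problem sits in the mixer/PulseAudio layer rather than the driver:

        $ aplay -D front:CARD=M2496,DEV=0 /usr/share/sounds/alsa/Front_Center.wav
        $ alsamixer -c 1    # card 1 per /proc/asound/cards; look for muted analogue routes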

    Read the article

  • Windows Setup could not configure Windows to run on this computer's hardware

    - by Hello71
    The whole installation goes smoothly up to the point of "Completing installation ...". The monitor changes resolution, after which a standard dialog box pops up saying:

        Windows Setup could not configure Windows to run on this computer's hardware

    Then, in a few seconds, the whole machine powers down. Trying to restart produces the message:

        STOP: c000021a {Fatal System Error} 0x00000000 (0xc0000001 0x00100448)

    OR it boots into Setup and comes up with the message "Windows Setup encountered an unexpected error..." (this is not the actual error, just paraphrasing). I tried using the OEM restore instead of a regular install, but it fails with the same error (even though it worked before...).

    General specs:

    - HP Pavilion Elite e9262f
    - Intel Core i5-750 processor
    - ATI Radeon HD 4650
    - Hitachi HDT721010SLA360 ATA device
    - 6GB DDR3 RAM
    - SuperMulti DVD Burner with LightScribe
    - Some built-in Wi-Fi module

    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01916917

    I've tried disconnecting the wireless card, disabling the built-in Ethernet and FireWire via the BIOS, and replacing the wireless keyboard and mouse with wired USB ones. Didn't work. I've also tried changing the SATA controller settings in the BIOS to RAID, AHCI, and IDE, reinstalling each time I changed. Still not working. I think the reason it is showing the Fatal System Error is that it didn't finish installing before it errored out and shut down, so the system is left in an inconsistent state. I've tried 3 different copies (including the OEM restore) of Windows 7 now, and they're all failing at the same point, with the same error message. I've tried to install Windows 7 maybe 10 times already, with the exact same error message at the exact same location. Hm... interestingly, the 32-bit version of Windows 7 works, but the 64-bit version doesn't. Perhaps it was a badly burned disk? Reburning the 64-bit version still comes up with the same error. Here's a picture of the side of the case that clearly says it came with Windows 7 64-bit, along with the model number and CPU.

        sudo fdisk -l:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009896f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      104391    7  HPFS/NTFS
        /dev/sda2              14       94119   755906445    7  HPFS/NTFS
        /dev/sda3          119922      121602    13492224    7  HPFS/NTFS
        /dev/sda4           94120      119922   207257740+   5  Extended
        /dev/sda5          119527      119922     3170769   82  Linux swap / Solaris
        /dev/sda6          107174      119526    99225441   83  Linux
        /dev/sda7           94120      107173   104856192    7  HPFS/NTFS

        Partition table entries are not in disk order

    Read the article

  • Routing RFC1918 addresses through dd-wrt via a switch

    - by espenfjo
    I am a bit stuck with an experiment of mine. I have a network looking somewhat like this:

            Internet
               |
            Switch --------------------------.
               |                             |
        Server w/ public IP        DD-WRT router 192.168.1.1
                                             |
                                  RFC1918 clients 192.168.1.0/24

    What I want is for the server with the public IP and the RFC1918 clients to speak directly with each other. On the server with the public IP I have this route:

        192.168.1.0/24 dev eth0 scope link

    and I can see that packets are in fact reaching the DD-WRT router for 192.168.1.1, even if I get no answer. Trying to reach one of the RFC1918 clients from the public-IP server gets no result, as the DD-WRT router is not announcing that network onto its external interface (arp who-has 192.168.1.107 tell xxx.xxx.xxx.xxx, but no answer). The router, being a WLAN DD-WRT router, of course has a load of routes, VLANs and interfaces:

        xxx.xxx.xxx.1 dev vlan2 scope link
        192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.1
        192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.244
        84.215.64.0/18 dev vlan2 proto kernel scope link src xxx.xxx.xxx.xxx
        169.254.0.0/16 dev br0 proto kernel scope link src 169.254.255.1
        127.0.0.0/8 dev lo scope link
        0.0.0.0 via xxx.xxx.xxx.1 dev vlan2

    xxx.xxx.xxx.xxx being the public IP, and xxx.xxx.xxx.1 being the default route for the public IP. I am not sure where to continue with this. I would reckon that I need both routing on the DD-WRT router and some iptables magic? Why do something this complex? Why not ;) Also, do not mind that "Internet" can get RFC1918 traffic; it won't go outside of the walls.

    Edit 1: Following the tip from stew, I do indeed get the correct ARP flowing. And after adding an iptables rule allowing traffic from that specific public-IP'd machine, I get traffic between the systems! Oddly enough, though, the speed I get from the public-IP server to the RFC1918 clients is the same as if the traffic were routed out onto the Internet and back.

    Edit 2: OK, disconnecting the external Internet connection still gives the same crappy transfer speed. So it has to be something else.

    Edit 3: OK, I guess there are other reasons for this crappy speed. Case closed. :)
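    For reference, the rule alluded to in Edit 1 would look roughly like this on the DD-WRT side (a hedged sketch; the source address is a placeholder for the server's public IP):

        # allow the public-IP server to initiate connections into the LAN,
        # bypassing the default drop on the WAN-facing FORWARD chain
        iptables -I FORWARD -s xxx.xxx.xxx.xxx -d 192.168.1.0/24 -j ACCEPT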

    Read the article

  • Why does my Fedora HD come up as 124GB when it was created to be 700GB in Hyper-V?

    - by barfoon
    Any ideas? How can I tell fedora to use 700GB, and do I have reformat to do this? Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_computer-lv_root 124G 3.9G 114G 4% / /dev/sda1 194M 23M 162M 12% /boot tmpfs 995M 320K 994M 1% /dev/shm Hyper-V settings: Thank you, pvdisplay -C: PV VG Fmt Attr PSize PFree /dev/sda2 vg_mycomp lvm2 a- 127.29G 0 lvdisplay -C: LV VG Attr LSize Origin Snap% Move Log Copy% Convert lv_root vg_mycomp -wi-ao 125.36G lv_swap vg_mycomp -wi-ao 1.94G vgdisplay -C: VG #PV #LV #SN Attr VSize VFree vg_mycomp 1 2 0 wz--n- 127.29G 0

    Read the article

  • Linux server became extremely slow

    - by Ariel Aharonson
    I have a file sharing website, and my files hosted in a server with those system specifications: 32GB RAM 12x3TB 2x Intel Quad Core E5620 I have files in this server up to 4gb for each file. 446gb is full (/36TB) [root@hosted-by ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda2 50G 2.7G 44G 6% / tmpfs 16G 0 16G 0% /dev/shm /dev/sda1 97M 57M 36M 62% /boot /dev/mapper/VolGroup01-LogVol00 33T 494G 33T 2% /home And take a look at this: Why is the wa% so high? (I think that what makes the server to be so slow)

    Read the article

  • should I put my multi-device btrfs filesystem on disk partitions or raw devices?

    - by Glyph
    If I'm going to create a multi-device btrfs filesystem. The official recommendation from the documentation apppears to be to create it on raw devices; i.e. /dev/sdb, /dev/sdc, etc, but this is not explained. Are there any advantages to creating a partition table on these devices first, either GPT or MBR, and then creating the filesystem on /dev/sdb1, /dev/sdc1 et cetera? Does feeding btrfs whole devices have some particular advantage, or are these basically equivalent?

    Read the article

  • Fedora 11 System - Failed Hard Drive Removed, and Boot gets GRUB Hard Disk Error

    - by Mindful
    Greetings, I have a machine with a 120GB ATA drive that has what I thought to be non-essential data on it. I also have a 320GB SATA hard drive with the OS/Application/Files (good data I want to keep). My 120GB ATA is failing I believe, as my computer kept slowing to a halt. However, when I move the drive from BIOS my computer will not start, says "GRUB Hard Disk Error". I know that my Fedora system has an LVM setup. I am looking to just remove the 120GB drive from "the mix", and just have one hard drive. How do I recover ? Thank you. I have access to a Linux Live CD right now and can make any changes. However, it won't boot into my OS - it fails. UPDATE: here's my Grub.Conf # grub.conf generated by anaconda # # Note that you do not have to rerun grub after making changes to this file # NOTICE: You have a /boot partition. This means that # all kernel and initrd paths are relative to /boot/, eg. # root (hd1,0) # kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00 # initrd /initrd-version.img #boot=/dev/sda1 default=0 timeout=5 splashimage=(hd1,0)/grub/splash.xpm.gz hiddenmenu title Fedora (2.6.30.10-105.2.23.fc11.i686.PAE) root (hd1,0) kernel /vmlinuz-2.6.30.10-105.2.23.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.30.10-105.2.23.fc11.i686.PAE.img title Fedora (2.6.30.9-102.fc11.i686.PAE) root (hd1,0) kernel /vmlinuz-2.6.30.9-102.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.30.9-102.fc11.i686.PAE.img title Fedora (2.6.27.24-170.2.68.fc10.i686.PAE) root (hd1,0) kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.24-170.2.68.fc10.i686.PAE.img title Fedora (2.6.27.24-170.2.68.fc10.i686) root (hd1,0) kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.24-170.2.68.fc10.i686.img title Fedora (2.6.27.21-170.2.56.fc10.i686) root (hd1,0) kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.21-170.2.56.fc10.i686.img title Fedora (2.6.27.19-170.2.35.fc10.i686) root (hd1,0) kernel /vmlinuz-2.6.27.19-170.2.35.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img title Upgrade to Fedora 10 (Cambridge) kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://chi-10g-1-mirror.fastsoft.net/pub/linux/fedora/linux/releases/10/Fedora/i386/os/images/install.img ks=hd:UUID=f11769ba-29bc-46de-8c40-a949720a438e:/upgrade/ks.cfg initrd /upgrade/initrd.img title Win rootnoverify (hd0,0) chainloader +1

    Read the article

  • Disk monitor script with long file systems

    - by DD.
    $ df -H Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_app001-lv_root 34G 12G 21G 35% / tmpfs 8.4G 0 8.4G 0% /dev/shm /dev/sda1 508M 54M 429M 12% /boot /dev/mapper/vg_app001-lv_home 19G 309M 17G 2% /home I want to run a disk monitor script but because the filesystem is so long the row has been split into two lines and the script fails. Any suggestions?

    Read the article

  • Group traffic shaping with traffic control?

    - by mmcbro
    I'm trying to limit the output bandwidth generated by an application with linux tc. This application sends me the source port of the request that I use has a filter to limit each user at a given downloadspeed. I feel that my setup could be managed way better if I had a better knowledge of linux tc. At the application level users are categorized as members of a group, each group have a limited bandwidth. Example : Members of group A : 512kbit/s Members of group B : 1Mbit/s Members of group C : 2Mbit/s When a user connects to the application, it retrieves the source port to the origin of the request from the user and sends me the source port and the bandwidth at which the user must be limited depending on group to which it belongs. With these informations I must add the appropriate rules so that the user (the source port in reality) is limited to the right bandwidth. If the user that connect isn't a member of any group it should be limited at a default bandwidth speed. I'm actually managing this by using a self made daemon that add or remove rules from when it receive a request from the application. With my little knowledge of tc I'm not able to limit other users (ones that aren't in a group, all others in fact) at a default speed and my configuration seems awful to me. Here is the base of my tc qdisc and classes : tc qdisc add dev eth0 root handle 1: htb tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps To classify a user at a given speed I have to add one subclass and then associate one filter to it : # a member of group A tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbps ceil 512kbps # tts associated filter to match his source port tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 flowid 1:11 # a member of group A again tc class add dev eth0 parent 1:1 classid 1:12 htb rate 512kbps ceil 512kbps # tts associated filter to match his source port tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 flowid 1:12 # a member of group B again tc class add dev eth0 parent 1:1 classid 1:13 htb rate 1000kbps ceil 1000kbps # tts associated filter to match his source port tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 57200 flowid 1:13 I already know that a source port could be the same if its coming from a different IP address the thing is the application is behind a proxy so I don't have to manage any IP address in that situation. I would like to know how to manage the fact that for all other users (request/source port, whatever you name it) could be limited at a given speed each. I mean that each connection should be able to use at max 100kbit/s for example, not a shared 100kbit/s. I also would like to know if there is a way to simplify my rules. I don't know if it is possible to use only one class per group and associate multiple filters to the same class so each users could be handled by one class and not one class per user. I appreciate any advice, thanks.

    Read the article

  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and am close, I feel, and I'm pulling my hair out ... please help! I have an OpenVPN cliend whose server sets local routes and also changes the default gw (I know I can prevent that with --route-nopull). I'd like to have all outgoing http and ssh traffic via the local gw, and everything else via the vpn. Local IP is 192.168.1.6/24, gw 192.168.1.1. OpenVPN local IP is 10.102.1.6/32, gw 192.168.1.5 OpenVPN server is at {OPENVPN_SERVER_IP} Here's the route table after openvpn connection: # ip route show table main 0.0.0.0/1 via 10.102.1.5 dev tun0 default via 192.168.1.1 dev eth0 proto static 10.102.1.1 via 10.102.1.5 dev tun0 10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6 {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0 128.0.0.0/1 via 10.102.1.5 dev tun0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1 This makes all packets go via to the VPN tunnel except those destined for 192.168.1.0/24. Doing wget -qO- http://echoip.org shows the vpn server's address, as expected, the packets have 10.102.1.6 as source address (the vpn local ip), and are routed via tun0 ... as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic). What I tried was: create a 2nd routing table holding the 192.168.1.6/24 routing info (copied from the main table above) add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80 add an ip rule to match on the mangled packet and point it to the 2nd routing table add an ip rule for to 192.168.1.6 and from 192.168.1.6 to point to the 2nd routing table (though this is superfluous) changed the ipv4 filter validation to none in net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0 I also tried an iptables mangle output rule, iptables nat prerouting rule. It still fails and I'm not sure what I'm missing: iptables mangle prerouting: packet still goes via vpn iptables mangle output: packet times out Is it not the case that to achieve what I want, then when doing wget http://echoip.org I should change the packet's source address to 192.168.1.6 before routing it off? But if I do that, the response from the http server would be routed back to 192.168.1.6 and wget would not see it as it is still bound to tun0 (the vpn interface)? Can a kind soul please help? What commands would you execute after the openvpn connects to achieve what I want? Looking forward to hair regrowth ...

    Read the article

  • Run command automatically as root after login

    - by J V
    I'm using evrouter to simulate keypresses from my mouses extra buttons. It works great but I need to run the command with sudo to make it work so I can't just use my DE to handle autostart. I considered init.d but from what I've heard this only works for different stages of boot, and I need this to run as root after login. $ cat .evrouterrc "Logitech G500" "/dev/input/event4" any key/277 "XKey/0" "Logitech G500" "/dev/input/event4" any key/280 "XKey/9" "Logitech G500" "/dev/input/event4" any key/281 "XKey/8" $ sudo evrouter /dev/input/event4

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >