Search Results

Search found 767 results on 31 pages for '4k sectors'.

Page 17/31 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • How to prevent the system from generating log files

    - by shantanu
    My question may sound a little surprising, but I need this. I am using a laptop with a slow processor, and I have found that the HDD has some bad sectors and its response has become slow. The disk health is OK (according to SMART tools), but I cannot replace the HDD right now, so I have decided to reduce disk activity. How do I prevent the system from generating log files, or any other files used to keep history? I know log files are very important, but I don't care about them right now. Please help.
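    A minimal sketch of how to cut down on background disk writes, assuming an Ubuntu system with rsyslog and an ext4 root (the service name and the fstab entry are illustrative, not taken from the question):
    # stop the system logger for the current session
    sudo service rsyslog stop
    # mount the root filesystem without access-time updates (edit /etc/fstab)
    UUID=xxxx-xxxx  /  ext4  noatime,errors=remount-ro  0  1
    Keep in mind that stopping logging makes later troubleshooting harder, so this is only a stopgap until the drive is replaced.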

    Read the article

  • Ubuntu 10.10 USB drive not showing

    - by Ben
    The USB drive is detected, but the drive and its mount details are not shown. There is nothing inside my /media folder or /mnt folder. I have already enabled automatic mounting and given the user the necessary privileges. My sudo fdisk -l output looks like this:
    Disk /dev/sda: 250.1 GB, 250059350016 bytes
    255 heads, 63 sectors/track, 30401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000d3ba7
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 29637 238053376 83 Linux
    /dev/sda2 29637 30402 6142977 5 Extended
    /dev/sda5 29637 30402 6142976 82 Linux swap / Solaris
    Any idea?
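    Note that the fdisk output above only lists the internal disk (/dev/sda), so the USB drive apparently is not being assigned a device node at all. A quick way to check (a sketch; /dev/sdb1 is a hypothetical device name, not from the question):
    # watch the kernel log while plugging the drive in
    dmesg | tail -n 20
    # list USB devices and partition tables
    lsusb
    sudo fdisk -l
    # if a new device such as /dev/sdb1 appears, try mounting it manually
    sudo mkdir -p /media/usb && sudo mount /dev/sdb1 /media/usb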

    Read the article

  • Is it Possible to Repair Boot Record on Windows Vista Partition on Dual Boot Ubuntu 12.04.02 LTS Install?

    - by PasBonRJB
    I recently installed Ubuntu 12.04.02 Desktop as a "side-by-side" install on an eMachines ET1641-02w PC that previously had Windows Vista but was no longer bootable because the boot sector somehow got trashed. Now that Ubuntu is installed I can go to Devices/OS and access all of my docs, pics, videos, music, etc. on the Windows Vista partition, but I still can't boot the Windows partition. I've seen several posts about recovering boot sectors using the Boot-Repair app, but I am wondering whether Boot-Repair is only for the Ubuntu partition. Can I use Boot-Repair to repair the Windows Vista boot sector?
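    Boot-Repair itself runs from Ubuntu (or from a live session) but also offers an advanced option to restore a generic, Windows-compatible MBR. A rough sketch of the commonly documented install steps (the PPA name is the one usually cited in the Ubuntu documentation; double-check it before use):
    sudo add-apt-repository ppa:yannubuntu/boot-repair
    sudo apt-get update
    sudo apt-get install boot-repair
    boot-repair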

    Read the article

  • EC2 instance won't boot!

    - by TheToolBox
    So I had a server lose its connection while updating to 14.04 LTS. Foolishly, it ended up getting rebooted, and now I'm here. I disconnected the volume, mounted it on another system, chrooted into it, and updated the kernel. Still no dice. Any idea what the problem could be? Thanks in advance! The instance log is below:
    ******************* BLKFRONT for device/vbd/2049 **********
    backend at /local/domain/0/backend/vbd/3005/2049
    Failed to read /local/domain/0/backend/vbd/3005/2049/feature-barrier.
    Failed to read /local/domain/0/backend/vbd/3005/2049/feature-flush-cache.
    16777216 sectors of 512 bytes
    **************************
    Booting 'Ubuntu 14.04 LTS, memtest86+'
    root (hd0)
    Filesystem type is ext2fs, using whole disk
    kernel /boot/memtest86+.bin
    ============= Init TPM Front ================
    Tpmfront:Error Unable to read device/vtpm/0/backend-id during tpmfront initialization! error = ENOENT
    Tpmfront:Info Shutting down tpmfront
    xc: error: panic: xc_dom_core.c:621: xc_dom_find_loader: no loader found: Invalid kernel
    xc_dom_parse_image returned -1
    close(3)
    Error 9: Unknown boot failure
    Press any key to continue...
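    The log shows the PV boot loader trying to start the memtest86+ entry ("kernel /boot/memtest86+.bin ... no loader found: Invalid kernel"), which suggests the GRUB legacy default entry on the volume points at the wrong menu item. A hedged sketch of how to check this from the helper instance (the device name and mount point are assumptions, and update-grub-legacy-ec2 is only present on Ubuntu EC2 images):
    sudo mount /dev/xvdf1 /mnt
    grep -n '^default\|^title' /mnt/boot/grub/menu.lst
    # make 'default' point at a real kernel entry, or regenerate the menu from the chroot:
    sudo chroot /mnt update-grub-legacy-ec2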

    Read the article

  • Samsung 830 very slow benchmark numbers

    - by alekop
    I just bought a new SSD and installed a fresh copy of Windows on it. I didn't see any noticeable difference in boot times or app start-up times, so I decided to benchmark it. The setup: Asus P7P55D-E motherboard, Intel i5-760, Samsung 830 256GB (SATA III), Windows 7 Ultimate 64-bit. The Windows Experience Index gave the drive a 7.3 rating, but real-world performance is not particularly impressive. Any ideas why the numbers are so low? UPDATE: It turns out that SATA III support is turned off by default on the P7P55D-E. After enabling it in the BIOS (Tools - Level Up), the scores went up: sequential 325 read / 183 write, 4K 16 read / 49 write, IOPS 32K read / 28K write. It's an improvement, but still far below what the numbers should be for this drive.

    Read the article

  • Will a database server perform better running on 2 CPUs with 16 cores or 4 CPUs with 8 cores?

    - by AlexOdin
    What I have:
    - an online financial application (ASP.NET, C#); at peak we have 5K+ simultaneous users
    - the backend runs on Oracle 11g (active server + standby using Active Data Guard); at peak there are 4K-5K database sessions
    - Oracle is installed on Linux 5.8 (Oracle's Unbreakable version)
    - database size: 7TB
    - disk storage: NetApp (connected over a 10Gb network)
    I would like to replace the old servers (IT will purchase HP BL685c blades). The servers will have 256GB of RAM. I need your help to figure out what to do with CPUs and cores. Options: 2 CPUs (2.3 GHz) with 16 cores each, or 4 CPUs (3.0 GHz) with 8 cores each. Which one should I pick? P.S. Next year we will migrate from Oracle to SQL Server. I hope whatever option you recommend will work for both platforms.

    Read the article

  • Symlink - Permission Denied

    - by John Smith
    I'm facing an interesting problem with plenty of Permission Denied outputs when using symlinks. Linux: Slackware 13.1.
    Directory with the symlink:
    root@Tower:/var/lib# ls -lah
    drwxr-xr-x 8 root root 0 2012-12-02 20:09 ./
    drwxr-xr-x 15 root root 0 2012-12-01 21:06 ../
    lrwxrwxrwx 1 ntop ntop 21 2012-12-02 20:09 ntop -> /mnt/user/media/ntop6/
    Symlinked directory:
    root@Tower:/mnt/user/media# ls -lah
    drwxrwx--- 1 nobody users 1.4K 2012-12-02 19:28 ./
    drwxrwx--- 1 nobody users 128 2012-11-18 16:06 ../
    drwxrwxrwx 1 ntop ntop 320 2012-12-02 20:22 ntop6/
    What I have done: I have used chown -h ntop:ntop on the ntop directory in /var/lib, and just to be sure, I have chmod 777 on both directories.
    Permission denied action:
    root@Tower:/var/lib# sudo -u ntop mkdir /var/lib/ntop/test
    mkdir: cannot create directory `/var/lib/ntop/test': Permission denied
    Any ideas?
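    One thing worth checking: sudo -u ntop has to traverse every directory component of both /var/lib/ntop and the symlink target, and /mnt/user/media above ntop6 is drwxrwx--- nobody:users, so the ntop user needs to be in the users group (or the parent needs o+x). A quick sketch (the group name is taken from the listing above; adjust as needed):
    # show owner/permissions of every path component, following the symlink
    namei -l /var/lib/ntop/test
    # check ntop's groups and, if necessary, add it to users
    id ntop
    usermod -a -G users ntop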

    Read the article

  • Recommendations for USB flash drive fast at writing small files

    - by Andrew Bainbridge
    I want a drive that I can use as my work drive, storing a Subversion repo and sandbox for a small project. I'd also like it to be able to store a DVD rip. At the moment I've got a Super Talent Pico-C 8GB. It's fast at reading and writing DVD rips, but the performance on small files (i.e. less than 4K) is utterly terrible (we're talking floppy disk speeds here). This Ars review measured a similar Super Talent drive and pretty much confirmed my measurements (take a look at the random write speeds on page 5). So, I'm looking for an 8GB or bigger drive that doesn't suck at reading and writing small files and still has acceptable performance for very large files.
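    For comparing candidate drives yourself, a rough small-file write test on Linux might look like this (a sketch; /media/usb is an assumed mount point, and conv=fsync forces the data out to the stick):
    # many 4K files, timed
    time sh -c 'for i in $(seq 1 500); do dd if=/dev/zero of=/media/usb/f$i bs=4k count=1 conv=fsync 2>/dev/null; done'
    # one large sequential file for contrast
    dd if=/dev/zero of=/media/usb/big bs=1M count=512 conv=fsync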

    Read the article

  • Throughput = BS * IOPS?

    - by Marvin
    I've seen in many places that throughput = block size * IOPS should hold. For example, writing with a 128K block size to a SAS disk that can support 190 IOPS should give a throughput of ~23 MB/s: 128 KB * 190 IOPS / 1024 = 23.75 MB/s. Now when I tested it in a VM against a monster NetApp filer I got these results:
    # dd if=/dev/zero of=/tmp/dd.out bs=4k count=2097152
    8589934592 bytes (8.6 GB) copied, 61.5996 seconds, 139 MB/s
    To view the IO rate of the VM I used iostat and esxtop, and they both showed around 250 IOPS. So by my understanding the throughput was supposed to be ~1000 KB/s: 4 KB * 250 IOPS = 1000 KB/s. The 8GB dd is twice the size of RAM, of course, so no page caching here. What am I missing? Thanks!
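    One thing to note: the relation holds for the request size the device actually sees, not for dd's bs. Working backwards from the numbers above, 139 MB/s / 250 IOPS is roughly 570 KB per request, so the guest's IO scheduler and writeback are most likely merging the 4K writes into much larger requests before they reach the filer. A hedged way to confirm this (avgrq-sz is reported in 512-byte sectors, and oflag=direct assumes the filesystem supports O_DIRECT):
    # watch the average request size while the test runs
    iostat -xk 5
    # force real 4K requests, bypassing the page cache and merging far less
    dd if=/dev/zero of=/tmp/dd.direct bs=4k count=262144 oflag=direct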

    Read the article

  • Why does a Remote Desktop Redirected Printer Doc appear every time I connect to Windows Server 2003 SBS?

    - by Jim Dagg
    I've run into a weird, persistent issue regarding a remote desktop connection. Every time I successfully log into a server running Windows Server 2003 SBS, without taking any further action, after a few seconds a print job spontaneously appears on my machine, titled "Remote Desktop Redirected Printer Doc". The document is 4K, datatype RAW, processor "WinPrint". I've heard of people running into this issue before, but can't seem to hunt down a coherent solution. It's a minor annoyance, but I get annoyed when Windows complains about a print job that, as far as I know, came from nowhere. Any thoughts on why this would occur and how I could prevent it from happening?

    Read the article

  • Windows Explorer - How can a large file have a zero "Size on disk" value? What does it mean?

    - by Jaans
    I would expect some discrepancy between "Size" and "Size on disk" in Windows Explorer due to file system allocation, etc. Below is a screenshot of an example file on a Windows 2012 R2 file server that has an 81.4 MB "Size" but a "Size on disk" of 0 bytes. What gives? I have other files doing the same, while yet another set of files and folders behaves as expected, showing a size on disk relatively close to the actual file size. The volume is a basic disk, formatted with NTFS and the default 4K allocation units. No compression is set for any file or folder on the volume. (For those more paranoid: I did a malware scan, and also confirmed there are no alternate data streams (ADS) associated with the file in question.) The user account running Windows Explorer is the domain administrator, and the file owner is also the domain administrator. Thanks for reading!

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations.
- Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.
To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS.
When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.
- External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and will attempt to free memory until the signals stop. To free memory SQL Server does the following: it frees unused memory, and it notifies Memory Manager clients to release memory (caches free unreferenced cache objects; the buffer pool releases pages based on oldest access times). The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.
- Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example, if max server memory is set and certain caches are growing large, SQL Server will free memory for re-use internally, but not release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL Server to release memory back to the OS.
Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.
Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.
How does SQL Server respond to changing OS memory? In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory while the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-added physical memory to the guest. What this means is that, thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V Dynamic Memory. SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly. In “Denali”, Standard Edition will also have sqlservr.exe support for hot-add memory when running virtualized (i.e. detecting and acting on Hyper-V Dynamic Memory allocations).
How does a SQL Server workload in a guest VM impact Hyper-V Dynamic Memory scheduling? When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.
How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning? If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), the Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above). This raises another question: can we make SQL Server release memory more readily, and hence behave more "dynamically", without compromising performance? In certain circumstances where the application workload is predictable it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability but it is something we're looking into.
What Memory Management changes are there in SQL Server “Denali”? In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings. Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration). In part 3 we’ll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
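A quick worked example of the memory signal thresholds described above (an illustration, not from the original post): on a guest VM with 16GB of RAM, the low memory signal is raised when free memory falls below 32MB per 4GB, i.e. about 128MB, and stays set until 64MB per 4GB (about 256MB) is free; the high memory signal - the only state in which SQL Server will allocate more memory - requires about 96MB per 4GB, i.e. roughly 384MB free.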

    Read the article

  • Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E

    - by Sebastian Bugiu
    I had Ubuntu 11.10 and in the last few weeks I experienced an obscure problem: after the computer had been running for a few days I could no longer connect to google.com or anything related to Google. All sites worked in all browsers (Firefox, Chrome, Opera) except Google. It remained in the connecting phase for a few minutes and either timed out or finally connected with this huge delay. Even when I visited other sites such as this one, if the page had anything to do with Google, such as AdSense or gstatic or whatever with a g in it, it took a long time to load, stuck connecting to gstatic.com. Anything Google related took minutes to work, but everything else worked instantly! I tried rebooting, or using another machine (with Windows on it), and that worked, so it's not network related. But after a few days it stopped working again... So I upgraded to Precise Pangolin hoping this behavior would go away. It didn't! After a few days I get the same behavior as in 11.10. What am I supposed to do? Reboot every other day? I didn't have this problem with either 10.10 or 11.04. I found the Realtek RTL8168/8111E issue with the r8169 driver, but this is not exactly the same card, so trying r8168 probably won't help.
    Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)
    Subsystem: Toshiba America Info Systems Device ff1c
    Flags: bus master, fast devsel, latency 0, IRQ 44
    I/O ports at 4000 [size=256]
    Memory at d0010000 (64-bit, prefetchable) [size=4K]
    Memory at d0000000 (64-bit, prefetchable) [size=64K]
    Capabilities: [40] Power Management version 7
    Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Capabilities: [70] Express Endpoint, MSI 01
    Capabilities: [ac] MSI-X: Enable- Count=2 Masked-
    Capabilities: [cc] Vital Product Data
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [140] Virtual Channel
    Capabilities: [160] Device Serial Number 09-00-00-00-ff-ff-00-00
    Kernel driver in use: r8169
    Kernel modules: r8169
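    Given that only Google domains stall while everything else is instant, it may be worth ruling out DNS or IPv6 quirks before blaming the NIC driver. A rough diagnostic sketch (these checks are assumptions about where to look, not taken from the question):
    # does name resolution itself stall?
    dig www.google.com
    dig AAAA www.google.com
    # compare IPv4-only vs IPv6 fetches
    wget -4 -O /dev/null http://www.google.com
    wget -6 -O /dev/null http://www.google.com
    # temporarily disable IPv6 to see whether the delay disappears
    sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1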

    Read the article

  • Complex knowledge management system with CRM... written internally

    - by JonH
    We've all heard of Salesforce and SugarCRM and systems like them. Unfortunately, at my workplace we have been asked to write a similar system (rather than license or purchase one). Basically the database is fairly large. Think of modules such as: corporate groups, customers, programs, projects, sub-projects, and issue management. In simple terms, a corporate group has one to many customers. A program has one or more projects. A project has one or more sub-projects. And an issue can be created on many sub-projects. Of course the system is a bit more complex, but instead of listing every single module I think it's best to keep it simple. In any event, the system in its current state has only two resources working on it (basically we have to do it all: CSS, database, jQuery, ASP.NET and C#). We've started off well by defining the UI master and footer pages so that we can reuse those across all of our pages. Now comes the hard part. The system will have about 4K end users, with say 5-10% being concurrent users. We are wondering if it makes sense to cache our database data (for say 5-10 minutes) rather than continuously hit our database. The reason is that some of these pages may have 5-10 search filters associated with them. Imagine how many database hits occur every time a selection is made in a search box. Also, some of these search fields cascade, so selecting an initial drop-down may cascade to several drop-down boxes under it. Is it wrong to cache? I am not finding many articles on whether it is a good idea or not. Remember the system is similar to, say, a CRM system where we manage our various customers, projects, sub-projects, issues, etc.

    Read the article

  • Announcing Monthly Silverlight Giveaways on MichaelCrump.Net

    - by mbcrump
    I've been working with different companies to give away a set of Silverlight controls to my blog readers for the past few months. I’ve decided to set up this page and a friendly URL for everyone to keep track of my monthly giveaway. The URL to bookmark is: http://giveaways.michaelcrump.net. My main goals for giving away the controls are: I love the Silverlight community, and giving away nice controls always sparks interest in Silverlight; to spread the word about a company and let users decide whether it meets their particular situation; I am a “control” junkie, and it helps me solve my own business problems since I know what is out there; and it provides some additional hits for my blog. Below is a grid that I will update monthly with the current giveaway. Feel free to bookmark this post and visit it monthly.
    October 2010 - Telerik - Silverlight Controls (RadControls) - http://michaelcrump.net/archive/2010/10/15/win-telerik-radcontrols-for-silverlight-799-value.aspx
    November 2010 - Mindscape - Mindscape Mega-Pack - http://michaelcrump.net/archive/2010/11/11/mindscape-silverlight-controls--free-mega-pack-contest.aspx
    December 2010 - Infragistics - Silverlight Controls + Silverlight Data Visualization - http://michaelcrump.net/mbcrump/archive/2010/12/15/win-a-set-of-infragistics-silverlight-controls-with-data-visualization.aspx
    January 2011 - Cellbi - Cellbi Silverlight Controls - http://michaelcrump.net/mbcrump/archive/2011/01/05/cellbi-silverlight-controls-giveaway-5-license-to-give-away.aspx
    February 2011 - *BOOKED*
    To third-party Silverlight companies: if you create any type of Silverlight control or application and would like to feature it on my blog, you may contact me at michael[at]michaelcrump[dot]net. Giving away controls has proven to be beneficial for both parties, as I have around 4K Twitter followers and average around 1,000 page views a day. Contact me today and give back to the Silverlight community. Subscribe to my feed.

    Read the article

  • Creating properly aligned partitions on a replacement disk

    - by Marius Gedminas
    I've a typical small office server with two hard disks configured for RAID-1 (mirroring). Each disk has several partitions: one for swap, the others paired in several /dev/mdX arrays. Every couple of years one of the disks dies and is replaced. The replacement typically goes something like this: # copy partition table from the remaining good disk to the empty replacement disk # (instead of /dev/good_disk and /dev/new_disk I use /dev/sda and /dev/sdb, as appropriate) sfdisk -d /dev/good_disk | sfdisk /dev/new_disk # install boot loader grub-install /dev/new_disk # create swap partition reusing the same UUID, so I don't need to edit /etc/fstab mkswap /dev/new_disk1 -U xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx # hot-add the new partitions to my RAID arrays mdadm /dev/md0 -a /dev/new_disk2 mdadm /dev/md1 -a /dev/new_disk5 mdadm /dev/md2 -a /dev/new_disk6 mdadm /dev/md3 -a /dev/new_disk7 mdadm /dev/md4 -a /dev/new_disk8 The disks were originally partitioned with cfdisk back in 2009, and so the partition table is aligned traditionally to cylinder boundaries (255 heads * 63 sectors). This is not the optimum configuration for new 4K-sector drives. My question is: how can I create a set of partitions for the new disk and ensure they're properly aligned, and have correct sizes for my RAID arrays (rounding up is acceptable, I suppose, but rounding down is definitely not)?
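    A hedged sketch of creating 1MiB-aligned partitions on the replacement disk with parted instead of copying the old cylinder-aligned table (the device name, sizes and partition layout are placeholders; make each partition at least as large as its counterpart on the good disk so mdadm will accept it):
    sudo parted -a optimal /dev/new_disk mklabel msdos
    sudo parted -a optimal /dev/new_disk mkpart primary linux-swap 1MiB 4097MiB
    sudo parted -a optimal /dev/new_disk mkpart primary 4097MiB 100%
    # verify the alignment of partition 1
    sudo parted /dev/new_disk align-check optimal 1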

    Read the article

  • Network really slow with TL-WN951N wireless card

    - by Sam
    I literally just installed ubuntu and it seems to be working great except the network is deadly slow. I'm running a TL-WN951N wireless card which can download at about 600-700 KB/s in windows but in Ubuntu the max speed it seems to get is around 5KB/s. I guess I should note that my WAP is only wireless-G but like I said, I can get much better speeds in Windows. I'm testing the speeds by downloading files from here: http://mirror.internode.on.net/pub/test/ Anyone have any idea? I saw some people recommend downloading drivers and compiling them myself but I'm really really new to all of this so would appreciate someone babying me through it so I don't brick my computer! Here are the results of lspci -v: 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02) Subsystem: Giga-byte Technology GA-EP45-DS5 Motherboard Flags: bus master, fast devsel, latency 0, IRQ 44 I/O ports at c000 [size=256] Memory at e9110000 (64-bit, prefetchable) [size=4K] Memory at e9100000 (64-bit, prefetchable) [size=64K] [virtual] Expansion ROM at e9120000 [disabled] [size=64K] Capabilities: <access denied> Kernel driver in use: r8169 Kernel modules: r8169 05:02.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01) Subsystem: Atheros Communications Inc. Device 3071 Flags: bus master, 66MHz, medium devsel, latency 168, IRQ 18 Memory at e9200000 (32-bit, non-prefetchable) [size=64K] Capabilities: <access denied> Kernel driver in use: ath9k Kernel modules: ath9k
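    Two quick things that may be worth trying before compiling drivers (hedged suggestions; wlan0 is an assumed interface name, and nohwcrypt=1 is a workaround often suggested for slow ath9k throughput rather than a guaranteed fix):
    # check the negotiated bit rate and link quality
    iwconfig wlan0
    # disable hardware crypto in ath9k, then reload the module
    echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
    sudo modprobe -r ath9k && sudo modprobe ath9k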

    Read the article

  • Trade In, Trade Up Promotion: SPARC Consolidation Now Through May 31st

    - by swalker
    Dear Partner, Installed Base Business (IBB) technology refresh is one of the most important activities for Oracle, for you and for your customers. It allows your existing customers to benefit from the most up-to-date, best-of-breed Oracle products. And it’s an exciting time to perform a technology refresh: a new SPARC promotion is available now, closing 31st May 2012. Customers trading in older SPARC systems and upgrading to a new SPARC SuperCluster T4-4 or SPARC Enterprise M8000/M9000 can get $4,000 per CPU. The discount is pre-approved and upfront (maximum discounts apply). The major highlights are as follows:
    - Targeted systems: upgrade to SPARC M8000, M9000, SuperCluster
    - Qualified installed base upgrade from: all older generations of SPARC systems
    - Promotional offer: trade-in value of $4K per CPU; pre-approved maximum discount (including trade-in) not to exceed 60% on M8000/M9000 systems and 25% on SuperCluster
    - No-cost dock-to-dock shipping, and environmentally safe disposal of the returned hardware through Oracle best-of-class recycling processes.
    Recommendations: as usual, please register your opportunities in OMM. When you do so, please make sure you place the following Campaign Name in the “Marketing Initiative” field of OMM: EMEA_Tech Refresh-IBB Campaign_12H1_Follow Up_O. For all the details, please view the rules and FAQs. For more information, please visit the Promo Partner Site. For more information on IBB and the Oracle Upgrade Advantage Program (UAP): http://www.oracle.com/us/products/servers-storage/upgrade-advantage-program/index.html and http://www.oracle.com/partners/secure/sales/oracle-ibb-program-for-partners-184291.html. Contacts: for questions, please contact your favorite Oracle Partner Account Manager.

    Read the article

  • WiFi/LAN not working after installing Ubuntu 13.10

    - by user183025
    I can't connect to my WiFi after installing Ubuntu 13.10. I tried re-installing it three times (thinking that I did something wrong during the installation) but I still can't connect to my WiFi. It just gives me the message that I'm offline and that I can't connect. I also tried connecting to the internet with a LAN cable, with the same result. I tried Google but it seems that there's no solution to this yet... Does anybody know how to solve this? Thanks! FYI:
    H/W path Device Class Description
    ======================================================
    system RC530/RC730 (To be filled by O.E.M.)
    /0 bus RC530/RC730
    /0/0 memory 64KiB BIOS
    /0/4 processor Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz
    /0/4/5 memory 32KiB L1 cache
    /0/41 memory 4GiB System Memory
    /0/41/0 memory DIMM [empty]
    /0/41/1 memory 4GiB SODIMM DDR3 Synchronous 1333 MHz (0.8 ns)
    /0/100 bridge 2nd Generation Core Processor Family DRAM Controller
    02:00.0 Network controller: Intel Corporation Centrino Wireless-N 130 (rev 34)
    Subsystem: Intel Corporation Centrino Wireless-N 130 BGN
    Flags: bus master, fast devsel, latency 0, IRQ 44
    Memory at f7200000 (64-bit, non-prefetchable) [size=8K]
    Capabilities: <access denied>
    Kernel driver in use: iwlwifi
    Kernel modules: iwlwifi
    03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
    Subsystem: Samsung Electronics Co Ltd Device c0c1
    Flags: bus master, fast devsel, latency 0, IRQ 41
    I/O ports at b000 [size=256]
    Memory at f2104000 (64-bit, prefetchable) [size=4K]
    Memory at f2100000 (64-bit, prefetchable) [size=16K]
    Capabilities: <access denied>
    Kernel driver in use: r8169
    Kernel modules: r8169
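    Since both the wireless (iwlwifi) and wired (r8169) drivers are loaded, some basic checks may help narrow this down (a diagnostic sketch; none of these commands come from the question itself):
    # make sure the radio isn't soft- or hard-blocked
    rfkill list all
    # see what NetworkManager thinks of each interface
    nmcli dev
    # look for firmware or association errors
    dmesg | grep -iE 'iwlwifi|r8169'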

    Read the article

  • Single django instance with subdomains for each app in the django project

    - by jwesonga
    I have a django project (django+apache+mod_wsgi+nginx) with multiple apps, I'd like to map each app as a subdomain: project/ app1 (domain.com) app2 (sub1.domain.com) app3 (sub3.domain.com) I have a single .wsgi script serving the project, which is stored in a folder /apache. Below is my vhost file. I'm using a single vhost file instead of separate ones for each sub-domain: <VirtualHost *:8080> ServerAdmin [email protected] ServerName www.domain.com ServerAlias domain.com DocumentRoot /home/path/to/app/ Alias /admin_media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media <Directory /home/path/to/wsgi/apache/> Order deny,allow Allow from all </Directory> LogLevel warn ErrorLog /home/path/to/logs/apache_error.log CustomLog /home/path/to/logs/apache_access.log combined WSGIDaemonProcess domain.com user=www-data group=www-data threads=25 WSGIProcessGroup domain.com WSGIScriptAlias / /home/path/to/apache/kcdf.wsgi </VirtualHost> <VirtualHost *:8081> ServerAdmin [email protected] ServerName sub1.domain.com ServerAlias sub1.domain.com DocumentRoot /home/path/to/app Alias /admin_media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media <Directory /home/path/to/wsgi/apache/> Order deny,allow Allow from all </Directory> LogLevel warn ErrorLog /home/path/to/logs/apache_error.log CustomLog /home/path/to/logs/apache_access.log combined WSGIDaemonProcess sub1.domain.com user=www-data group=www-data threads=25 WSGIProcessGroup sub1.domain.com WSGIScriptAlias / /home/path/to/apache/kcdf.wsgi </VirtualHost> My Nginx configuration for the domain.com: server { listen 80; server_name domain.com; access_log off; error_log off; # proxy to Apache 2 and mod_wsgi location / { proxy_pass http://127.0.0.1:8080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } } Configuration for the sub.domain.com: server { listen 80; server_name sub.domain.com; access_log off; error_log off; # proxy to Apache 2 and mod_wsgi location / { proxy_pass http://127.0.0.1:8081/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } } This set up doesn't seem to work, everything seems to point to the main domain. I've tried http://effbot.org/zone/django-multihost.htm which kind of worked but seems to have issues with loading my css,images,js files.
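    When debugging a setup like this, it can help to hit each Apache vhost directly with an explicit Host header and compare that with what nginx serves, since both proxy blocks forward Host (a diagnostic sketch using the hostnames from the config above):
    # talk to the Apache vhost on port 8081 directly
    curl -s -o /dev/null -w "%{http_code}\n" -H "Host: sub1.domain.com" http://127.0.0.1:8081/
    # and through nginx on port 80
    curl -s -o /dev/null -w "%{http_code}\n" -H "Host: sub1.domain.com" http://127.0.0.1/
    Also note that the nginx server block above uses server_name sub.domain.com while the Apache vhost expects sub1.domain.com; if that is not just a transcription slip, requests for sub1.domain.com will not match any nginx server_name and will fall through to the first (default) server block, which points at the main site.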

    Read the article

  • memcpy segmentation fault on Linux but not OS X

    - by Andre
    I'm working on implementing a log based file system for a file as a class project. I have a good amount of it working on my 64 bit OS X laptop, but when I try to run the code on the CS department's 32 bit linux machines, I get a seg fault. The API we're given allows writing DISK_SECTOR_SIZE (512) bytes at a time. Our log record consists of the 512 bytes the user wants to write as well as some metadata (which sector he wants to write to, the type of operation, etc). All in all, the size of the "record" object is 528 bytes, which means each log record spans 2 sectors on the disk. The first record writes 0-512 on sector 0, and 0-15 on sector 1. The second record writes 16-512 on sector 1, and 0-31 on sector 2. The third record writes 32-512 on sector 2, and 0-47 on sector 3. ETC. So what I do is read the two sectors I'll be modifying into 2 freshly allocated buffers, copy starting at record into buf1+the calculated offset for 512-offset bytes. This works correctly on both machines. However, the second memcpy fails. Specifically, "record+DISK_SECTOR_SIZE-offset" in the below code segfaults, but only on the linux machine. Running some random tests, it gets more curious. The linux machine reports sizeof(Record) to be 528. Therefore, if I tried to memcpy from record+500 into buf for 1 byte, it shouldn't have a problem. In fact, the biggest offset I can get from record is 254. That is, memcpy(buf1, record+254, 1) works, but memcpy(buf1, record+255, 1) segfaults. Does anyone know what I'm missing? Record *record = malloc(sizeof(Record)); record->tid = tid; record->opType = OP_WRITE; record->opArg = sector; int i; for (i = 0; i < DISK_SECTOR_SIZE; i++) { record->data[i] = buf[i]; // *buf is passed into this function } char* buf1 = malloc(DISK_SECTOR_SIZE); char* buf2 = malloc(DISK_SECTOR_SIZE); d_read(ad->disk, ad->curLogSector, buf1); d_read(ad->disk, ad->curLogSector+1, buf2); memcpy(buf1+offset, record, DISK_SECTOR_SIZE-offset); memcpy(buf2, record+DISK_SECTOR_SIZE-offset, offset+sizeof(Record)-sizeof(record->data));
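    A note on the pointer arithmetic above (reasoning from standard C semantics, not from the assignment's API): record is a Record*, so record + DISK_SECTOR_SIZE - offset advances by (512 - offset) * sizeof(Record) = (512 - offset) * 528 bytes, not by (512 - offset) bytes. That address can be far beyond the 528-byte allocation, and whether reading it faults depends on what happens to be mapped there, which is why the behaviour differs between OS X and Linux. Casting first, e.g. (char *)record + DISK_SECTOR_SIZE - offset, makes the offset byte-sized.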

    Read the article

  • ImgBurn fails to burn data CD-R disk due to "Layouts do not match" error

    - by 0xAether
    I have a reoccurring problem with the program ImgBurn. Whenever I try and burn anything to a CD-R using ImgBurn it burns just fine, except for when I go and verify the disk. It tells me that the "Layouts do not match". Windows 7 shows the disk as completely blank. Although, I see on the bottom of the disk it has been written to. I can burn ISO files to DVD-R's just fine. This only seems to happen with CD-R's. The CD-R's I'm using are Memorex Cool Colors 52x CD-R's. I have looked on Google, and it seems like I'm not the only one this happens to. Unfortunately, no one is able to provide an explanation. I have included the log file from the last CD I just burnt. If you need anything else to better diagnose this problem, I will gladly provide it. ; //****************************************\\ ; ImgBurn Version 2.5.7.0 - Log ; Monday, 19 November 2012, 16:11:57 ; \\****************************************// ; ; I 16:04:55 ImgBurn Version 2.5.7.0 started! I 16:04:55 Microsoft Windows 7 Ultimate x64 Edition (6.1, Build 7601 : Service Pack 1) I 16:04:55 Total Physical Memory: 4,156,380 KB - Available: 3,317,144 KB I 16:04:55 Initialising SPTI... I 16:04:55 Searching for SCSI / ATAPI devices... I 16:04:56 -> Drive 1 - Info: Optiarc DVD RW AD-7560S SH03 (D:) (SATA) I 16:04:56 Found 1 DVD±RW/RAM! I 16:05:37 Operation Started! I 16:05:37 Source File: C:\Users\Aaron\Desktop\VMware Workstation 9.iso I 16:05:37 Source File Sectors: 223,057 (MODE1/2048) I 16:05:37 Source File Size: 456,820,736 bytes I 16:05:37 Source File Volume Identifier: VMwareWorksta9 I 16:05:37 Source File Volume Set Identifier: 20121119_2102 I 16:05:37 Source File File System(s): ISO9660, Joliet I 16:05:37 Destination Device: [1:0:0] Optiarc DVD RW AD-7560S SH03 (D:) (SATA) I 16:05:37 Destination Media Type: CD-R (Disc ID: 97m17s06f, Moser Baer India) I 16:05:37 Destination Media Supported Write Speeds: 10x, 16x, 20x, 24x I 16:05:37 Destination Media Sectors: 359,847 I 16:05:37 Write Mode: CD I 16:05:37 Write Type: SAO I 16:05:37 Write Speed: 6x I 16:05:37 Lock Volume: Yes I 16:05:37 Test Mode: No I 16:05:37 OPC: No I 16:05:37 BURN-Proof: Enabled W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - Wanted: 1,058 KB/s (6x), Got: 1,764 KB/s (10x) / 11,080 KB/s (63x) W 16:05:37 The drive only supports writing these discs at 10x, 16x, 20x, 24x. I 16:05:38 Filling Buffer... (80 MB) I 16:05:40 Writing LeadIn... I 16:06:07 Writing Session 1 of 1... (1 Track, LBA: 0 - 223056) I 16:06:07 Writing Track 1 of 1... (MODE1/2048, LBA: 0 - 223056) I 16:11:00 Synchronising Cache... I 16:11:18 Exporting Graph Data... I 16:11:18 Graph Data File: C:\Users\Aaron\AppData\Roaming\ImgBurn\Graph Data Files\Optiarc_DVD_RW_AD-7560S_SH03_MONDAY-NOVEMBER-19-2012_4-05_PM_97m17s06f_6x.ibg I 16:11:18 Export Successfully Completed! I 16:11:18 Operation Successfully Completed! - Duration: 00:05:41 I 16:11:18 Average Write Rate: 1,522 KB/s (10.1x) - Maximum Write Rate: 1,544 KB/s (10.3x) I 16:11:18 Cycling Tray before Verify... W 16:11:23 Waiting for device to become ready... 
I 16:11:47 Device Ready! E 16:11:47 CompareImageFileLayouts Failed! - Session Count Not Equal (1/0) E 16:11:47 Verify Failed! - Reason: Layouts do not match. I 16:11:57 Close Request Acknowledged I 16:11:57 Closing Down... I 16:11:57 Shutting down SPTI... I 16:11:57 ImgBurn closed!

    Read the article

  • How do I determine if my controller is in IDE or AHCI mode in Linux?

    - by philcolbourn
    I have an old MacBook Pro 4,1 (early 2008) - but I suspect an answer would apply to many MacBook Pros. It has an Intel IDE/SATA controller (ICH8M/ICH8M-E). I have installed a patched MBR that is supposed to put my controller into AHCI mode. It does this by setting some controller port value that I don't understand. This seems to work as I get this from lspci: 00:1f.1 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03) 00:1f.2 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 03) Now most, perhaps all, sites that provide a solution (enabling AHCI) suggest that after a sleep/wake cycle that a controller will revert to IDE mode due to how Apple support Windows. They recommend disabling sleep. From author of patchedcode.bin I think Enabling AHCI for Windows on MacBooks NB: I do not have bootcamp installed and I do not have Windows installed. Is there a way to prove that my controller is in IDE or AHCI mode? Background Data Using patchedcode.bin MBR I get this in syslog: Jun 12 22:33:22 max kernel: [ 1.860955] ahci 0000:00:1f.2: version 3.0 Jun 12 22:33:22 max kernel: [ 1.861052] ahci 0000:00:1f.2: irq 45 for MSI/MSI-X Jun 12 22:33:22 max kernel: [ 1.861117] ahci 0000:00:1f.2: AHCI 0001.0100 32 slots 3 ports 1.5 Gbps 0x1 impl SATA mode Jun 12 22:33:22 max kernel: [ 1.861120] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ccc ems Jun 12 22:33:22 max kernel: [ 1.861130] ahci 0000:00:1f.2: setting latency timer to 64 Jun 12 22:33:22 max kernel: [ 1.880880] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) Jun 12 22:33:22 max kernel: [ 1.880983] scsi2 : ahci Jun 12 22:33:22 max kernel: [ 1.884552] scsi3 : ahci Jun 12 22:33:22 max kernel: [ 1.886932] scsi4 : ahci Jun 12 22:33:22 max kernel: [ 1.886998] ata3: SATA max UDMA/133 abar m2048@0xdb504000 port 0xdb504100 irq 45 Jun 12 22:33:22 max kernel: [ 1.887000] ata4: DUMMY Jun 12 22:33:22 max kernel: [ 1.887002] ata5: DUMMY Jun 12 22:33:22 max kernel: [ 2.204103] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 12 22:33:22 max kernel: [ 2.204656] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100 Jun 12 22:33:22 max kernel: [ 2.204662] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 31/32), AA Jun 12 22:33:22 max kernel: [ 2.205324] ata3.00: configured for UDMA/100 Jun 12 22:33:22 max kernel: [ 2.205554] scsi 2:0:0:0: Direct-Access ATA FUJITSU MHY2200B 0081 PQ: 0 ANSI: 5 Using my original MBR I get this from syslog: Jun 13 18:07:13 max kernel: [ 0.622861] ata_piix 0000:00:1f.1: version 2.13 Jun 13 18:07:13 max kernel: [ 0.622869] ata_piix 0000:00:1f.1: power state changed by ACPI to D0 Jun 13 18:07:13 max kernel: [ 0.622924] ata_piix 0000:00:1f.1: setting latency timer to 64 Jun 13 18:07:13 max kernel: [ 0.623339] scsi0 : ata_piix Jun 13 18:07:13 max kernel: [ 0.623730] scsi1 : ata_piix Jun 13 18:07:13 max kernel: [ 0.623765] ata1: PATA max UDMA/100 cmd 0x8108 ctl 0x811c bmdma 0x80e0 irq 21 Jun 13 18:07:13 max kernel: [ 0.623767] ata2: PATA max UDMA/100 cmd 0x8100 ctl 0x8118 bmdma 0x80e8 irq 21 Jun 13 18:07:13 max kernel: [ 0.623810] ata_piix 0000:00:1f.2: MAP [ Jun 13 18:07:13 max kernel: [ 0.623811] P0 -- -- -- ] Jun 13 18:07:13 max kernel: [ 0.623866] ata_piix 0000:00:1f.2: setting latency timer to 64 Jun 13 18:07:13 max kernel: [ 0.624241] scsi2 : ata_piix Jun 13 18:07:13 max kernel: [ 0.624558] scsi3 : ata_piix Jun 13 18:07:13 max kernel: [ 0.624862] ata3: SATA max UDMA/133 cmd 0x80f8 ctl 0x8114 bmdma 0x8020 irq 
18 Jun 13 18:07:13 max kernel: [ 0.624865] ata4: SATA max UDMA/133 cmd 0x80f0 ctl 0x8110 bmdma 0x8028 irq 18 Jun 13 18:07:13 max kernel: [ 1.208879] ata3.00: ATA-8: FUJITSU MHY2200BH, 0081000D, max UDMA/100 Jun 13 18:07:13 max kernel: [ 1.208882] ata3.00: 390721968 sectors, multi 16: LBA48 NCQ (depth 0/32) Jun 13 18:07:13 max kernel: [ 1.208961] ata1.01: ATAPI: MATSHITA DVD+/-RW UJ-867S, 1.00, max UDMA/33 Jun 13 18:07:13 max kernel: [ 1.216186] ata3.00: configured for UDMA/100 Jun 13 18:07:13 max kernel: [ 1.224396] ata1.01: configured for UDMA/33
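    Going by the two logs above, the patched MBR appears to be doing its job: with it, the kernel binds the ahci driver and reports "AHCI ... SATA mode", while with the original MBR everything goes through ata_piix. A quick way to re-check which driver is in use, for example after a sleep/wake cycle (a sketch; the PCI address 00:1f.2 is taken from the lspci output above):
    # which kernel driver is bound to the SATA controller right now?
    lspci -k -s 00:1f.2
    # ahci vs ata_piix, per SCSI host
    cat /sys/class/scsi_host/host*/proc_name
    dmesg | grep -iE 'ahci|ata_piix'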

    Read the article
