Search Results

Search found 11003 results on 441 pages for 'usb storage'.

  • Windows Server 2008 R2 SP1 Drivers Missing Would Not Install?

    - by bleepzter
    I am building my dev machine with WS2008R2SP1. I consider Windows Server the best development environment for C#, since I tend to focus on server-centric (integration) applications. The desktop flavor of Windows (7) doesn't play nicely with things like Commerce Server or BizTalk, so I like to stick to the environment I develop for. Previously I developed inside VMs, but I've found that to be super inefficient and hard on the laptops (I've gone through two of them in six months).

    The problem is that I have multiple devices that Windows does not want to recognize. My machine is a Dell Precision M4500: Intel Core i7-Q740, 1TB HDD, 8GB RAM, Dell re-branded Broadcom DW1501 802.11n half-mini card, Dell re-branded Broadcom DW375 Bluetooth module, Intel 82577LM Gigabit network connection, NVidia Quadro FX1800 graphics. The devices in question are the Dell re-branded Broadcom network and Bluetooth adapters:

    Broadcom USH: USB\VID_0A5C&PID_5800&REV_0101&MI_00, USB\VID_0A5C&PID_5800&MI_00
    DW375 Bluetooth Module: USB\VID_413C&PID_8187&REV_0517, USB\VID_413C&PID_8187

    When I run the Broadcom installers I get "Operating System not supported", which I think is a big oversight on Broadcom's part. Why check for the system version string? UGHGHGH. Moreover, if I try to manually force the driver in Windows, I get an error:

    Driver Management concluded the process to install driver FileRepository\btwampsecfl.inf_amd64_neutral_d8fc2b85d035ed47\btwampsecfl.inf for Device Instance ID USB\VID_0A5C&PID_5800&MI_00\7&66DE6C9&0&0000 with the following status: 0xe0000217.

    - or -

    Driver Management concluded the process to install driver FileRepository\btwampfl.inf_amd64_neutral_d4c4acf036c61299\btwampfl.inf for Device Instance ID USB\VID_413C&PID_8187\90004EEEF5A6 with the following status: 0xe0000217.

    I googled the 0xe0000217 error code and it means "Bad service install section in the driver inf file". Any ideas on how to fix this? I also tried the post by BetaIQ on the MSDN Forum, but unfortunately the links to the driver package included in that post were dead :(

    PS. On a side note, I also do mobile development for Android, iOS, Windows Phone, and BB, so having Bluetooth is quite useful with mobile devices.

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR)

    A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. Then there's the backup server: a Dell R715 with 2x 16-core AMD 62xx CPUs and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards and an Intel X520-SR dual-port 10GE NIC. We were also sold Commvault Backup (non-NDMP).

    Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 Server, runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), and the rated maximum write speed on the tape devices is 140MB/s per drive, so that's 492GB/hr (or double it, for the total throughput).

    So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that 343GB/hr is a theoretical maximum for read performance on the NAS; then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, so out of experimentation I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**"

    #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread and stream that to a tape device, whilst reading from some other directory with the other thread and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

    Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
    Setting Prefer Mounted Volumes = no in the Job Definition
    Setting multiple devices in the Autochanger resource

    The documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution...
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }
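
    One way to sanity-check the hardware side of this, independently of Bacula, is to drive both tape drives at once from the shell. A rough sketch, assuming the device paths from the configuration above (/dev/sg4 changer, /dev/nst0 and /dev/nst1 drives) and scratch tapes in slots 1 and 2; slot and drive numbers are examples only:

      # List slots and both drive indexes as the changer sees them
      mtx -f /dev/sg4 status
      mtx -f /dev/sg4 load 1 0        # slot 1 -> drive 0 (/dev/nst0)
      mtx -f /dev/sg4 load 2 1        # slot 2 -> drive 1 (/dev/nst1)
      mt -f /dev/nst0 status
      mt -f /dev/nst1 status
      # Stream to both drives concurrently (SCRATCH TAPES ONLY - this overwrites them)
      dd if=/dev/zero of=/dev/nst0 bs=256k count=40000 &
      dd if=/dev/zero of=/dev/nst1 bs=256k count=40000 &
      wait
      # Bacula's own tape exerciser, one drive at a time ("test" at the prompt)
      btape -c /etc/bacula/bacula-sd.conf Drive-1

    If both dd streams run at full speed in parallel, the bottleneck is in the Bacula configuration rather than the changer or the FC path.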

  • Does Hyper-V support SCSI Pass-through discs in a Server 2003 R2 VM?

    - by Peter Bernier
    I'm running into some difficulties getting pass-through disks to be accessible to a Server 2003 R2 virtual machine under Hyper-V.

    Host OS: Server 2008 R2 full w/Hyper-V role
    Guest OS: Server 2003 R2 (Windows Home Server)

    The guest's OS disk is a pass-through disk on the IDE controller (not the best solution, but I can live with it). My storage disks will be pass-through disks on the SCSI controller. I'm able to see all of the disks that I'll be using for the VM on the host without issue. The problem is that I can't get the guest OS to 'see' the storage drives (as pass-through disks on the SCSI controller). Here's what I'm doing:

    On the host, the storage drive is set to 'Offline', just like the OS disk (this is required for pass-through to work).
    In the VM, the storage drive is on the SCSI controller.
    Hyper-V Integration Tools are installed in the guest.

    That's as far as I'm able to get. I don't see the drive in Computer Management or in Windows Explorer (I've tried with an unformatted disk, as well as after formatting a partition). I am able to see a removable device that lists the disk's model number in the guest, but I can't seem to access the storage. (I get an entry in Device Manager that needs drivers, but nothing on the Integration Tools disc works.)

    Trouble-shooting steps I've tried:

    If I put the pass-through drive on the IDE controller, I can see it in the guest.
    If I put the storage drive 'Online' in the host and create a VHD on it on the SCSI controller, I can see it in the guest.

    I suppose I could create a fixed-size VHD that consumes the entire disk, but I'd rather not have that overhead. I've also extracted the contents of the Integration Tools drivers (x86 and amd64) and tried pointing the disk controller to each of those, with no luck. Can anyone offer suggestions as to how I can get this to work properly?

  • I flashed my DS4700 with a 7 series firmware, now my DS4300 cannot read the disks I moved to that lo

    - by Daniel Hoeving
    In preparation for adding a number of 1Tb SATA disks to our DS4700, I flashed the controller firmware from a 6-series (which only supports logical drives up to 2Tb) to a 7-series (which supports logical drives larger than 2Tb). Attached to this DS4700 was an EXP710 expansion drawer that we had planned to migrate out to our co-location to alleviate the storage issues we were having there. Unfortunately these two projects were planned in isolation from one another, so I was at the time unaware of the issue this would cause.

    Prior to migrating the drawer I was reading the "IBM TotalStorage DS4000 EXP700 and EXP710 Storage Expansion Enclosures Installation, User's, and Maintenance Guide" and discovered this:

    Controller firmware 6.xx or earlier has a different metadata (DACstore) data structure than controller firmware 7.xx.xx.xx. Metadata consists of the array and logical drive configuration data. These two metadata data structures are not interchangeable. When powered up and in Optimal state, the storage subsystem with controller firmware level 7.xx.xx.xx can convert the metadata from the drives configured in storage subsystems with controller firmware level 6.xx or earlier to the controller firmware level 7.xx.xx.xx metadata data structure. However, the storage subsystem with controller firmware level 6.xx or earlier cannot read the metadata from the drives configured in storage subsystems with controller firmware level 7.xx.xx.xx or later.

    I had assumed that if I deleted the logical drives and array information on the EXP710 prior to migrating it to the DS4300 (6.60.22 firmware) this would satisfy the above; unfortunately I was wrong. So my question is: a) is it possible to restore the DACstore information to its factory settings, b) what tool(s) would I use to accomplish this, or c) is this a lost cause?

    Daniel.

  • Creating Windows 8.1 system image error

    - by Random
    I'm getting a "not enough space" error when trying to create a system image on a USB hard drive. Detailed error:

    ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f) Insufficient storage available to create either the shadow copy storage file or other shadow copy data. Blah, blah... There is not enough disk space to create the volume shadow copy on the storage location. Make sure that, for all volumes to be backed up, the minimum required disk space for shadow copy creation is available. This applies to both the backup storage destination and volumes included in the backup. Minimum requirement: For volumes less than 500 megabytes, the minimum is 50 megabytes of free space. For volumes more than 500 megabytes, the minimum is 320 megabytes of free space. Recommended: At least 1 gigabyte of free disk space on each volume if volume size is more than 1 gigabyte. ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f) Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

    I tried both PowerShell:

    wbAdmin start backup -backupTarget:E: -include:C: -allCritical -quiet

    and the File History button in Control Panel. Clearly both the EFI and Windows Recovery Environment partitions don't meet the requirements coming from the System Image tool (pic below). On top of that, all system partitions are now shown as 100% free in Disk Management, which is disturbing but far from the actual state.

    My question is: how do I create a system image in Windows 8.1?

  • Linux Sound Drivers - Recompiling the kernel with the ehci-hcd module

    - by jhome
    I just installed Linux Mint and I am trying to get my USB soundcard to work. I am completely new to Linux, so I have only been learning about the important parts of the OS this afternoon. No one I know has compiled a kernel, so I can't ask anyone. I am following the instructions on this site: http://wiki.ubuntuusers.de/Benutzer/BigMc#Final-setup-of-US-122L-US-144

    "Recompile your kernel with ehci-hcd as a module: Instructions for Ubuntu (German). Change "Device Drivers - USB support - EHCI HCD (USB 2.0) support" to "M" when configuring. Make sure you also install the kernel headers."

    I don't know how to do any of these processes. Someone mentioned a package manager. Is recompiling, in layman's terms, basically reinstallation? Where do I go to change the device drivers? Cheers

    Edit: I am attempting to use my Tascam US-144 on Linux Mint. Apparently, to use it properly it has to regress to the functionality of a former unit (US-122), so according to the instructions a USB port has to be USB 1.0 rather than USB 2.0. I've tried using that ndis** programme for wireless drivers but with no success.
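
    For reference, the generic way to rebuild a kernel with EHCI support as a module looks roughly like the sketch below. This is an outline rather than the exact steps from the linked German guide; package names, the source tarball name, and the menuconfig path are assumptions that may differ slightly between Mint releases:

      # Toolchain plus the source package for the distribution kernel
      sudo apt-get install build-essential libncurses5-dev linux-source
      cd /usr/src
      tar xjf linux-source-*.tar.bz2 && cd linux-source-*
      cp /boot/config-$(uname -r) .config   # start from the running kernel's configuration
      make menuconfig                       # Device Drivers -> USB support -> EHCI HCD (USB 2.0) support -> M
      make -j"$(nproc)"                     # build kernel and modules
      sudo make modules_install
      sudo make install                     # install the kernel and update the bootloader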

  • printing parallel over ethernet cable

    - by Crudler
    I have a bit of an interesting challenge :) I have a machine with a parallel printer output, and I want it to print instead to a printer in a different room. I know that parallel isn't great over big distances. I found this: http://www.amazon.com/over-Cat5-Extension-Cable-Adapter/dp/B002WJ9S6Y%3FSubscriptionId%3DAKIAINHICTCYYZGJWT4Q%26tag%3Dusbprintercables.net-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB002WJ9S6Y which will let me connect over Cat5, but it's USB to Cat5. My machine can only output on parallel (it's not a computer), so what I was thinking of getting is a parallel (F) to USB adapter and a USB to parallel (M) adapter for either side, i.e.:

    machine - parallel - USB - Cat5 - USB - parallel - printer

    It just seems a bit messy :) Suggestions? Another thing I would like to try is to get rid of the old-school parallel printer and instead use a network-based multi-function. Would this be possible? I.e.:

    machine - parallel - USB - Cat5 - ethernet print server - network printer

    This might be rougher because the machine cannot "know" that we are using a network printer. It can ONLY print to LPT1. Thanks!

  • Weird PCI bug: lots of missed packets, or data comes in "bursts"

    - by Thomas O
    I have an ABIT KN9 motherboard. It has one PCI-e x16 slot, three PCI-e x4 slots and two legacy PCI slots. My problem is with the legacy PCI (which I shall just call "PCI"). I currently have an Nvidia GeForce 8600 GT (a low-end card) installed in the x16 slot and a TV card in PCI #1; the x4 slots are unused, as is PCI #2. I plan to upgrade the graphics card soon; the current card was spare. I sometimes install a USB expander in PCI #2, but it causes a lot of problems - see below.

    The problem is under Linux (Ubuntu 10.10, Linux 2.6.35-22-generic), but probably under all operating systems. I have not yet been able to test Windows, but I suspect it will do the same, as the problems occur on the BIOS/POST side too; e.g. when using a USB keyboard on the expander, the keyboard will not work at all.

    PCI has an enormous delay, and packets arrive in large chunks. For example, when using the USB expander, my USB mouse lags and jumps in large steps every second or so, while using the motherboard USB does not present this problem. My TV card will only do one or two frames per second, and the program (xawtv) usually times out and crashes. In dmesg, I'm getting messages like:

    bttv0: timeout: drop=74, irq=154/100476, risc=31f6256c, bits: VSYNC HSYNC OFLOW RISCI

    for my TV card, and similar timeout issues for my USB expander with a mouse. I received the motherboard, processor and RAM second-hand and have only just got around to building it, so I don't know if this problem has always existed, or if it's a result of my setup. If anyone has any hints or solutions it would be appreciated - this is kind of a show-stopper for me.
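
    A few generic diagnostics that are often useful for legacy-PCI latency symptoms like this, offered only as a starting point: check how the PCI bridge and the cards share interrupts, and look at the PCI latency timers. The bus address and latency value below are placeholders; substitute whatever lspci reports for the TV card or USB expander.

      # Which devices share interrupt lines with the TV card / USB expander?
      cat /proc/interrupts
      # Dump per-device latency timers and bus status
      sudo lspci -vv | grep -E "^[0-9a-f]|Latency"
      # Example only: raise the latency timer for the device at 05:00.0
      sudo setpci -v -s 05:00.0 latency_timer=b0
      # Correlate bttv timeouts with other kernel events
      dmesg | grep -i bttv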

  • Getting Dell E6320 with I7 to work with 3 monitors at 1920x1080p x 3

    - by MadBoy
    I want to buy a Dell E6320, which comes with an Intel Core i7-2620M (2.70GHz, 4MB cache, dual core) with Intel HD Graphics 3000. The laptop will come with a docking station. I want to connect 3 monitors to that docking station so that working at home would give me some additional boost. The docking station will allow me to connect only 2 monitors, so I'm looking at the following other options:

    Matrox TRIPLEHEAD2GO DIGITAL Edition or TRIPLEHEAD2GO DP Edition. But reading the Matrox support page, the Intel GPU can't run the highest resolution with 3 monitors connected, and it even gets worse since it seems the monitors would have to be able to work at 50Hz. Also, I'm not sure, but it seems that Matrox doesn't present the monitors as 3 separate monitors but simply as one big space (which is a bit the opposite of what I need).

    Buy 2, or maybe just 1, USB-based monitor. But that would mean having 1 or 2 monitors different from the main one, unless I buy 3 USB-based monitors, which would mean more money to spend. Also, I found only a couple of models and most of them require USB 3.0 and no other cables to plug in (nice but costly - I couldn't find a decent monitor with only USB for sending the signal and power connected normally). And the docking station has only one USB 3.0 port. Can I use a hub and still get it to work?

    Find some converters from digital to USB (I think DisplayLink does some?).

    Buy a different laptop, but what kind? I need it to be an i7, small (13"), fast and lightweight. At the same time it needs a docking station that I can use at home to connect 3 external monitors.

    Some other suggested solution...

    Edit: I need 3 monitors for work, in terms of coding in Visual Studio or having Word/Excel/Outlook open. Nothing fancy. Maybe some movie once in a while.

  • How to install a desktop environment onto Ubuntu Server -- but without internet access or a CDROM?

    - by James
    I am playing around with a computer which has no CDROM drive or internet access, and I have installed Ubuntu Server onto it. I have that all up and running nicely, but now I'd like to install Xfce, GNOME or something similar so I can load up a desktop environment from the command line if I wish. Obviously, with internet access or a CDROM this would be a simple task of using apt-get and it finding and retrieving the packages for me, I assume, but I have neither.

    I do however have a USB drive, and I have used Unetbootin to make it into a bootable drive with the Ubuntu Server disk image files on there. I have mounted the USB drive to /media/usb0 and tried the command "sudo apt-cdrom add -d /media/usb0" to get apt to recognise the USB drive as an "Ubuntu CD" - a source of package files - but apt-get doesn't seem to be finding Xfce. I try "sudo apt-get install xfce" and "sudo apt-get install xfce4" but neither finds the package. I would prefer Xfce, but GNOME would be OK too.

    My question is, am I doing something wrong? I figured that the Ubuntu Server disk (or rather, my Ubuntu Server USB drive) might not have any desktop environment packages on there, so I tried the Xubuntu Desktop disk too (again, from my USB drive). I tried "sudo apt-get install xubuntu-desktop" but it couldn't find the package - even though it is listed under the /casper/ directory in some MANIFEST file. Anyone see where I'm going wrong? Maybe apt-get install is looking somewhere other than my USB drive? Maybe my commands are wrong? Maybe the disks don't even have the desktop environments on!? Thanks in advance guys, any input would be much appreciated. Cheers - James
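
    One commonly used offline approach, sketched here under the assumption that another Ubuntu machine of the same release and architecture (and with internet access) is available: resolve and download the needed .deb files there, carry them over on the USB drive, and install them locally with dpkg. Package names, the device node, and paths below are placeholders, and the donor machine should not already have the desktop metapackage installed (only missing packages are listed):

      # On the internet-connected machine of the SAME release/architecture:
      mkdir ~/xfce-debs && cd ~/xfce-debs
      apt-get --print-uris --yes install xfce4 \
          | grep -o "http[^']*\.deb" > urls.txt   # extract the .deb download URLs
      wget -i urls.txt
      # Copy ~/xfce-debs onto the USB stick, then on the offline server:
      sudo mount /dev/sdb1 /media/usb0            # adjust the device node for the stick
      cd /media/usb0/xfce-debs
      sudo dpkg -i *.deb                          # rerun once if dpkg complains about ordering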

  • Windows 7 hangs after going into sleep a second time

    - by Brian Stephenson
    I've searched everywhere on Google and can't figure out why this is happening, so I decided to ask here to see if anyone has a problem like this. Like it says in the title, whenever I sleep ONCE I'm able to wake the system, but going back to sleep again AFTER waking up for the first time results in it hanging with no input and no output, with the fan spinning as fast as possible and a lot of heat being spewed out by the fan as well.

    I've tried various things like setting all USB Root Hubs to not get switched off for power saving, disabling USB selective suspend, disabling PCI-e link state power management, and even unplugging ALL USB devices, and it won't wake up after the second attempt. I've even waited up to a full hour with the CPU fan spinning loudly and it's still stuck trying to wake up. The only USB devices I use are a Microsoft USB Comfort Curve Keyboard 2000 (IntelliType Pro) and a generic HID-compliant mouse from Creative, model number OMC90S "CREATIVE MOUSE OPTICAL LITE". My other devices like external drives and controllers are unplugged when I'm not using them, as having too many USB devices plugged in at a time causes a deadlock on almost all of the ports I have.

    Here are my system specifications (most of these are from CPU-Z):

    Brand: Gateway DX4300-19
    Mainboard: Gateway RS780
    Chipset: AMD 780G Rev 00
    Southbridge: AMD SB700 Rev 00
    LPCIO: ITE IT8718
    BIOS: American Megatrends Inc. ver P01-A4 09/15/2009
    CPU: AMD Phenom II X4 810 at 2.60 GHz
    RAM: 8.0 GB DDR2 Dual Channel Ganged Mode at 400 MHz
    GPU: ATI Radeon HD3200 Graphics Integrated - RS780
    OS: Windows 7 Home Premium x64 OEM (Acer Group)
    HDD: WDC WD10EADS-22M2B0 1.0 TB (Western Digital Green Caviar)

    My BIOS has absolutely no control over whether the sleep mode is S1 or S3, so I can't check these settings or even change them. Hybrid sleep is also disabled. I can successfully go into hibernation and wake from hibernation, but this is painfully slow due to a hard drive problem I'm having with this "Green Drive" (hibernation takes over ~3 minutes to complete). Any help would be appreciated, thanks.

  • Debian Bluetooth headphones not working

    - by cYrus
    Hardware: headphones, Bluetooth dongle (maybe not exactly these models).

    Setup: I tried to follow some guides; here's what I've done so far.

    Install the software:

    sudo apt-get install bluez-utils bluez-alsa

    Reboot (just to be sure):

    $ dmesg | grep -i bluetooth
    [ 20.268212] Bluetooth: Core ver 2.16
    [ 20.268230] Bluetooth: HCI device and connection manager initialized
    [ 20.268233] Bluetooth: HCI socket layer initialized
    [ 20.268235] Bluetooth: L2CAP socket layer initialized
    [ 20.268239] Bluetooth: SCO socket layer initialized
    [ 20.284685] Bluetooth: RFCOMM TTY layer initialized
    [ 20.284692] Bluetooth: RFCOMM socket layer initialized
    [ 20.284693] Bluetooth: RFCOMM ver 1.11
    [ 20.335375] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
    [ 20.335378] Bluetooth: BNEP filters: protocol multicast

    The daemon is running:

    $ /etc/init.d/bluetooth status
    [ ok ] bluetooth is running.

    Plug in the dongle:

    $ dmesg | tail
    [...]
    [23108.352034] usb 5-2: new full-speed USB device number 2 using ohci_hcd
    [23108.571131] usb 5-2: New USB device found, idVendor=0a12, idProduct=0001
    [23108.571136] usb 5-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
    [23108.629042] usbcore: registered new interface driver btusb

    Put the headphones in pairing mode and try scanning:

    $ hcitool scan
    Scanning ...

    Found nothing. What's next? What should I try? I'll update this question as soon as you provide me hints.
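
    Assuming the scan eventually finds the headset, the usual flow with the BlueZ 4.x tooling installed above looks roughly like the sketch below. The MAC address is a placeholder and the ALSA device name "btheadset" is just an example:

      # Make sure the adapter is up, then scan again with the headphones in pairing mode
      sudo hciconfig hci0 up
      hcitool dev                               # should list hci0 and its address
      hcitool scan
      # Pair and trust the headset (replace the placeholder address)
      sudo bluez-simple-agent hci0 00:11:22:33:44:55
      sudo bluez-test-device trusted 00:11:22:33:44:55 yes
      # ~/.asoundrc entry for the bluez-alsa plugin, then test playback:
      #   pcm.btheadset {
      #       type plug
      #       slave.pcm { type bluetooth; device "00:11:22:33:44:55"; profile "auto" }
      #   }
      aplay -D btheadset /usr/share/sounds/alsa/Front_Center.wav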

  • Backtrack, Wi-Fi not working

    - by hradecek
    I've installed Backtrack 5R3 KDE, and I realized that my wireless is not working, but wired is working fine. Here's the lshw output:

    *-network
        description: Ethernet interface
        product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
        vendor: Realtek Semiconductor Co., Ltd.
        physical id: 0
        bus info: pci@0000:02:00.0
        logical name: eth0
        version: 05
        serial: 04:7d:7b:b7:46:f8
        size: 100MB/s
        capacity: 100MB/s
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100MB/s
        resources: irq:42 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff

    lspci output:

    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
    00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    00:14.0 USB Controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
    00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
    00:1a.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
    00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
    00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
    00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4)
    00:1d.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
    00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
    00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA AHCI Controller (rev 04)
    00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
    02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
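
    Since the lshw and lspci output above only shows the wired Realtek NIC, a first step is to establish whether the wireless adapter is visible to the system at all: it may be a USB device, blocked by rfkill, or missing firmware. A generic checklist (all standard tools on Backtrack):

      # Is the wireless NIC on the PCI bus or on USB?
      lspci -nn | grep -i -E "network|wireless|wlan"
      lsusb
      # Any wireless interface registered with the kernel?
      iwconfig
      ip link show
      # Radio soft- or hard-blocked?
      rfkill list all
      # Driver or firmware complaints at boot?
      dmesg | grep -i -E "firmware|wlan|wifi|80211"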

  • ModSecurity compile error on nginx

    - by user146481
    I'm trying to install ModSecurity on nginx with the following instructions : wget https://github.com/SpiderLabs/ModSecurity/archive/master.zip unzip master cd ModSecurity-master ./autogen.sh ./configure --enable-standalone-module And i got the following error : Checking plataform... Identified as Linux configure: looking for Apache module support via DSO through APXS configure: error: couldn't find APXS After installing httpd-devel httpd-devel and running ./configure --enable-standalone-module --with-apxs=/usr/sbin/apxs ; make modsecurity compile workes but still have another error of nginx compilation : ./configure --add-module=/usr/local/src/john/ModSecurity-master/nginx/modsecurity and i got this error : gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include/apache2 -I /usr/include/apr-1.0 -I /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone -I /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2 -I /usr/include/libxml2 -I objs -I src/http -I src/http/modules -I src/mail \ -o objs/addon/modsecurity/ngx_http_modsecurity.o \ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:20:23: error: http_core.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:21:26: error: http_request.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:37, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_logging.h:41:23: error: apr_pools.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:38, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:26:25: error: apr_general.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:27:24: error: apr_tables.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:38, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:44: error: expected specifier-qualifier-list before ‘apr_array_header_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:65: error: expected specifier-qualifier-list before ‘apr_array_header_t’ cc1: warnings being treated as errors /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:135: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:135: error: type defaults to ‘int’ in 
declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:135: error: expected ‘,’ or ‘;’ before ‘multipart_cleanup’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:137: error: expected declaration specifiers or ‘...’ before ‘apr_table_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:39, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_pcre.h:41: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_pcre.h:45: error: expected ‘)’ before ‘*’ token In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:19:27: error: apr_file_info.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:41, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:21: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:21: error: type defaults to ‘int’ in declaration of ‘apr_table_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:21: error: expected ‘,’ or ‘;’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:24: error: expected declaration specifiers or ‘...’ before ‘apr_table_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:20:19: error: httpd.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:21:24: error: ap_release.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:24:26: error: apr_optional.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: expected declaration specifiers or ‘...’ before ‘modsec_register_tfn’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: expected declaration specifiers or ‘...’ before ‘(’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: expected declaration specifiers or ‘...’ before ‘modsec_register_operator’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: expected declaration specifiers or ‘...’ before ‘(’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:32: error: expected declaration specifiers or ‘...’ before ‘modsec_register_variable’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:33: error: expected declaration specifiers or ‘...’ before ‘(’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:32: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:36: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: expected declaration specifiers or ‘...’ before ‘modsec_register_reqbody_processor’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: expected declaration specifiers or ‘...’ before ‘(’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:56: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:58: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:65: error: data definition has no type or storage class 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:65: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:65: error: expected ‘,’ or ‘;’ before ‘input_filter’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:68: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:68: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:68: error: expected ‘,’ or ‘;’ before ‘output_filter’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:70: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:70: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:70: error: expected ‘,’ or ‘;’ before ‘read_request_body’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:77: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:77: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:77: error: expected ‘,’ or ‘;’ before ‘send_error_bucket’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:83: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:85: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:93: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:95: error: expected ‘)’ before ‘*’ token In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:43:25: error: http_config.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:59: error: expected declaration specifiers or ‘...’ before ‘apr_array_header_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:61: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:61: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:61: error: expected ‘,’ or ‘;’ before ‘collection_original_setvar’ 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:63: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:67: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:70: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:75: error: expected declaration specifiers or ‘...’ before ‘apr_array_header_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:76: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:86: error: expected specifier-qualifier-list before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:94: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:101: error: expected specifier-qualifier-list before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:111: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:111: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:111: error: expected ‘,’ or ‘;’ before ‘msre_ruleset_process_phase’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: error: expected ‘,’ or ‘;’ before ‘msre_ruleset_process_phase_internal’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:115: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:143: error: expected specifier-qualifier-list before ‘apr_ipsubnet_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:149: error: expected specifier-qualifier-list before ‘apr_array_header_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:189: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:219: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:235: error: expected specifier-qualifier-list before ‘fn_tfn_execute_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:239: error: expected declaration specifiers or ‘...’ before ‘fn_tfn_execute_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:258: error: expected declaration specifiers or ‘...’ before ‘apr_table_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:258: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:285: error: expected specifier-qualifier-list before ‘apr_table_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: expected declaration specifiers or ‘...’ before ‘*’ token 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: ‘apr_status_t’ declared as function returning a function /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: ‘apr_status_t’ redeclared as different kind of symbol /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: note: previous declaration of ‘apr_status_t’ was here /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:342: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:342: error: ‘fn_action_execute_t’ declared as function returning a function /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:369: error: expected specifier-qualifier-list before ‘fn_action_init_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:399: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:403: error: expected declaration specifiers or ‘...’ before ‘apr_array_header_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:403: error: ‘msre_parse_vars’ declared as function returning a function /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:415: error: expected specifier-qualifier-list before ‘apr_size_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:54: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:62: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:66: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:68: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:70: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:74: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:76: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:82: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:88: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:90: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:92: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:100: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:102: error: expected ‘)’ before ‘*’ token 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:104: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:106: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:108: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:110: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:112: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:114: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:128: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:132: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:136: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:140: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:140: error: type defaults to ‘int’ in declaration of ‘apr_fileperms_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:140: error: expected ‘,’ or ‘;’ before ‘mode2fileperms’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:144: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:41, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_xml.h:43: error: ‘xml_cleanup’ declared as function returning a function In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_geo.h:38:25: error: apr_file_io.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_geo.h:58: error: expected specifier-qualifier-list before ‘apr_file_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:43, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_gsb.h:22:22: error: apr_hash.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:43, from 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_gsb.h:25: error: expected specifier-qualifier-list before ‘apr_file_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:44, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_unicode.h:25: error: expected specifier-qualifier-list before ‘apr_file_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:46, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_crypt.h:34: error: expected ‘)’ before ‘*’ token In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:48:23: error: ap_config.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:49:21: error: apr_md5.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:50:25: error: apr_strings.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:54:22: error: http_log.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:938: error: ‘ngx_http_modsecurity_ctx_t’ has no member named ‘req’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:938: error: too many arguments to function ‘ConvertNgxStringToUTF8’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:942: error: ‘ngx_http_modsecurity_ctx_t’ has no member named ‘req’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:944: error: ‘ngx_http_modsecurity_ctx_t’ has no member named ‘req’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:952: error: ‘modsecurity_read_body_cb’ undeclared (first use in this function) make[1]: *** [objs/addon/modsecurity/ngx_http_modsecurity.o] Error 1 make[1]: Leaving directory `/usr/local/src/john/nginx-1.2.5' make: *** [build] Error 2 Note: I'm using nginx as the only web server and I do not have Apache installed. OS: CentOS 6, 64-bit. How can I solve this problem, and is there an easier way to install ModSecurity with nginx?
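The "No such file or directory" errors for apr_file_io.h, apr_hash.h, ap_config.h, apr_md5.h and friends mean the APR and Apache development headers are not installed. ModSecurity's standalone/nginx module is compiled from the Apache module sources, so it needs those headers at build time even though httpd itself never runs. A hedged sketch of the usual fix on CentOS 6 (package names and paths are the common ones, not verified against this exact machine):

    # Install the build dependencies the standalone module expects.
    yum install gcc make automake autoconf libtool pcre-devel libxml2-devel \
        curl-devel httpd-devel apr-devel apr-util-devel

    # Rebuild the standalone module, then point the nginx build at it.
    cd /usr/local/src/john/ModSecurity-master
    ./autogen.sh
    ./configure --enable-standalone-module
    make

    cd /usr/local/src/john/nginx-1.2.5
    ./configure --add-module=/usr/local/src/john/ModSecurity-master/nginx/modsecurity
    make && make install

The httpd-devel package is only needed to compile against; Apache does not have to be enabled or running for the resulting nginx module to work.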

    Read the article

  • iSCSI timeouts under high load

    - by Antonio
    I have two servers connected via Gigabit Ethernet. One is the iSCSI target, the other is the initiator. When I run mkfs.ext4 on the initiator, after a while disk I/O slows down critically. On the target host I can see the following in syslog: Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668c 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668c 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668d 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668d 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668e 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668e 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 1196696 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 1196696 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119669e 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119669e 6 Sep 14 09:40:04 sh11 tgtd: abort_task_set(1139) found 119669f 0 Sep 14 09:40:04 sh11 tgtd: abort_cmd(1115) found 119669f 6 And the load average grows to 12 or even more: # uptime 12:37:00 up 23 days, 13:25, 1 user, load average: 12.00, 7.00, 4.00 CentOS 6.3 tgtd 1.0.24 Intel Pentium 4 2.4GHz 1 GB RAM 2 TB WD Caviar Green SATA 2.0 #lspci 00:00.0 Host bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE DRAM Controller/Host-Hub Interface (rev 02) 00:01.0 PCI bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE Host-to-AGP Bridge (rev 02) 00:1d.0 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801DB/DBM (ICH4/ICH4-M) USB2 EHCI Controller (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 82) 00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge (rev 02) 00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE Controller (rev 02) 00:1f.3 SMBus: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) SMBus Controller (rev 02) 00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 02) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV200 QW [Radeon 7500] 02:01.0 Ethernet controller: D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev 11) (rev 11) 02:02.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50) 02:03.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50) 02:04.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02) 02:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VE (CNR) Ethernet Controller (rev 82) Is there a way to tune the target host to avoid these timeouts?
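    The abort_task_set/abort_cmd lines are the target logging aborts sent by the initiator: commands sit in the queue longer than the initiator's SCSI timeout because a single green SATA disk behind an old Pentium 4 cannot drain the write burst mkfs generates. Most of the useful knobs are therefore on the initiator side. A hedged sketch with illustrative (not tested) values; sdX stands for whatever the iSCSI disk is called on the initiator:

      # On the initiator: give each SCSI command more time before the error
      # handler aborts it (default is 30 seconds):
      echo 120 > /sys/block/sdX/device/timeout

      # /etc/iscsi/iscsid.conf on the initiator: queue fewer commands against
      # the slow target and allow more time before the session is failed:
      node.session.timeo.replacement_timeout = 180
      node.session.cmds_max = 32
      node.session.queue_depth = 8

    The iscsid.conf values only apply after logging the session out and back in (iscsiadm -m node -u, then iscsiadm -m node -l).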

    Read the article

  • Failed to start up after upgrading software in Ubuntu 10.10

    - by Landy
    I've been running Ubuntu 10.10 on a physical x86-64 machine. Today Update Manager reminded me that there were some updates to install, and I confirmed the action. I should have read the update list, but I didn't; I can only remember there was an update for cups. After the upgrade, Update Manager required a restart, which I also confirmed. But after the restart, the computer can't start up. There are errors on the console: Begin: Running /scripts/init-premount ... done. Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done. [xxx]usb 1-8: new high speed USB device using ehci_hcd and address 3 [xxx]usb 2-1: new full speed USB device using ohci_hcd and address 2 [xxx]hub 2-1:1.0: USB hub found [xxx]hub 2-1:1.0: 4 ports detected [xxx]usb 2-1.1: new low speed USB device using ohci_hcd and address 3 Gave up waiting for root device. Common probles: - Boot args (cat /proc/cmdline) - Check rootdelay=(did the system wait long enough) - Check root= (did the system wait for the right device?) - Missing modules (cat /proc/modules; ls /dev) FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory ALERT! /dev/sda1 does not exist. Dropping to a shell! BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell(ash) Enter 'help' for a list of built-in commands. (initramfs)[cursor is here] At the moment, I can't type anything at the console; the keyboard doesn't work at all. What's wrong? How can I check the boot args or "root=" as suggested? How can I fix this issue? Thanks. =============== PS1: /dev/sda1 is type ext4 (rw,nosuid,nodev) PS2: /dev/sda1 can be mounted and accessed successfully under SUSE 11 SP1 x64.
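    The repeated "Could not load /lib/modules/2.6.35-22-generic/modules.dep" together with "ALERT! /dev/sda1 does not exist" usually means the initramfs for that kernel was left broken or incomplete by the interrupted update, so no storage (and possibly no USB keyboard) drivers get loaded. A common recovery path, sketched here on the assumption that the installed root filesystem really is /dev/sda1 as the PS notes say, is to boot an Ubuntu live CD/USB, chroot into the installed system and rebuild the initramfs:

      # From an Ubuntu live session (assumes the installed root is /dev/sda1):
      sudo mount /dev/sda1 /mnt
      sudo mount --bind /dev  /mnt/dev
      sudo mount --bind /proc /mnt/proc
      sudo mount --bind /sys  /mnt/sys
      sudo chroot /mnt

      # Inside the chroot: rebuild module dependencies and the initramfs for
      # the kernel that fails, then refresh GRUB.
      depmod -a 2.6.35-22-generic
      update-initramfs -u -k 2.6.35-22-generic
      update-grub
      exit
      sudo reboot

    If /lib/modules/2.6.35-22-generic turns out to be missing entirely, reinstalling the kernel package from inside the chroot (apt-get install --reinstall linux-image-2.6.35-22-generic) should put it back before the initramfs is regenerated.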

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers and even all the way back to IE8, so you can use them safely in your Web applications. SessionState and LocalStorage are easy The APIs that make up sessionState and localStorage are very simple. Both object feature the same API interface which  is a simple, string based key value store that has getItem, setItem, removeitem, clear and  key methods. The objects are also pseudo array objects and so can be iterated like an array with  a length property and you have array indexers to set and get values with. Basic usage  for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):// set var lastAccess = new Date().getTime(); if (sessionStorage) sessionStorage.setItem("myapp_time", lastAccess.toString()); // retrieve in another page or on a refresh var time = null; if (sessionStorage) time = sessionStorage.getItem("myapp_time"); if (time) time = new Date(time * 1); else time = new Date(); sessionState stores data that is browser session specific and that has a liftetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the lifetime of the data is permanently stored in the browsers storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store especially since other applications might be running on the same domain and also use the storage mechanisms. That said 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionState and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this: Plunker is an online HTML/JavaScript editor that lets you write and run Javascript code and similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact. 
The code to deal with the SessionStorage (and LocalStorage not shown here) in the example is isolated into a couple of wrapper methods to simplify the code: function getSessionCount() { var count = 0; if (sessionStorage) { var count = sessionStorage.getItem("ss_count"); count = !count ? 0 : count * 1; } $("#txtSession").val(count); return count; } function setSessionCount(count) { if (sessionStorage) sessionStorage.setItem("ss_count", count.toString()); } These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key,stringVal); Note that the value given to setItem and return by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable so it's quite possible to pass complex objects and store them into a single sessionStorage value:var user = { name: "Rick", id="ricks", level=8 } sessionStorage.setItem("app_user",JSON.stringify(user)); to retrieve it:var user = sessionStorage.getItem("app_user"); if (user) user = JSON.parse(user); Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab:   You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can both store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example. Why do I need Client-side Page Caching for Server Rendered HTML? I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection in some cases even filters conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow to see what I mean here. Scroll down a bit to look at a post or entry, drill in then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal. 
On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in there's no way back to that position. Essentially anytime you're actively browsing the items in the list, that's when state becomes important and if it's not handled the user experience can be really disrupting. Content Caching If you're building client centric SPA style applications this is a fairly easy to solve problem - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this. A real World Use Case Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was for mobile displays which work horribly with frames. So there's a somewhat mobile friendly interface to the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile  (or browse the base url with your browser width under 800px)   Here's what the mobile, non-frames view looks like:   As you can see this means that the list of classifieds posts now is a list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded, but rather there's a "Load Additional Listings"  entry at the button. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items are no longer there because the list was now rendering  as if it was the first page hit. The additional listings, and any filters, the selection of an item all were lost. Major Suckage! 
Using Client SessionStorage to cache Server Rendered Content To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and the overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned the server renders data from an ASP.NET MVC view. On the list page when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag, that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:@model MessageListViewModel @{ ViewBag.Title = "Classified Listing"; bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]); } <form method="post" action="@Url.Action("list")"> <div id="SizingContainer"> @if (!isBack) { @Html.Partial("List_CommandBar_Partial", Model) <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;"> @Html.Partial("List_Items_Partial", Model) @if (Model.RequireLoadEntry) { <div class="postitem loadpostitems" style="padding: 15px;"> <div id="LoadProgress" class="smallprogressright"></div> <div class="control-progress"> Load additional listings... </div> </div> } </div> } </div> </form> As you can see the query string triggers a conditional block that if set is simply not rendered. The content inside of #SizingContainer basically holds  the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you only cache the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result as both the list along with any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into a separate function because they are almost always used in several places.page.saveData = function(id) { if (!sessionStorage) return; var data = { id: id, scroll: $("#PostItemContainer").scrollTop(), html: $("#SizingContainer").html() }; sessionStorage.setItem("list_html",JSON.stringify(data)); }; page.restoreData = function() { if (!sessionStorage) return; var data = sessionStorage.getItem("list_html"); if (!data) return null; return JSON.parse(data); }; The data that is saved is an object which contains an ID which is the selected element when the user clicks and a scroll position. These two values are used to reset the scroll position when the data is used from the cache. 
Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's Ok. Since I'm using SessionStorage the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first though this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:$( function() { … var isBack = getUrlEncodedKey("back", location.href); if (isBack) { // remove the back key from URL setUrlEncodedKey("back", "", location.href); var data = page.restoreData(); // restore from sessionState if (!data) { // no data - force redisplay of the server side default list window.location = "list"; return; } $("#SizingContainer").html(data.html); var el = $(".postitem[data-id=" + data.id + "]"); $(".postitem").removeClass("highlight"); el.addClass("highlight"); $("#PostItemContainer").scrollTop(data.scroll); setTimeout(function() { el.removeClass("highlight"); }, 2500); } else if (window.noFrames) page.saveData(null); // save when page loads $("#SizingContainer").on("click", ".postitem", function() { var id = $(this).attr("data-id"); if (!id) return true; if (window.noFrames) page.saveData(id); var contentFrame = window.parent.frames["Content"]; if (contentFrame) contentFrame.location.href = "show/" + id; else window.location.href = "show/" + id; return false; }); … The code starts out by checking for the back query string flag which triggers restoring from the client cache. If cached the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data the it's loaded and the data.html data is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:$("#SizingContainer").html(data.html); It's that simple and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as storing that Id value in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements. 
The overall process of this save/restore process is pretty straight forward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or anybody who uses the non-frames view). Some things to watch out for As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware what type of content you are caching. The code above is all that's specific to cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to: Event Handling Logic Timing of manipulating DOM events Inline Script Code Bookmarking to the Cache Url when no cache exists Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign or it may not run at the time you think it would normally in the DOM rendering cycle. JavaScript Event Hookups The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:$("#btnSearch").click( function() {…}); that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:$("#SelectionContainer").on("click","#btnSearch", function() {…}); which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document bindings still work. For any cached content use deferred events. Timing of manipulating DOM Elements Along the same lines make sure that your DOM manipulation code follows the code that loads the cached content into the page so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line. Inline Script Code Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because MVC's limited 'object model' the only way to embed widget content into the page is through inline script. This code broken when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As the last bullet - it's a matter of timing. There's no good work around for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant to be primarily to serve mobile devices which actually support date input through the browser (unlike desktop browsers of which only WebKit seems to support it). Bookmarking Cached Urls When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server. 
In the Classifieds app I didn't render server side content so if the user comes to the page with back=True and there is no cached content I have to a have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and if not have a safe URL to go back to - in this case to the plain uncached list URL which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything. Don't use <body> to replace Content Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads. Specifically script tags and elements and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap just around the actual content you want to replace. In the app above the #SizingContainer is that container. Other Approaches The approach I've used for this application is kind of specific to the existing server rendered application we're running and so it's just one approach you can take with caching. However for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind: Using Partial HTML Rendering via AJAXInstead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching is made on the client in this case. Using JSON Data and Client RenderingThe hardcore client option is to do the whole UI SPA style and pull data from the server and then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch other than the initial check for the cache request. Of course if the app is a  full on SPA app, then caching may not be required even - the list could just stay in memory and be hidden and reactivated. I'm sure there are a number of other ways this can be handled as well especially using  AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs and it's important to look at all angles before deciding on any sort of caching solution in general. State of the Session SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features of content and data. 
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here… Resources: Overview of localStorage (also applies to sessionStorage), Web Storage Compatibility, Modernizr Test Suite. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in JavaScript, HTML5, ASP.NET, MVC

    Read the article

  • Samba4 [homes] share

    - by SambaDrivesMeCrazy
    I am having issues with the [homes] share. The OS is Ubuntu 12.04. I've installed Samba 4.0.3, bind9 DLZ, ntp, winbind, everything but the PAM modules, and did all the tests from https://wiki.samba.org/index.php/Samba_AD_DC_HOWTO. Running getent passwd and getent user works just fine. Creating a simple share works just fine too. I can manage the users, GPOs, and DNS from the Windows MMC snap-ins. I can join Windows XP, 7, and 8 to the domain and log on perfectly. I can change my passwords from Windows, etc. I could say that everything is fine and be happy :) buuuut, no, home directories do not work. Searching on here and on our good friend Google, I gathered that a simple [homes] read only = no path = /storage-server/users/ and mapping the user's home folder in dsa.msc to \\server-001\username or \\server-001\homes should get me a home share I could map for my user's home directory. But the snap-in gives me an error saying that it cannot create the home folder because the network name has not been found (rough translation from Portuguese). Also, running root@server-001:/storage-server/users# smbclient //server-001/test -Utest%'12345678' -c 'ls' Domain=[MYDOMAIN] OS=[Unix] Server=[Samba 4.0.3] tree connect failed: NT_STATUS_BAD_NETWORK_NAME The server name is fine; if I go for a simple share on the same server it opens just fine, and if I map the user's home directory to that simple share it works. What I want is to not have to manually create a new folder on Linux every time I create a new user on Windows. It looks like a permissions issue, but I can't find any documentation on this (yes, I've tried the man pages, but it's hard to tell with so many options in man smb.conf alone). My smb.conf right now looks like this (pretty simple, I know): # Global parameters [global] workgroup = MYDOMAIN realm = MYDOMAIN.LAN netbios name = SERVER-001 server role = active directory domain controller server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl, winbind, ntp_signd, kcc, dnsupdate [netlogon] path = /usr/local/samba/var/locks/sysvol/mydomain.lan/scripts read only = No [sysvol] path = /usr/local/samba/var/locks/sysvol read only = No [homes] read only = no path = /storage-server/users Folder permissions: /storage-server drwxr-xr-x 6 root root 4096 Fev 15 15:17 storage-server /storage-server/users drwxrwxrwx 6 root root 4096 Fev 18 17:05 users/ Yes, I was desperate enough to set 777 on the users folder... not proud of it. Any pointers in the right direction would be very welcome. Edited to include: root@server-001:/# wbinfo --user-info=test MYDOMAIN\test:*:3000045:100:test:/home/MYDOMAIN/test:/bin/false root@server-001:/# wbinfo -n test S-1-5-21-1957592451-3401938807-633234758-1128 SID_USER (1) root@server-001:/# id test uid=3000045(MYDOMAIN\test) gid=100(users) grupos=100(users) root@server-001:/# wbinfo -U 3000045 S-1-5-21-1957592451-3401938807-633234758-1128 root@server-001:/# Edit 2: getent passwd | grep test MYDOMAIN\test:*:3000045:100:test:/home/MYDOMAIN/test:/bin/false I have no idea how to change that home folder to /storage-server/users/test, so I just went and ran ln -s /storage-server/users /home/MYDOMAIN just in case. Still no changes, same errors. Edit 3: In log.smbd I get the following error when trying to set the test user's home folder to \\server-001\test [2013/02/20 14:22:08.446658, 2] ../source3/smbd/service.c:418(create_connection_session_info) user 'MYDOMAIN\Administrator' (from session setup) not permitted to access this share (test)
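    In Samba the [homes] section is special: the share name becomes the connecting user's name, so the path normally has to vary per user, and Samba refuses the tree connect (NT_STATUS_BAD_NETWORK_NAME) when the resolved path cannot be used. A fixed path = /storage-server/users points every user at the same folder, and the home directory winbind reports (/home/MYDOMAIN/test) does not exist at all. A hedged smb.conf sketch of the usual way to get per-user directories that are created automatically; %S, template homedir and root preexec are standard smb.conf options, but how completely they are honoured by a Samba 4.0 AD DC's built-in file server is an assumption worth testing:

      [global]
          # ...existing settings...
          # Make winbind report home directories under the storage volume
          # (hypothetical layout for this setup):
          template homedir = /storage-server/users/%U

      [homes]
          read only = no
          browseable = no
          # %S expands to the service name, which for [homes] is the user name:
          path = /storage-server/users/%S
          # Create the directory on first connect so new users don't need a
          # manually made folder (assumes preexec is honoured on this share):
          root preexec = /bin/mkdir -p /storage-server/users/%S

    With a per-user path that actually exists at connect time, the tree connect should no longer fail, and the MMC snap-in has a real UNC target (\\server-001\username) to create the profile against.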

    Read the article

  • Is it possible to mount a disk image, created with dd, to a directory on a mounted external USB HDD?

    - by Keeper Hood
    I have an image of my home (/dev/sda3) partition, which I created using the "dd" command: dd if=/dev/sda3 of=/path/to/disk.img I've deleted the home partition via GParted in order to enlarge my /dev/root partition, then recreated the /dev/sda3 partition, which is smaller than the one I backed up to the image. Since I have a 2TB external HDD, I was wondering whether it would be possible to mount my backed-up image from the external HDD and then copy the files into the /home directory. Since the external HDD would already be mounted, I'm unsure whether this is a good idea, mounting an image that sits on an already-mounted device. I'm running Slackware 13.37 (64-bit), used ext4 on all the partitions, and resized the root partition with the GParted live CD. I've tried mount -t ext4 /path/to/disk.img /mng/image -o loop It gave me an fs error (wrong fs type, bad option, bad superblock on dev/loop/0). Then I did dmesg | tail, which outputs: EXT4-fs (loop0) : bad geometry: block count 29009610 exceeds size of defice (1679229 blocks) I have no idea what to do; I want to restore my /home data from the image I've backed up.
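    Loop-mounting an image that lives on another mounted filesystem is perfectly fine; the loop device hides that completely. The real problem is the "bad geometry" message: assuming the usual 4 KiB ext4 block size, the superblock inside the image describes a filesystem of roughly 110 GiB (29009610 blocks) while the image file only provides about 6.4 GiB (1679229 blocks), which usually means the dd copy was interrupted or truncated by the destination filesystem. A hedged way to confirm before deciding how to recover, using the paths from the question:

      # How big is the image file, in bytes?
      ls -l /path/to/disk.img

      # How big does the filesystem inside it claim to be?
      dumpe2fs -h /path/to/disk.img | grep -i 'block count\|block size'

      # If the two sizes matched, a plain read-only loop mount would work even
      # though the image sits on an already-mounted external drive:
      mkdir -p /mnt/image
      losetup /dev/loop0 /path/to/disk.img
      e2fsck -n /dev/loop0
      mount -o ro /dev/loop0 /mnt/image
      cp -a /mnt/image/. /home/

    If the image really is only a few gigabytes, the missing data is not in it, and the only place left to look is the old /dev/sda3 area on the disk itself, provided the resize has not already overwritten it.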

    Read the article

< Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >