Search Results

Search found 335 results on 14 pages for 'oops'.

Page 10 of 14

  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module to add some extra debug output. The kernel module in question is mvsas, and it is not necessary to boot, so I don't see any need to update any initrd images. I have read a lot of information (as shown below) and find the setup and build process confusing. I need two recipes:
    1. how to set up/configure the build environment (done once)
    2. the steps to follow after editing any source file of this kernel module (.c and .h) to turn that edit into a new kernel module (.ko)
    The sources that have been used are:
    - build one kernel module - Google search
    - http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/
    - http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module
    - http://www.pixelbeat.org/docs/rebuild_kernel_module.html
    - How do I build a single in-tree kernel module?
    - http://ubuntuforums.org/showthread.php?t=1153067
    - http://ubuntuforums.org/showthread.php?t=2112166
    - http://ubuntuforums.org/showthread.php?t=1115593
    - build one kernel module ubuntu - Google search
    - 'make +single +kernel +module' - Ask Ubuntu
    - 'make +kernel +module' - Ask Ubuntu
    - My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed
    - '"Invalid module format"' - Ask Ubuntu
    - Driver installation: compiling source code for newer kernel
    - Modprobe: 'Invalid module format', yet works after insmod
    - "Symbol version dump" "is missing" - Google search
    - http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one
    - Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server?
    - "no symbol version for module_layout" when trying to load usbhid.ko
    - Broken links inside Linux header file folder
    - 'make modules_install' - Ask Ubuntu
    - 'modules_install' - Ask Ubuntu
    - Empty build directory in custom compiled kernel
    - Not able to see pr_info output
    - In which directory are the kernel source files and how can I recompile it?
    - How can I compile and install that patched libata-eh.c file?
    - 'modules_install +depmod' - Ask Ubuntu
    - modules_install depmod - Google search
    - "make modules_install" - Google search
    - http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html
    - http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process
    - https://wiki.ubuntu.com/KernelCustomBuild
    - http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html
    - http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html
    - "make prepare" - Google search
    - "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search
    - http://ubuntuforums.org/showthread.php?t=1963515
    - ubuntu "make prepare" version - Google search
    - http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source
    - https://help.ubuntu.com/community/Kernel/Compile
    - How do I compile a kernel module?
    - How to add a custom driver to my kernel?
    - Compile and loading kernel module without compiling the kernel
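
    A rough sketch of the two recipes, assuming the Ubuntu-packaged kernel source and the in-tree mvsas driver under drivers/scsi/mvsas; the package names and unpack directory depend on the exact kernel version:

        # --- one-time setup ----------------------------------------------------
        sudo apt-get install build-essential linux-headers-$(uname -r)
        apt-get source linux-image-$(uname -r)     # needs deb-src lines in sources.list
        cd linux-3.13.0/                           # unpack directory varies by version

        # match the running kernel's config and symbol versions; a mismatch here is
        # what produces "Invalid module format" / "no symbol version for module_layout"
        cp /boot/config-$(uname -r) .config
        cp /usr/src/linux-headers-$(uname -r)/Module.symvers .
        make oldconfig && make prepare && make modules_prepare

        # --- after every edit to drivers/scsi/mvsas/*.c or *.h ------------------
        make M=drivers/scsi/mvsas modules

        # try the freshly built module without touching the installed tree
        sudo rmmod mvsas 2>/dev/null
        sudo insmod drivers/scsi/mvsas/mvsas.ko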

    Read the article

  • Crash Report in Ubuntu... hardware problem?

    - by Andrew
    Got this on my machine. I was just browsing the web on Chrome and my computer froze. I recently just built this machine. I have a feeling it is a hardware problem... Possibly one of my parts arrived broken in some way.... Starting anac(h)ronistic cron Stopping anac(h)ronistic cron Stopping cold plug devices Stopping log initial device creation Starting enable remaining boot-time encrypted block devices Starting configure network device security Starting configure virtual network devices Starting save udev log and update rules Stopping configure virtual network devices Stopping save udev log and update rules Checking battery state... Stopping System V runlevel compatibility Stopping enable remaining boot-time encrypted block devices Stopping Mount filesystems on boot 91.573384] BUG: unable to handle kernel NULL pointer dereference at (null) 91.573437] IP: [<ffffffff81313514>] strcmp+0x14/0x30 91.573470] PGD 1f7822067 PUD 1ed7a6067 PMD 0 91.573498] Oops: 0000 [#1] SMP 91.573519] CPU 3 91.573531] Modules linked in: dm_crypt bnep snd_hda_codec_realtek rfcomm bluetooth parport_pc ppdev arc4 fglrx(P) rt2800usb rt2800lib crc_ccitt rt2x00usb rt2x00lib mac0021 cfg80211 psmouse snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer send_seq_device snd joydev mac_hid mei(C) soundcore serio_raw snd_page_alloc lp parport ses enclosure usbhid hid i915 drm_kms_helper drm i2c_algo_bit mxm_umi tg_video wmi usb_storage 91.573826] 91.573837] Pid: 2297, comm: update-notifier Tainted: P C O 3.2.0-29-generic #46-Ubuntu To Be Filled By O.E.M. To Be Filled By O.E.M./Z77 Extreme4 91.573912] RIP: 0010:[<ffffffff81313514>] [<ffffffff81313514>] strcmp+0x14/0x30 91.573954] RSP: 0018:ffff8801f83f5bb8 EFLAGS: 00010246 91.573982] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 91.574019] RDX: 0000000000000069 RSI: 0000000000000000 RDI: ffff88021adb26f8 91.574056] RBP: ffff8801f83f5bb8 R08: ffff88022f2d6e80 R09: 0000000000000000 91.574093] R10: ffff88021e7dbf00 R11: 0000000000000003 R12: ffff88021c10eb40 91.574130] R13: 0000000000000000 R14: ffff88021adb26f8 R15: ffff8801f83f5d40 91.574168] FS: 00007f958cf53940(0000) GS:ffff88022f2c0000(0000) kn1GS:0000000000000000 91.574210] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 91.574240] CR2: 0000000000000000 CR3: 000000021f6d7000 CR4: 00000000000406e0 91.574277] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 91.574314] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000000 91.574351] Process update-notifier (pid: 2297, threadinfo ffff801f83f4000, task ffff880208fe2e00) 91.574397] Stack: 91.574409] ffff8801f83f5be8 ffffffff811ed509 ffff88021adb26c0 ffff88021b8b7020 91.574453] ffff88021b461c60 fffffffffffffffe ffff8801f83f5c18 ffffffff811ed61f 91.574496] ffff88021adb26c0 ffff88021b8b7020 ffff8801f83f5dc8 0000000000000001 91.574539] Call Trace: 91.574558] [<ffffffff811ed509] sysfs_find_dirent+0x59/0x110 91.574591] [<ffffffff811ed61f] sysfs_lookup+0x5f/0x110 91.574621] [<ffffffff81182745] d_alloc_and_lookup+0x45/0x90 91.574654] [<ffffffff8118fe65] ? d_lookup+0x35/0x60 91.574683] [<ffffffff811848d2] do_lookup+0x202/0x310 91.574712] [<ffffffff8118660c] path_lookupat+0x11c/0x750 91.574744] [<ffffffff81318db7] ? __strncpy_from_user+0x27/0x60 91.574778] [<ffffffff81186c71] do_path_lookup+0x31/0xc0 91.574809] [<ffffffff81187779] user_path_at_empty+0x59/0xa0 91.574842] [<ffffffff81187822] ? 
do_filp_open+0x42/0xa0 91.574872] [<ffffffff811877d1] user_path_at+0x11/0x20 91.574902] [<ffffffff8117c80a] vfs_fstatat+0x3a/0x70 91.574933] [<ffffffff81161cff] ? kmem_cache_free+0x2f/0x110 91.574965] [<ffffffff8117c85e] vfs_lstat+-x31/0x70 91.574993] [<ffffffff8117c9fa] sys_newlstat+0x1a/0x40 91.575022] [<ffffffff81176ee1] ? do_sys_open+0x171/0x220 91.575053] [<ffffffff8117cb1a] ? sys_readlinkat+0x7a/0xb0 91.575086] [<ffffffff81661ec2] system_call_fastpath+0x16/0x1b 91.575118] Code: 83 c1 01 40 84 ff 75 ef 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 00 55 31 c0 48 89 e5 66 2e 0f 1f 84 00 00 00 00 00 0f b6 14 07 <3a> 14 06 75 0f 48 83 c0 01 84 d2 75 ef 31 c0 5d c3 0f 1f 00 19 91.577243] RIP [<ffffffff81313514>] strcmp+0x14/0x30 91.579314] RSP <ffff8801f83f5bb8> 91.581385] CR2: 0000000000000000

    Read the article

  • Looking for tips on managing complexity with SCM repositories

    - by Philip Regan
    I am a solo developer in my department and I have a lot of individual projects, all created and managed by me. I started using SVN at ProjectLocker via Versions on the Mac a couple of years ago when the variety of projects started getting unwieldy. Scenario 1: Now I have a process of reasonable complexity that can be broken up into multiple smaller applications, and they all share files. In one phase, there is a single shared file—a constants file—that is shared between a Cocoa app and an iPhone app framework. In the second phase, the iPhone app framework will be used to create individual apps of the same ilk—controller classes and whatnot will all be the same—but with different content in each. The problem I am running across is that the file in the first phase is in one repository with the application that started it, and the app framework is in a second, separate repository. Scenario 2: I have another application framework that partially relies on code from an open source project. This is all internal, non-commercial work, but again, the application framework is going to be used to create a variety of unique products and processes. So now I have an internally managed repository and an externally managed one out of my control. I make small changes to the open source code to meet the needs of my framework whenever I download an update, but I never commit back into the external repository (though, now that I think about it, I don't think I'm committing it to mine either. Oops). The Problem: I have all of this set up on my production Mac quite nicely, but duplicating and subsequently maintaining that environment on my laptop has been challenging. For Scenario 1, I've thought of merging these two projects together into the same repository because they are, for all intents and purposes, inextricably linked. But for Scenario 2, I think I'm stuck just managing files as best I can. The Question: I'm wondering if anyone has any tips on how to manage either of these situations, as well as other complex SCM scenarios when it comes to linking various files from various repositories together. My familiarity with SVN only comes from my work with Versions. It's been great, but I'm a little out of my depth here.
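
    One SVN feature worth looking at for the cross-repository linking is svn:externals, which lets one checkout pull a directory (and, since SVN 1.6, individual files) from another repository. A minimal sketch, with made-up repository URLs:

        # run from the root of the working copy that should consume the shared code
        svn propset svn:externals "Shared https://projectlocker.example/svn/common/trunk/Shared" .
        svn commit -m "Pull shared constants in via an external"
        svn update    # a Shared/ directory now appears inside this checkout

    Each later svn update refreshes the shared code in both projects, which also keeps the laptop checkout in step with the production Mac without manual copying.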

    Read the article

  • Audio Stutters at gdm

    - by Allan
    OK, I have a problem: about two times out of three when I log in (I can't be specific, it's fairly random) I get a stuttering GDM warning sound (not the login sound, just the bell sound to wake you up), and the only way to stop it is to log in. I have a Fujitsu Siemens Amilo 1718 with 2 GB of memory (the only hardware mod), using 10.10 Maverick, and I have disabled KMS as my system was freezing, as per the release notes. The only time this has happened before on the same machine was when I gave Kubuntu a try when 10.04 came out; then it happened at the login screen and at random times while listening to music in any program. By the way, audio is fine, as is almost everything else, once I have logged in. I would like an answer to this as I am an advocate of Ubuntu and it's kind of embarrassing when the first thing that happens is *bing*. As requested, Daniel: alsa-info, Pulse verbose log. Not sure how useful the Pulse log will be as I can't replicate the bug with a terminal open, but I wouldn't be asking the question if I knew the answer, so..... Edit 24/12/2010: ......been living on cocktail sausages and pickled onions for five days now, made a makeshift splint with cocktail sticks..... oops. So I updated the ALSA drivers but I still get the same message in dmesg: No response from codec, disabling MSI: last cmd=0x10a90000. Googling it brings up a forum post from some other distro with a green logo; the only common denominator seems to be graphics, i.e. the ATI Radeon XPRESS 200M, which is why I have had to turn off KMS, as the chip is so old that small mice try to eat the "kernel" ;) Funnily enough, following the bug link at the end of the post, I found a comment about "Ubuntu Black Magic", so maybe I am coming at this from the wrong angle...... Bad Joo Joo, anyone? I will try the second part of Daniel's fix and update with the result. The final edit: (plays air guitar) In the end neither of these solved the problem as such; however, I have given Roland a tick for reminding me of the solution and I gave Daniel the bounty for the effort in trying to solve the problem. The answer for future readers was to enable the correct HD Audio model. I found the answer back when using Karmic Koala 9.10 in this forum post: Amilo Li1718 Skype - Can't get it working... The model line is options snd-hda-intel model=3stack position_fix=1 enable=yes, which can be added to the end of alsa-base.conf. Thanks all for helping, and I hope anyone with a similar problem will find the answer here.
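
    For anyone finding this later, a minimal sketch of applying that model option from a terminal; the file path is the usual Ubuntu location, adjust if yours differs:

        # append the codec hint, then reload ALSA (or simply reboot)
        echo "options snd-hda-intel model=3stack position_fix=1 enable=yes" | \
            sudo tee -a /etc/modprobe.d/alsa-base.conf
        sudo alsa force-reload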

    Read the article

  • Blank Icon in Control Panel?

    - by Wesley
    Hi all, Just recently I opened my Control Panel and saw that a blank icon came up before my list of icons. Has anyone else experienced this before? Would it be safe to delete it? Thanks in advance. EDIT: Oops, I should have mentioned this too, but when I try creating a shortcut on the desktop, I get a dialog box saying "Windows could not create the shortcut. Check if the disk is full." Clearly it isn't, since I still have 102 GB of free hard disk space... any ideas? EDIT2: Here is a screenie of my complete Control Panel: EDIT3: *So I did a manual search for .CPL in Windows Search and then looked at the list in TweakUI and got 3 unmatched .CPL files. They are [ncpa.CPL], [sapi.CPL], and [QuickTime.CPL]. I clearly know the QuickTime one, but not too sure about the rest. SCRATCH THAT, I FIGURED IT OUT... BUT IT'S NOT THE BLANK ICON.

    Read the article

  • So Close: How to get this SSH login working (.bashrc)

    - by This_Is_Fun
    Objective: SSH login (+ eliminate the warning message), run 2 commands, and stay logged in. EDIT: Oops, I made a mistake (see below). This code does ~95% of what I wanted to do:

        # .bashrc
        # Run two commands and stay logged in to the new server.
        alias gr='ssh -t -p 5xx4x [email protected] 2> /dev/null "cd /var; ls; /bin/bash -i"'

    Now, after a successful login, verifying the user shows I am logged in: root pts/0 2011-01-30 22:09. Trying to 'logout' gives: bash: logout: not login shell: use `exit'. I seem to have full root access w/o being logged into the shell? (The " /bin/bash -i " was added to 'stay logged in' but doesn't work quite as expected.) FYI: The question is "How to get this SSH login working" & it is mostly solved, sorry I made a mess... Original question here:

        # .bashrc
        # Run two commands and stay logged in to the new server.
        alias gr='ssh -t -p 5xx4x [email protected] "cd /var; ls; /bin/bash -i"'
        # (hack) Hide the "map back to the address - POSSIBLE BREAK-IN ATTEMPT!" message.
        alias gr='ssh -p 5xx4x [email protected] 2> /dev/null'

    Both examples 'work' as shown; when I try to add the ' 2> /dev/null ' to the first example, the whole thing breaks. I'm out of time trying to solve the warning message other ways, so is it possible to combine both examples to make example #1 work w/o the warning message? Thank you. P.S. If you also know a proper way to kill the login warning message, please do tell (the 'standard' "edit hosts file" advice isn't working for me).
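
    A sketch of one way to combine the two (treat it as a starting point, not a tested answer): keep -t for the interactive session, put the stderr redirect on the local ssh invocation, and end with a login shell so that logout works:

        # .bashrc
        # -t            : allocate a tty so the remote interactive shell behaves normally
        # 2> /dev/null  : discard ssh's local stderr, where the warning is reportedly showing up
        # exec bash -il : replace the command shell with an interactive *login* shell
        alias gr='ssh -t -p 5xx4x [email protected] "cd /var; ls; exec /bin/bash -il" 2> /dev/null'

        # alternative: ask ssh itself to be quiet instead of throwing away all stderr
        alias gr='ssh -t -o LogLevel=quiet -p 5xx4x [email protected] "cd /var; ls; exec /bin/bash -il"'

    Discarding all of stderr hides genuine errors too, so the LogLevel=quiet form is usually the gentler option.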

    Read the article

  • Gmail and FB is not rendering properly in Firefox

    - by Andy
    I am unable to read Gmail in my Firefox browser on my laptop. However, there are no problems when I try to access Gmail on my office desktop. In fact, Gmail renders fine in IE on both computers. Things I have tried: uninstalling and reinstalling Firefox; using Firefox 3.6.26 and Firefox 10 (same problem); clearing the cache. But all to no avail. For starters: my background image does not get loaded. It says "Oops. Your selected image failed to load". The buttons for the new look do not get rendered, but when I do a mouse-over I see text which helps me navigate. When I try to read a message, I see the subject line and I see the reply / forward box. The message body does not get rendered; it seems like a styling error. Check this screenshot where the icons in the Gmail new look are not rendered properly. Check this screenshot where even Facebook is not getting rendered properly. EDIT (25th Feb 2012): Gmail is getting rendered fine in the old look.

    Read the article

  • Is it possible to setup an internal test email server to keep all mail sent to it?

    - by MattGrommes
    We have a need at my work to set up a test email server that will take all mail sent to it for delivery and instead just dump it into an account for later retrieval. I've been out of the email server configuration game long enough that I think that's possible, but I don't know for sure. As a more specific example of what we need: we have code that sends emails to outside clients in certain cases. We want to point our code to a test server that will accept those emails, but not let them get to the outside world (yes, it's happened before, oops). We then need to be able to verify that Email X would have gotten sent to Client Y if we had sent to the real server. As a bonus, we have an error email alias on our real server that goes to the programmers, which we would like to keep getting email from. So anything sent to that alias on the test server would forward to our real server for delivery. My preference is for Postfix, but our IT staff seems set on using Sendmail (or Exchange) for everything, so hints/pointers for either server would be helpful. Thanks a lot.
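
    For the "accept everything and dump it somewhere inspectable" part, a throwaway sink can be stood up with nothing but Python's standard library (the smtpd module, present up to Python 3.11). A sketch, assuming port 1025 and that the application's SMTP host/port can be pointed at it:

        # accept any message on port 1025 and append it (headers + body) to a log file
        python -m smtpd -n -c DebuggingServer 0.0.0.0:1025 >> /var/log/test-mail.log 2>&1 &

        # point the application at this host:1025, run the test, then inspect
        tail -f /var/log/test-mail.log

    A proper Postfix/Sendmail catch-all mailbox (plus a selective forward for the error alias) can be layered on afterwards; this just guarantees nothing escapes to real clients during tests.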

    Read the article

  • Windows 7 - mysteriously missing free HDD space

    - by sYnfo
    I have Windows 7 installed on a 50 GB partition (oops, it should have been 45 GB, sorry), and every now and then it gets full and I have to resize that partition. I always thought that was quite normal. But it happened again today, and this time I'm sure it is not normal, because since the last resizing (35 GB to 45 GB) I did not install any new apps or anything. Also, the sum of the sizes of all root folders and files, including hidden & system ones, is ~18 GB, yet Windows is indicating that all 50 GB are used up... Any idea what is going on? EDIT: Great tools everyone! (SourceForge appears to be offline at the moment, I'll check WinDirStat later.) Alas, none of them solved my problem just yet... Screenshot from SpaceSniffer: on the right there is some kind of "Unknown Space", any idea what that could be? EDIT2: After those two apps failed to help, I didn't expect it, but WinDirStat actually did. It showed that those missing 27 GB are in my Temp folder (well, that should have been my first guess anyway). There I found hundreds of ~100 MB files, named like HTT????.tmp. After some googling it appears to be a problem with ESET NOD32 antivirus and its ThreatSense feature. Thank you all for the help! :)

    Read the article

  • How use DNS server to create simple HA (High availability) of my website?

    - by marc22
    Welcome. How can I use a DNS server to create simple HA (high availability) for my website? For example, if my web server at 192.168.0.120:80 is offline (for better understanding I use internal IPs; in reality these will be at different hosting companies), traffic should go to 192.168.0.130:80. You are right, "high availability" is the wrong word; of course I was thinking about failover. Using several IPs in A records is good for simple load balancing, but not in the case where I want to notify the user about the failure (for example, display a page saying "Oops, something is wrong with our server, we are working on it") instead of "can't establish connection". I was thinking about setting up something like this: 2 DNS servers, one installed on the www server itself, both with a low TTL on my domain, and 2 NS records - the first pointing to the DNS server on my Apache machine, the second to the other DNS server. If a user tries to connect, he will get the IP of the www server from the first DNS server; if that DNS server is offline (then probably the www server is also down), the resolver will try the second NS record, which points to the other DNS server, and that one will point to the "backup" page. That's what I would like to do. If you have another idea, please share. A reverse proxy is not an option, because the IP of the server can change, or I may use another country for the backup.
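
    A sketch of the more common DNS-side failover pattern (hypothetical zone, key file, and addresses): publish the record with a short TTL and let a health-check script rewrite it when the primary stops answering, rather than relying on resolvers falling through to a second set of NS data.

        # what TTL will resolvers actually cache for the record?
        dig +noall +answer www.example.com A

        # hypothetical watchdog run from a third machine; flips the A record via
        # dynamic DNS (BIND's nsupdate with a TSIG key) when the primary stops answering
        if ! curl -fsS -m 5 http://192.168.0.120/ > /dev/null; then
            printf '%s\n' \
                'server ns1.example.com' \
                'zone example.com' \
                'update delete www.example.com A' \
                'update add www.example.com 60 A 192.168.0.130' \
                'send' | nsupdate -k /etc/bind/failover.key
        fi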

    Read the article

  • WSUS trying to download all updates again

    - by Tim Alexander
    The server hosting WSUS had a catastrophic failure and we have had to rebuild the system drives. Luckily the DB and content store for WSUS are on a separate drive, so they were unaffected. During the rebuild process we thought it was time to update the server to 2008 R2 (from 2003 R2). I have got the server running and installed the WSUS role, detached the DB from SQL Express 2008 R2 and attached the original, then carried out the wsusutil.exe movecontent command with a -skipcopy switch pointing to the original content store. All looked good until I saw the front page stating it is trying to download files for 6,436 updates at around 344,565 MB!!! Oops, I thought, something's not right here. The content store I have on disk is only 75 GB, but I am thinking that some vital step has been missed in the restoration process. Either way, is there a way to make WSUS re-index its local content store or something, as I am unsure that downloading 344 gigabytes is a viable way forward! EDIT: It never rains but it pours. Am now getting a CLSID: FX {8b6499ed-0241-e032-6508-da4b1c879d7e} error, "could not create snap-in". I think a reinstall of WSUS is in order.
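
    For the re-index itself, WSUS ships a command for exactly this situation: wsusutil.exe reset re-verifies every update file in the content store against the database and only queues downloads for files that are genuinely missing or corrupt. A sketch, assuming the default install path:

        cd "C:\Program Files\Update Services\Tools"
        wsusutil.exe reset

    It runs in the background and can take a while on a 75 GB store, but it should bring that 344 GB estimate back down to whatever is actually missing.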

    Read the article

  • really weird DNS problem in Ubuntu {after one month, seems like ISP problem}

    - by OmniWired
    Hello everyone. I've been having this random DNS problem in Ubuntu 10.04 and in 10.10; it started about 2 weeks ago after (I believe) an update. Basically, when I go to a website I randomly get told that the site I'm visiting is not available ("Oops! Google Chrome could not connect to ..." & "This webpage is not available."). I tested with Chromium "7.0.515.0 (58587)" and Firefox Minefield (4.0ish) and 3.6.9. I did these 4 things already:
    - in /etc/default/grub: GRUB_CMDLINE_LINUX="ipv6.disable=1"
    - in /etc/sysctl.conf: net.ipv6.conf.all.disable_ipv6 = 1, net.ipv6.conf.default.disable_ipv6 = 1, net.ipv6.conf.lo.disable_ipv6 = 1
    - disabling Chromium DNS pre-fetching
    - using Google and OpenDNS servers as well as the ISP DNS servers
    But it didn't improve, and no other computers in my network have the same problem. All computers are wired to the same router. I'm a software engineer who has run out of ideas, please help me. Thanks in advance. UPDATE: some programs (Synaptic / Firefox update / Vuze (Azureus)) say "connection refused" for the error. Most of the time a second try will fix the "refusal". UPDATE2: I found out with Wireshark that every time I have this problem I get this: 192.168.0.10 8.8.8.8 ICMP Destination unreachable (Port unreachable). Confirmed an ISP error. ISP: Speedy. Location: Argentina, Buenos Aires (Capital Federal) area.
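
    For anyone chasing the same symptom, a quick way to reproduce this kind of intermittent resolver failure outside the browser (assuming the dnsutils package is installed) is to fire a burst of one-shot lookups at the upstream server and watch for timeouts:

        # 20 single-try lookups against Google DNS with a 2-second timeout;
        # a flaky path shows up as ";; connection timed out; no servers could be reached"
        for i in $(seq 1 20); do
            dig @8.8.8.8 ubuntu.com A +tries=1 +time=2 +short || echo "lookup $i failed"
        done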

    Read the article

  • Kernel Logging disabled?

    - by Tiffany Walker
    uname -a:
    Linux host 2.6.32-279.9.1.el6.i686 #1 SMP Tue Sep 25 20:26:47 UTC 2012 i686 i686 i386 GNU/Linux
    And the startup scripts, ls /etc/init.d/:
    abrt-ccpp certmonger dovecot irqbalance matahari-broker mdmonitor nfs proftpd rpcbind single ypbind abrtd cgconfig functions kdump matahari-host messagebus nfslock psacct rpcgssd smartd abrt-oops cgred haldaemon killall matahari-network mysqld ntpd qpidd rpcidmapd sshd acpid cpuspeed halt ktune matahari-rpc named ntpdate quota_nld rpcsvcgssd sssd atd crond httpd lfd matahari-service netconsole oddjobd rdisc rsyslog sysstat auditd csf ip6tables lvm2-lvmetad matahari-sysconfig netfs portreserve restorecond sandbox tuned autofs cups iptables lvm2-monitor matahari-sysconfig-console network postfix rngd saslauthd udev-post
    But when I installed CSF/LFD I am getting nothing. LFD does not create lfd.log, nor are any blocks being logged in /var/log/messages from the firewall. This is not natural. I looked for klogd, but maybe I am looking in the wrong place to see if it is enabled?
    ls /etc/init.d/syslog
    ls: cannot access /etc/init.d/syslog: No such file or directory
    Also noticed no syslog? Also noticed this:
    csf -d 84.113.21.201
    Adding 84.113.21.201 to csf.deny and iptables DROP...
    iptables: No chain/target/match by that name.
    iptables: No chain/target/match by that name.
    I've never seen this before and this is a dedicated box. Also:
    ./csftest.pl
    Testing ip_tables/iptable_filter...OK Testing ipt_LOG...OK Testing ipt_multiport/xt_multiport...OK Testing ipt_REJECT...OK Testing ipt_state/xt_state...OK Testing ipt_limit/xt_limit...OK Testing ipt_recent...OK Testing xt_connlimit...OK Testing ipt_owner/xt_owner...OK Testing iptable_nat/ipt_REDIRECT...OK Testing iptable_nat/ipt_DNAT...OK
    RESULT: csf should function on this server
    iptables -L
    Chain INPUT (policy ACCEPT) target prot opt source destination
    Chain FORWARD (policy ACCEPT) target prot opt source destination
    Chain OUTPUT (policy ACCEPT) target prot opt source destination
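
    On EL6 the old syslog/klogd pair is replaced by rsyslog, which does appear in that init.d listing, so that is the place to check for kernel logging. A short sketch of the usual checks (stock file locations assumed):

        # is rsyslog running, and enabled at boot?
        service rsyslog status
        chkconfig --list rsyslog

        # the kernel input module and the catch-all rule should both be present
        grep '^\$ModLoad imklog' /etc/rsyslog.conf
        grep '/var/log/messages' /etc/rsyslog.conf

        # prove the logging path works end to end
        logger "csf/lfd logging test"
        tail -n 3 /var/log/messages

    If rsyslog turns out to be stopped, service rsyslog start plus chkconfig rsyslog on should bring /var/log/messages (and with it the firewall LOG lines LFD relies on) back; the "No chain/target/match by that name" errors are a separate csf/iptables issue.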

    Read the article

  • Remote Socket Read In Multi-Threaded Application Returns Zero Bytes or EINTR (104)

    - by user39891
    Hi. Am a c-coder for a while now - neither a newbie nor an expert. Now, I have a certain daemoned application in C on a PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for multiplexing connections via a Unix socket. A user submitted string is parsed for certain characters/words using strstr() and if found, spawns 4 joinable threads to different websites simultaneously. I use socket, connect, write and read, to interact with the said webservers via TCP on their port 80 in each thread. All connections and writes seems successful. Reads to the webserver sockets fail however, with either (A) all 3 threads seem to hang, and only one thread returns -1 and errno is set to 104. The responding thread takes like 10 minutes - an eternity long:-(. *I read somewhere that the 104 (is EINTR?), which in the network context suggests that ...'the connection was reset by peer'; or (B) 0 bytes from 3 threads, and only 1 of the 4 threads actually returns some data. Isn't the socket read/write thread-safe? I use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. *I doubt that the said webhosts are actually resetting the connection, because when I run a single-threaded standalone (everything else equal) all things works perfectly right, but of course in series not parallel. There's a second problem too (oops), I can't write back to the client who connect to my epoll-ed Unix socket. My daemon application will hang and hog CPU 100% for ever. Yet nothing is written to the clients end. Am sure the client (a very typical PHP socket application) hasn't closed the connection whenever this is happening - no error(s) detected either. Any ideas? I cannot figure-out whatever is wrong even with Valgrind, GDB or much logging. Kindly help where you can.

    Read the article

  • Setting Windows 7's Recycle Bin to automatically have a default disk space allocation for deleted files from newly mounted drives

    - by galacticninja
    How do I set Windows 7's Recycle Bin to automatically have a default disk space allocation for deleted files from external hard drives and TrueCrypt-mounted volumes? I remember in Windows XP, I can set a percentage of total disk space that will automatically be used as storage capacity for deleted files by the Recycle Bin, and this will be applied to all external HDs or TC-mounted volumes. Windows 7 defaults to the 'Don't move files to the Recycle Bin. Remove files immediately when deleted' setting for newly mounted external HDs and TC mounted volumes. Since I am expecting deleted files to go to the Recycle Bin, sometimes this causes an 'Oops' when I delete files in external hard drives or TC mounted volumes, as Windows does not move deleted files to the Recycle Bin, but just deletes the files permanently. I have to remember to manually set a custom Recycle Bin storage space for each new drive that is mounted by Windows to avoid this issue. I only use and mount TrueCrypt file containers, not drives. I also don't mount TrueCrypt file containers as removable drives. ('Mount volume as removable medium' is unchecked in Mount Options.) In my $Recycle.Bin > Properties > Security settings, 'System' and 'Administrators' are already set to 'Full Control', while 'Users' only have 'Special Permissions' checked in gray. There are no other groups. I haven't changed or edited anything in these settings. I am using Windows 7 Ultimate.

    Read the article

  • Setup proxy with Apache 2.4 on Mac 10.8

    - by Aptos
    I have 1 application (Java) running on my local machine (localhost:9000). I want to set up Apache as a front-end proxy, so I used the following configuration in httpd.conf:

        <Directory />
            #Options FollowSymLinks
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>
        Listen 57173
        LoadModule proxy_module modules/mod_proxy.so
        <VirtualHost *:9999>
            ProxyPreserveHost On
            ServerName project.play
            ProxyPass / http://127.0.0.1:9000/Login
            ProxyPassReverse / http://127.0.0.1:9000/Login
            LogLevel debug
        </VirtualHost>
        ServerName localhost:57173

    I changed my /private/etc/hosts (with vim) to:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost
        127.0.0.1:9999  project.play

    and ran dscacheutil -flushcache. The problem is that I can only access localhost:57173; when I try accessing http://project.play:9999, Chrome returns "Oops! Google Chrome could not find project.play:9999". Can somebody show me where I went wrong? Thank you very much. P/S: When accessing localhost:9999 it returns "The server made a boo boo".
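
    A few things are easy to verify from a terminal (paths assume the stock Mac httpd.conf location; adjust for a custom Apache 2.4 install): there is no Listen 9999 to match the *:9999 virtual host, proxying http:// URLs needs mod_proxy_http loaded alongside mod_proxy, and hosts-file entries take a name only, never a port:

        # which ports and vhosts did Apache actually parse?
        sudo apachectl -S
        grep -n '^Listen' /private/etc/apache2/httpd.conf

        # is the http proxy backend loaded? expect proxy_module AND proxy_http_module
        apachectl -M | grep -i proxy

        # the hosts line Chrome needs is just:  127.0.0.1   project.play
        dscacheutil -q host -a name project.play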

    Read the article

  • Not able to access Silverlight.net and ONLY Silverlight.net - All other domains work!

    - by Sootah
    Alrighty folks, I have an extremely odd problem. I am able to surf the web fine with one odd (and really annoying at the moment) exception: Microsoft's Silverlight.net. Every other site that I go to works just fine. This is quite frustrating because I'm in the middle of programming a web app in Silverlight 4.0, and whenever I do a search for any code examples, tutorials, or whatnot at least 50% of the results are hosted in the silverlight.net forums. The error message that I get is: Oops! Google Chrome could not find www.silverlight.net It doesn't work in my other browsers either (both IE and FireFox). What's odd, is that while the error message would lead me to assume it's a DNS error, I can ping the URL just fine. C:\Users\The Doot>ping silverlight.net Reply from 206.72.125.201: bytes=32 time=106ms TTL=106 Reply from 206.72.125.201: bytes=32 time=106ms TTL=106 Reply from 206.72.125.201: bytes=32 time=106ms TTL=106 Reply from 206.72.125.201: bytes=32 time=106ms TTL=106 Ping statistics for 206.72.125.201: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 103ms, Maximum = 110ms, Average = 106ms I've checked my HOSTS file, and there's nothing that refers to ANY Microsoft URL in there. What could be causing this!?? More importantly, how do I fix it? Just for kicks, I've even included the results of a traceroute here for your enjoyment. OS: Windows 7 Ultimate Thanks in advance! -Sootah

    Read the article

  • Nginx Server Block Not Working? - Already running other vhosts just this one not working

    - by daveaspinall
    Im running a Debian 6 LEMP server with multiple virtual hosts and everything has been fine for 5 or so sites. But I've just tried adding another but for some reason it's just not working. By not working I mean in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain for security to subdomain.example.com and the IP is masked. Hosts file (I have multiple sub domains): xxx.xxx.xx.xxx *.example.com *.example Server Block: server { listen 80; server_name subdomain.example.com; access_log /srv/www/subdomain.example.com/logs/access.log; error_log /srv/www/subdomain.example.com/logs/error.log; root /srv/www/subdomain.example.com/public_html; location / { index index.html index.htm index.php; } location ~ \.php$ { include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } I've created the system link to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine: # ping -c 2 subdomain PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data. 64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms 64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms Checking the file with cURL works: # curl http://subdomain.example.com HTML - OK Emptied browser cache but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently so php-fpm etc etc are working. Any help would be much appreciated! Cheers, Dave

    Read the article

  • Same script, different behavior [migrated]

    - by Antoine_935
    I just stumbled upon an interesting bug... Still trying to figure out what exactly is happening. Maybe you can help. First, the context: I'm currently building yet another man-to-HTML converter (for some reasons I won't motivate here, but I need it). So, have a look at the screenshot below (see the link), more precisely at the outlined spots. See? On the upper shell I have &lt; and &gt;, that is, escaped HTML, while on the shell below I have < and > directly. But as you can see (or do I seriously need a looking glass?), the command man 2 semget | webmanner is the same on both sides, as is the which webmanner. The two are executed roughly at the same moment, with no modification made to the script between. [Oops, cannot post pictures just yet... Here comes the link] http://aspyct.org/media/webmanner-bug.png But the shell below is older (opened about 1 hour ago). Newer shells all print out &lt;. So my first guess was that it somehow had a cached reference to the old inode of the file, or old blocks or whatever. So I modified parts of the script, at the start and then at the end, to print different messages. And, surprise, the messages showed up in both terminals. But still, the same difference between &lt; and <. I'm confused... How to explain that behavior? I'm working on OS X 10.8 (Mountain Lion). EDIT: OK, there is one big difference: the shell below uses Ruby 1.9.3, while the one above is on 1.8.7. Is there any known difference in string handling between the two versions?
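
    Given that edit, the most likely culprit is simply that the two shells resolve to different Ruby interpreters (PATH order, or a version manager such as RVM initialised after the old terminal was opened). A couple of one-liners, run in both terminals, make that visible:

        ruby -e 'puts RUBY_VERSION'         # which ruby this shell actually runs
        type -a ruby                        # every ruby on the PATH, in lookup order
        head -n 1 "$(which webmanner)"      # which interpreter the script's shebang names

    A terminal opened before a PATH/RVM change keeps the environment it started with, which would explain identical commands behaving differently in two windows.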

    Read the article

  • Can't access Apache from outside my local network

    - by valter
    UPDATED: Now, when I type my external IP like xxx.xxx.xxx.xxx:8079, I can access XAMPP's default page. But the strange thing is that when someone else from outside my network tries to access it using the same IP, it doesn't work. I think it should, because it's the external IP. It's driving me crazy. I have tried for hours to access the XAMPP default page from outside my local network. My ISP blocks ports 80 and 8080, so I changed Apache to listen on port 8079: Listen 8079. My local computer's IP is 10.1.1.2, and I can access the web server from any computer on my local network when I type http://10.1.1.2:8079. I also opened port 8079 on my modem, as the image shows below (I think I did it right). When Apache is running on my computer, if I test port 8079 at http://canyouseeme.org/ I get the message "Success: I can see your service on xxx.xxx.xxx.xxx on port (8079). Your ISP is not blocking port 8079." If Apache is not running I get "Error: I could not see your service on xxx.xxx.xxx.xxx on port (8079). Reason: Connection refused." So it's clear that port 8079 is open. But when I type xxx.xxx.xxx.xxx:8079 in Google Chrome, for example, I get "Oops! Google Chrome could not connect to xxx.xxx.xxx.xxx:8079". What can I do to solve this, to allow Apache to serve the pages? I don't know what else I should configure. Please help me. Thanks.
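
    Since canyouseeme.org can reach the service but other people cannot, two quick checks help narrow it down (the first works anywhere curl is available; ifconfig.me is just one of many "what is my IP" services):

        # 1) if the address the internet sees differs from the WAN address shown on the
        #    modem's status page, there is a second NAT on the ISP side needing a forward
        curl -s ifconfig.me ; echo

        # 2) confirm Apache is bound to all interfaces, not only the LAN address
        netstat -tlnp | grep ':8079'        # Linux; on Windows: netstat -ano | findstr :8079

    It also helps to ask the outside testers what error they actually get: a timeout points at filtering along the path, while "connection refused" points back at the forward or the binding.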

    Read the article

  • Can't reach server without proxy (website down from my home)

    - by user2128576
    I have a website hosted on Hostinger However I am experiencing problems with my wordpress site. This is really annoying. If I understood the situation right, The server is blocking me or denying access to my own website. When I visit the site with google chrome, it returns: Oops! Google Chrome could not find Same thing happens to firefox! Firefox can't find the server but when I do a check if my site is online and working through http://www.downforeveryoneorjustme.com/ it says that the site is working and up. Another thing, I access the website through a proxy, both on chrome and in firefox, and t works. Why is this? I have also recently installed the plugin Better Wp Security 5 days ago. Could the plugin have caused it? but I don't remember setting any IP's to be blocked. Also, this happens at random times, sometimes I can access it, sometimes it fails to reach the server. I am currently developing the site live. Was I blocked by the server for frequently refreshing the page? (duh, I'm a developer and I need to refresh to see changes.) or is this a problem with my ISP's DNS server? How can I resolve? and what are the possible fixes? Thanks in advance! -Jomar

    Read the article

  • Website is not accessible from server which is using proxy

    - by Bhoot
    I hosted a website on a Windows 2008 R2 server which runs in a private domain. I set up bindings on ports 80 and 443 for HTTP and HTTPS respectively, and created inbound rules for ports 80 and 443 in Windows Firewall. After doing all this, I am still not able to access my website from a remote machine. IE: "Internet Explorer cannot display the webpage." Chrome: "Oops! Google Chrome could not find xxxxxx." I tried accessing the website by IP address, but no luck. I tried to ping that server but it says "TTL expired in transit." Then I found some more information on the internet about checking whether the server is using some kind of proxy in between. I found my IP address at www.getip.com, but ipconfig /all gives me a different IP address. Is it really a problem if we use a proxy? I am not sure if I have concluded it correctly, but is there any way to resolve this issue? Update: I figured it out. I have to call the website by its external IP address; due to the proxy settings I was not able to call the website by the server's internal IP or the name of that machine.

    Read the article

  • Screen Scraping Twitter

    - by BRADINO
    I got an email today asking for help to scrape Twitter, in particular to be able to log in. So I am going to show everyone, NOT to encourage anyone to violate Twitter's terms of use, but as an educational blog post about how PHP and cURL can be used to post variables and store cookies. Again, I am using the cScrape class I wrote, which you can download.
    Step 1: First go to twitter.com and look at the source code of the login to get the form field names and the form post location. You will see that the form posts to https://twitter.com/session and the username and password fields are session[username_or_email] and session[password] respectively.
    Step 2: Now you are ready to log in. Using the fetch function in the Scrape class, you create an associative array to contain the form values you want to post. The other thing you will need to do is uncomment the lines for CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR. Cookies will be required to stay logged in and scrape around. The paths to the cookie files need to be writable by your app. Also you will need to uncomment the line about CURLOPT_FOLLOWLOCATION.

        $data = array('session[username_or_email]' => "bradino", 'session[password]' => "secret");
        $scrape->fetch('https://twitter.com/sessions',$data);

    Step 1.5: Oops, that didn't work. All I got back was 403 Forbidden: The server understood the request, but is refusing to fulfill it. Ahhh, I see another variable called authenticity_token. I bet Twitter was looking for that. So let's back up and first hit twitter.com to get the authenticity_token variable, and then make the login post request with that variable included in our array of parameters.

        $scrape->fetch('https://twitter.com');
        $data = array('session[username_or_email]' => "bradino", 'session[password]' => "secret");
        $data['authenticity_token'] = $scrape->fetchBetween('name="authenticity_token" type="hidden" value="','"',$scrape->result);
        $scrape->fetch('https://twitter.com/sessions',$data);
        echo $scrape->result;

    So that's basically it. Now you are logged in and can scrape around and request other pages as you normally would. Sorry it wasn't a longer post. I really do enjoy this kind of stuff, so if anyone has a request, hit me up.
    Errors?
    1) Make sure that you are properly parsing the token variable.
    2) Make sure that you uncommented the lines about CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR; those options need to be enabled, and be sure the path set is writable by your application.
    3) Make sure that the path to the cookie file is writable and that it is getting data written to it.
    4) If you get a message about being redirected, you need to uncomment the line about CURLOPT_FOLLOWLOCATION; that option needs to be enabled (true).

    Read the article

  • IPS Facets and Info files

    - by mkupfer
    One of the unusual things about IPS is its "facet" feature. For example, if you're a developer using the foo library, you don't install a libfoo-dev package to get the header files. Intead, you install the libfoo package, and your facet.devel setting controls whether you get header files. I was reminded of this recently when I tried to look at some documentation for Emacs Org mode. I was surprised when Emacs's Info browser said it couldn't find the top-level Info directory. I poked around in /usr/share but couldn't find any info files. $ ls -l /usr/share/info ls: cannot access /usr/share/info: No such file or directory Was I was missing a package? $ pkg list -a | egrep "info|emacs" editor/gnu-emacs 23.1-0.175.0.0.0.2.537 i-- editor/gnu-emacs/gnu-emacs-gtk 23.1-0.175.0.0.0.2.537 i-- editor/gnu-emacs/gnu-emacs-lisp 23.1-0.175.0.0.0.2.537 --- editor/gnu-emacs/gnu-emacs-no-x11 23.1-0.175.0.0.0.2.537 --- editor/gnu-emacs/gnu-emacs-x11 23.1-0.175.0.0.0.2.537 i-- system/data/terminfo 0.5.11-0.175.0.0.0.2.1 i-- system/data/terminfo/terminfo-core 0.5.11-0.175.0.0.0.2.1 i-- text/texinfo 4.7-0.175.0.0.0.2.537 i-- x11/diagnostic/x11-info-clients 7.6-0.175.0.0.0.0.1215 i-- $ Hmm. I didn't have the gnu-emacs-lisp package. That seemed an unlikely place to stick the Info files, and pkg(1) confirmed that the info files were not there: $ pkg contents -r gnu-emacs-lisp | grep info usr/share/emacs/23.1/lisp/info-look.el.gz usr/share/emacs/23.1/lisp/info-xref.el.gz usr/share/emacs/23.1/lisp/info.el.gz usr/share/emacs/23.1/lisp/informat.el.gz usr/share/emacs/23.1/lisp/org/org-info.el.gz usr/share/emacs/23.1/lisp/org/org-jsinfo.el.gz usr/share/emacs/23.1/lisp/pcvs-info.el.gz usr/share/emacs/23.1/lisp/textmodes/makeinfo.el.gz usr/share/emacs/23.1/lisp/textmodes/texinfo.el.gz $ Well, if I have what look like the right packages but don't have the right files, the next thing to check are the facets. The first check is whether there is a facet associated with the Info files: $ pkg contents -m gnu-emacs | grep usr/share/info dir facet.doc.info=true group=bin mode=0755 owner=root path=usr/share/info file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-1 [...] file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-2 [...] [...] Yes, they're associated with facet.doc.info. Now let's look at the facet settings on my desktop: $ pkg facet FACETS VALUE facet.locale.en* True facet.locale* False facet.doc.man True facet.doc* False $ Oops. I've got man pages and various English documentation files, but not the Info files. Let's fix that: # pkg change-facet facet.doc.info=True Packages to update: 970 Variants/Facets to change: 1 Create boot environment: No Create backup boot environment: Yes Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 970/970 181/181 9.2/9.2 PHASE ACTIONS Install Phase 226/226 PHASE ITEMS Image State Update Phase 2/2 PHASE ITEMS Reading Existing Index 8/8 Indexing Packages 970/970 # Now we have the info files: $ ls -F /usr/share/info a2ps.info dir@ flex.info groff-2 regex.info aalib.info dired-x flex.info-1 groff-3 remember ...

    Read the article

  • Transform your Oracle Tutor Documents to Your Corporate Standard

    - by mary.keane
    You have all of your company's processes documented in Oracle Tutor, and now you want to get the HTML files to reflect your company's corporate look and feel. How are you going to do this without having an HTML guru change every HTML page? The good news is you do not need to be an HTML expert to make minor changes to your documents. All Tutor HTML files are attached to a group of style sheets, so any changes you make to the style sheets will immediately be reflected in all of your HTML documents. If you want to give it a try, here's what you do (please note that these tips are applicable to release Oracle Tutor 12.2 and greater): Navigate to your Tutor HTML directory, and copy into a draft folder a representative group of HTML files (don't forget the flowchart image files that are associated with the procedures). You'll also need to copy the following files: tutor.css, tutor_notabs.css, tutor_scripts.js, tutor_tabs.css, flow_icon.gif. Here's the default look of the Oracle Tutor desk manual. Let's say I want to use my company's corporate style in the HTML documents. At Oracle, we use Oracle Red (FF0000), Oracle Black (000000), and Oracle Gray (666666), so I want to incorporate those colors into the Tutor HTML files. I open tutor.css from the draft folder in a text editor. My preference is to use Notepad, but there are others. Make sure, however, that it is a text editor and not a word processing program. I want to change the headings to Oracle Red. The desk manual title is listed as DMPAGETITLE, so I find that in tutor.css. The style names in the style sheets are descriptive, but sometimes you may have to experiment to find the right style (this is why you're working in a draft folder). I change the color attribute to FF0000, and then I save the document. Now I look at one of the desk manuals in my draft folder. I've successfully changed the title of the desk manual, so, now that I have more confidence that I can do this, I start changing other styles. I need to make changes in the tutor_tabs.css file as well, so I open that document. Then I look at one of the procedures. Oops! All that red is distracting, and the users may not be able to follow their procedures. So I go back to the corporate style guide, and I find some shades of gray that have been approved. So I use that, and it is now more readable. It's good enough for a first draft, and I would show it to my colleagues at this point to get their input. On my next blog, I'll discuss how to change the flowchart colors to match your corporate look and feel. Have you used the cascading style sheets to change the look of your Tutor documents? If so, let us know what you've done in your post. Mary R. Keane, Senior Development Manager, Oracle Tutor & UPK Content

    Read the article
