Search Results

Search found 4426 results on 178 pages for 'bunch'.

Page 156/178

  • InDesign Import XML into Automatic Page generation, data merge

    - by taudep
    I've created some InDesign pages that I want to use as templates. I've created an XML file with all the appropriate data. I want to merge the XML data with the InDesign page and have a few hundred pages automatically generated. I've been reading online and working with InDesign's "Import XML" features without any luck. The documentation has been pretty poor for me, and Google searches haven't returned much that's fruitful. Here are my present steps: (1) I create a Master Page of my template. (2) I add a bunch of text frames where I want the imported data from the XML file to be placed. (3) I open the "Tags" window and import an XML file. (4) I mark my text frames in the Master Document with the appropriate tags. (5) I then add a lot of pages (like 200) to the document. (6) Then I use "Import XML" to try and get the data brought in and filled across all 200 pages. This is where I fail. There's something I'm missing. It might be that InDesign doesn't work as I'm expecting... Does anyone have any good tips for mail-merge-like functionality with an XML document and auto-generation of InDesign pages? By the way, here's an example of Adobe's great documentation for merging repeated XML elements. There's got to be more... InDesign CS4 Docs: XML-Importing XML-Working with Repeating Data. Here's some of the sample XML; notice the item will repeat. I've also truncated the data in the "desc" tag: <output> <item> <user_name>taude</user_name> <date>2009-02-21</date> <title>Wishful Thinking</title> <desc>Skiing up in Vermont on a beautiful day. This photo of</desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/96104200949a162672e1996.15963073.jpeg</thumbnail> </item> <item> <user_name>taude</user_name> <date>2009-02-22</date> <title>Skiing Self Portrait</title> <desc>I was inspired by ML's self-portrait while </desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/36547696749a2c5782308e0.91477014.jpeg</thumbnail> </item> </output> Here's what my imported XML looks like with the InDesign Structure:

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968 new values are: 183936 245248 367872 After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long! Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval which defaults to 600 and controls periodic flushing of the route cache The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?
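
    For readers wanting to experiment with the same knobs, here is a rough sketch of the usual test-then-persist workflow on a Debian/Ubuntu box; the parameter and value shown are illustrative only, not a recommendation for this HAProxy workload:

      # inspect the current values
      sysctl net.ipv4.tcp_mem net.ipv4.route.gc_elasticity net.ipv4.route.secret_interval
      # try a change at runtime (not persistent across reboots)
      sudo sysctl -w net.ipv4.route.gc_elasticity=4
      # persist it once the behaviour looks right
      echo 'net.ipv4.route.gc_elasticity = 4' | sudo tee -a /etc/sysctl.conf
      sudo sysctl -p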

    Read the article

  • Why are people trying to connect to my network on TCP port 445?

    - by Solignis
    I was playing with my new syslog server and had my m0n0wall firewall logs forwarded to it as a test. I noticed a bunch of recent firewall log entries saying that it blocked other WAN IPs from my ISP (I checked) from connecting to me on TCP port 445. Why would a random computer be trying to connect to me on a port apparently used for Windows SMB shares? Just internet garbage? A port scan? I am just curious. Here is what I am seeing: Mar 15 23:38:41 gateway/gateway ipmon[121]: 23:38:40.614422 fxp0 @0:19 b 98.82.198.238,60653 -> 98.103.xxx.xxx,445 PR tcp len 20 48 -S IN broadcast Mar 15 23:38:42 gateway/gateway ipmon[121]: 23:38:41.665571 fxp0 @0:19 b 98.82.198.238,60665 -> 98.103.xxx.xxx,445 PR tcp len 20 48 -S IN Mar 15 23:38:43 gateway/gateway ipmon[121]: 23:38:43.165622 fxp0 @0:19 b 98.82.198.238,60670 -> 98.103.xxx.xxx,445 PR tcp len 20 48 -S IN broadcast Mar 15 23:38:44 gateway/gateway ipmon[121]: 23:38:43.614524 fxp0 @0:19 b 98.82.198.238,60653 -> 98.103.xxx.xxx,445 PR tcp len 20 48 -S IN broadcast Mar 15 23:38:44 gateway/gateway ipmon[121]: 23:38:43.808856 fxp0 @0:19 b 98.82.198.238,60665 -> 98.103.xxx.xxx,445 PR tcp len 20 48 -S IN Mar 15 23:38:44 gateway/gateway ipmon[121]: 23:38:43.836313 fxp0 @0:19 b 98.82.198.238,60670 -> 98.103.xxx,xxx,445 PR tcp len 20 48 -S IN broadcast Mar 15 23:38:48 gateway/gateway ipmon[121]: 23:38:48.305633 fxp0 @0:19 b 98.103.22.25 -> 98.103.xxx.xxx PR icmp len 20 92 icmp echo/0 IN broadcast Mar 15 23:38:48 gateway/gateway ipmon[121]: 23:38:48.490778 fxp0 @0:19 b 98.103.22.25 -> 98.103.xxx.xxx PR icmp len 20 92 icmp echo/0 IN Mar 15 23:38:48 gateway/gateway ipmon[121]: 23:38:48.550230 fxp0 @0:19 b 98.103.22.25 -> 98.103.xxx.xxx PR icmp len 20 92 icmp echo/0 IN broadcast Mar 15 23:43:33 gateway/gateway ipmon[121]: 23:43:33.185836 fxp0 @0:19 b 98.86.34.225,64060 -> 98.103.xxx.xxx,445 PR tcp len 20 48 -S IN broadcast Mar 15 23:43:34 gateway/gateway ipmon[121]: 23:43:33.405137 fxp0 @0:19 b 98.86.34.225,64081 -> 98.103.xxx.xxx,445 PR tcp len 20 48 -S IN Mar 15 23:43:34 gateway/gateway ipmon[121]: 23:43:33.454384 fxp0 @0:19 b 98.86.34.225,64089 -> 98.103.xxx.xxx,445 PR tcp len 20 48 -S IN broadcast I blacked out part of my IP address for my own safety.
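
    If you want to watch this kind of traffic from a Linux box on the same WAN segment rather than just read the firewall log, a quick tcpdump sketch (the interface name eth0 is a placeholder):

      # watch inbound SMB connection attempts live
      sudo tcpdump -ni eth0 'tcp dst port 445'
      # rough count of distinct sources over a short capture
      sudo tcpdump -ni eth0 -c 200 'tcp dst port 445' 2>/dev/null | awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -rn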

    Read the article

  • Effect of HOME on libreoffice to convert to pdf as non-root user

    - by user1032531
    I installed libreoffice-headless and can convert documents when logged on as root. I then tried doing so as another user, and it didn't show an error, but didn't convert the file. I then found that if I get rid of the HOME=/tmp/ayb, it works with the other user. Doesn't HOME=/tmp/ayb just allow files to default to this directory if not specified? (Sorry, I tried to search "Linux HOME", but as you probably expect, received a bunch of non-relevant results). If not, what is the purpose of specifying HOME? Why does setting HOME prevent it from converting for non-root users? Note that /tmp and /tmp/ayb are both 0777. Thank you [root@desktop ~]# yum install libreoffice-headless [root@desktop ~]# yum install libreoffice-writer [root@desktop ~]# ls -l total 48 -rwxrwxrwx. 1 NotionCommotion NotionCommotion 48128 Jul 30 02:38 document_34.doc [root@desktop ~]# HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export [root@desktop ~]# rm d*.pdf rm: remove regular file `document_34.pdf'? y [root@desktop ~]# /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export [root@desktop ~]# rm d*.pdf rm: remove regular file `document_34.pdf'? y [root@desktop ~]# su NotionCommotion sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc sh-4.1$ rm d*.pdf rm: cannot remove `d*.pdf': No such file or directory sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc sh-4.1$ rm d*.pdf rm: cannot remove `d*.pdf': No such file or directory sh-4.1$ exit exit [root@desktop ~]# su NotionCommotion sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export sh-4.1$ rm d*.pdf sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc sh-4.1$ rm d*.pdf rm: cannot remove `d*.pdf': No such file or directory sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc sh-4.1$ rm d*.pdf rm: cannot remove `d*.pdf': No such file or directory sh-4.1$
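
    A hedged guess at the mechanism: headless LibreOffice wants a writable user profile under $HOME, and if the first run (as root) created that profile with root ownership, later runs as the other user can fail silently. A minimal way to test this, reusing the paths and username from the transcript above:

      # run as the unprivileged user with a HOME it can actually write to
      sudo -u NotionCommotion env HOME=/tmp/ayb /usr/bin/libreoffice --headless --convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
      # check who owns the profile that got created there
      ls -la /tmp/ayb/.config/libreoffice 2>/dev/null || ls -la /tmp/ayb/.libreoffice 2>/dev/null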

    Read the article

  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a client's classic ASP webapp to a new IIS7-based server. The site contains some .js files which contain JavaScript but also classic ASP in <% %> tags, with a bunch of conditional statements designed to spit out pieces of JavaScript based on session state variables. Here's a brief example of what the file could be like.... var arrHOFFSET = -1; var arrLeft ="<"; var arrRight = ">"; <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %> addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","",""); <% Else %> <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %> <% Else %> addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","",""); <% End If %> <% End If %> defineSubmenuProperties(135,"center","center",-3,0,"","","","","","",""); Currently this file (named custom.js for example) will start throwing JS errors, because the server doesn't seem to recognize the ASP code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through ASP parsing. However I am not sure how to go about doing this. Here is what I've tried so far... Under the Server node in IIS under HANDLER MAPPINGS I created a new Script Map with the following settings. Request Path: *.js Executable: C:\Windows\System32\inetsrv\asp.dll Name: ASPClassicInJSFiles Mapping: Invoke Handler only if request is mapped to : File Verbs: All verbs Access: Script I also created a similar handler under the site node itself. Under MIME Types .js is defined as application/x-javascript None of these work. If I simply rename the file to have a .asp extension then things work, however this app is poorly coded and has literally hundreds of files with the .js files included in them under various names and locations, so rename, search and replace is the last option I have.

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this: Different data sets (filesystems or subtrees) can have different RAID levels so I can choose performance, space overhead, and risk trade-offs differently for different data sets while using only a few physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0). If each data set has an explicit size or size limit, then the ability to grow and shrink the size limit (offline if need be) Avoid out-of-kernel modules R/W or read-only COW snapshots. If it's a block-level snapshot, the filesystem should be synced and quiesced during a snapshot. Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindles and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution) Not required but nice-to-haves: Transparent compression, per-file or subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good) Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks) Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level. Space-efficient packing of small files I tried ZFS on Linux but the limitations were: Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager Write IOPS does not scale with number of devices in a raidz vdev. Cannot add disks to raidz vdevs Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks ext4 on LVM2 looks like an option except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves so I was wondering if there is something better out there. I did look at LVM dangers and caveats but then again, no system is perfect.
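
    On the LVM2 side of that question, recent lvm2 can in fact carry different RAID levels per logical volume on the same set of physical volumes and grow them later; a sketch under the assumption of three spare disks (device names are hypothetical, sizes arbitrary):

      pvcreate /dev/sdb /dev/sdc /dev/sdd
      vgcreate tank /dev/sdb /dev/sdc /dev/sdd
      lvcreate --type raid1 -m 2 -L 50G  -n critical  tank   # three-way mirror
      lvcreate --type raid5 -i 2 -L 200G -n important tank   # RAID5 across the three disks
      lvcreate -i 3 -L 100G -n scratch tank                  # plain stripe (RAID0-style)
      # grow a volume later, then resize the filesystem on top
      lvextend -L +20G tank/important && resize2fs /dev/tank/important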

    Read the article

  • Where / how does Apache generate the HTML code used in the default directory listing?

    - by Ellen B
    I am looking to modify the HTML that apache generates for its default directory listing. I already know how to create a HEADER.html file that gets included for every directory listing. I am attempting to change the actual html that Apache generates for the file listing itself; right now my MacOS apache generates this for example: <table><tr><th><img src="/icons/blank.gif" alt="[ICO]"></th><th><a href="?C=N;O=D">Name</a></th><th><a href="?C=M;O=A">Last modified</a></th><th><a href="?C=S;O=A">Size</a></th><th><a href="?C=D;O=A">Description</a></th></tr><tr><th colspan="5"><hr></th></tr> <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="ios-prototype/">ios-prototype/</a> </td><td align="right">07-Dec-2012 16:47 </td><td align="right"> - </td><td>&nbsp;</td></tr> <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="magneto-git/">magneto-git/</a> </td><td align="right">07-Dec-2012 16:46 </td><td align="right"> - </td><td>&nbsp;</td></tr> <tr><th colspan="5"><hr></th></tr> </table> I want a different HTML structure (like, say, an OL) generated when my server spits back directory listings. (FYI I'm doing a bunch of mobile browser prototyping with my local webserver & need to make it not totally horrible to browse with fingers to the right test directory — the table structure sucks, and while I can mod a lot of it with CSS it's still going to be ganky.)

    Read the article

  • Windows Media Player Object with Video_TS Files - No Sound Anymore

    - by user1624184
    I copied a bunch of my DVDs (yes, owned) to a drive and created a basic webpage to be able to search and play them. I am using the code at the bottom to create a Windows Media Player object and then play the video. The video files are pulled off the DVD in the original format so each movie has files like this: VTS_01_0.BUP VTS_01_0.IFO VTS_01_1.VOB VTS_01_2.VOB VTS_01_3.VOB VTS_01_4.VOB VTS_01_5.VOB VTS_01_6.VOB This all worked great until just recently. On my dev machine, I can play the videos but now, all of a sudden, there is no sound. I have tried the following but no luck: The video is not muted and the sound is at 50% Under the sound icon, under mixer, I checked IE and it is fine Sound works fine using another program, says WinAmp Opening Windows Media Player directly from the Start Menu and then playing the same movie works fine and I get sound I ensured that the option in IE to play sound on websites is checked I can play sound on other people's websites where they have embedded WMP files I tried resetting all of the IE settings on the Advanced tab in IE I tried my website, with my code below on another computer, AND IT WORKS FINE! I tried copying the media files listed above from the computer that works fine to my dev computer and it still doesn't work. If I try using a ".wmv" file, the sound does work By the way, I am using Win7 with IE8. As you can see, this is driving me crazy! Why would it stop working on my one computer and the same code and files work fine on another computer? Any help would be greatly appreciated. <OBJECT id="mediaPlayer" width="640px" height="480px" style="position:absolute; left:0px; top:0px;" CLASSID="CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6" type="application/x-oleobject"> <PARAM NAME="URL" VALUE="E:\test\VIDEO_TS\VIDEO_TS.IFO" /> <PARAM NAME="SendPlayStateChangeEvents" VALUE="True"/> <PARAM NAME="AutoStart" VALUE="True"/> <PARAM NAME="uiMode" VALUE="full"/> <PARAM NAME="volume" VALUE="50" /> <PARAM NAME="PlayCount" VALUE="9999"/> <PARAM NAME="fullScreen" VALUE="true"/> <PARAM NAME="enableContextMenu" VALUE="true"/> </OBJECT>

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Question: what's the best way to handle sub-projects in eclipse when using git for SCM? Here's the situation. I have a few git projects with a directory structure laid out more or less like this: simpleproj app www admin demo lib model orm view model user blah ... storeproj app www about mobile fbapp lib model orm view model user message cart product merchant Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward eclipse + egit, because some of our Windows guys who aren't used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules. So, my solution has been a rather crude one involving symlinks... let's say my directory structure on my dev box is generally laid out like this: ~/projects/ bigproj .git app lib model (-> ~/lib/model/src/) orm (-> ~/lib/orm/src/) neatproj .git app lib view (-> ~/lib/view/src/) oldproj .git app lib orm (-> ~/lib/orm/src/) ~/lib/ model .git src README.md orm .git src COPYING view .git src ...the symlinks link to a subdirectory of the directory containing the git repo, so eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within eclipse. After committing from eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable also, probably more so than keeping a separate history of the libs for each project... but eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing. So this brings me to my question. I'd like to know how to either: get eclipse+egit to see the symlinks as symlinks if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D Note: tried to tag this as git-submodules, but was not allowed :( * should I make them relative or absolute? Either way it's a mess. Also, will symlinks work on Windows? I know there's something similar, but you need a 3rd-party tool to manage them AFAIK; I doubt these would translate well.
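
    For comparison, the plain-CLI submodule workflow that the symlinks are standing in for looks roughly like this; the paths are adapted from the example layout and are only illustrative (and the question's point still stands: the egit of that era could not stage or update these):

      cd ~/projects/bigproj
      git submodule add ~/lib/orm lib/orm        # path or URL of the shared repo
      git submodule add ~/lib/model lib/model
      git commit -m "Track shared libs as submodules"
      # on a fresh clone of bigproj
      git submodule update --init --recursive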

    Read the article

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets fewer than 5,000 uniques a day. Yes, you read that right. Go Drupal. Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions: When apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X." or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right? What's a good, noob-friendly way to monitor my current apache performance? My developer friends tell me to "just use Top, dude," but Top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in Top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator about how apache is performing and then show me a list of processes or pages that are sucking right now. This way, I could know when performance is bad and then what's causing it to be so bad. Why does PHP memory matter? My site apparently has a 30 MB memory footprint. Will it run faster if I bring that number down? Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.
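
    Some low-tech starting points from the shell for the same questions; log locations vary by distro, so treat the paths as examples rather than exact answers:

      # why did apache die? the error log usually says (segfault, MaxClients reached, etc.)
      sudo tail -n 200 /var/log/apache2/error.log
      # did the kernel kill it for running out of memory?
      dmesg | grep -iE 'oom|out of memory|killed process'
      # which apache children are eating RAM right now (a more readable alternative to plain top)
      ps -C apache2 -o pid,rss,pcpu,etime,cmd --sort=-rss | head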

    Read the article

  • Wake On Lan only works on first boot, not subsequent ones

    - by sp3ctum
    I have converted my old Dell Latitude D410 laptop to a server for tinkering. It is running an updated Debian Squeeze (6) with a Xen-enabled kernel (I want to toy with virtual machines later on). I am running it 'headless' via an ethernet connection. I am struggling to enable Wake On Lan for the box. I have enabled the setting in the BIOS, and it works nicely, but only for the first time after the power cord is plugged in. Here is my test: (1) Plug in the power cord, don't boot yet. (2) Send a magic Wake On Lan packet from a test machine (Ubuntu) using the wakeonlan program. (3) The server is expected to start (it does, every time). (4) Once the server has booted, log in via ssh and shut it down via the operating system. (5) After shutdown, wake the server up via WOL again (fails every time). Some observations: Right after step 1 I can see the integrated NIC has a light on. I deduce this means the NIC gets adequate power and that the ethernet cable is connected to my switch. This light is not on after step 4 (the shutdown stage). The light comes back on after I disconnect and reconnect the power cord, after which WOL works as well. After step 4 I can verify that wake on lan is enabled via the ethtool program (repeatable each time) This blog post suggested the problem may lie in the fact that the motherboard might not be giving adequate power to the NIC after shutdown, so I copied an acpitool script that supposedly should signal the system to give the needed power to the card when shut down. Obviously it did not fix my issue. I have included the relevant power settings in the paste below. I have tried different combinations of options to shutdown (the program), as well as the poweroff program. I even tried "telinit 0", which I figured would do the most direct shutdown via software. If I keep the laptop's power button pressed down and do a hard shutdown this way, the light on the ethernet port stays lit and a WOL is possible. I copied a bunch of hopefully useful information in this paste. I have tried this with the laptop battery connected and without it; I get the same result. Pressing the power button briefly causes the system to shut down with the message "The system is going down for system halt NOW!", and WOL is still unsuccessful.
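
    One common culprit on Linux is the WOL flag on the NIC being cleared during an OS shutdown; a hedged sketch of how to check and re-arm it (eth0 and the MAC address are placeholders):

      # confirm the driver is still set to wake on magic packet
      sudo ethtool eth0 | grep -i wake-on          # want "Wake-on: g"
      # re-arm it explicitly, e.g. from a shutdown script in /etc/rc0.d
      sudo ethtool -s eth0 wol g
      # then, from the other machine, send the magic packet to the NIC's MAC
      wakeonlan 00:11:22:33:44:55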

    Read the article

  • configuring slime in emacs

    - by CodeKingPlusPlus
    I am in the process of configuring slime for emacs. So far I have read about basic functionality for common lisp such as C-c C-q which invokes the command slime-close-parens-at-point which places the proper number of parens where your mouse is. Another command that seemed cool was invoked by C-c C-c and it would pass the code you are editing in a buffer to the REPL, and "compile" it. Why won't these commands work for me? Anyway, I have downloaded slime via M-x list-packages and do not seem to have this functionality (C-h w and then any of these commands tells me that these commands do not exist). So, I saw a bunch of other slime extensions such as 'slime-repl', 'slime-fuzzy' and 'hippie-expand-slime'. So I again used M-x list-packages and downloaded them. Still I did not have these commands. Here is the content of my emacs file relevant to slime: ;;;Common Lisp and Slime (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-20130626.1151") (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-repl-201000404") (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/hippie-expand-slime-20130226.1656") (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-fuzzy-20100404") (require 'slime) (setq slime-lisp-implementations `((sbcl ("/usr/bin/sbcl")) (ecl ("/usr/bin/ecl")) (clisp ("/usr/bin/clisp" "-q -I")))) (require 'slime-repl) (require 'slime-fuzzy) (require 'hippie-expand-slime) When I execute M-x slime I get the following message in the inferior-lisp buffer where I can execute common lisp code (however, shouldn't this be the slime-repl since I required it?): STYLE-WARNING: redefining EMACS-INSPECT (#<BUILT-IN-CLASS T>) in DEFMETHOD STYLE-WARNING: Implicitly creating new generic function STREAM-READ-CHAR-WILL-HANG-P. WARNING: These Swank interfaces are unimplemented: (DISASSEMBLE-FRAME SLDB-BREAK-AT-START SLDB-BREAK-ON-RETURN) ;; Swank started at port: 46533. Then a slime-error buffer is created with the contents: Invalid protocol message: Symbol "CREATE-REPL" not found in the SWANK package. Line: 1, Column: 28, File-Position: 28 Stream: #<SB-IMPL::STRING-INPUT-STREAM {10056B9C33}> (:emacs-rex (swank:create-repl nil) "COMMON-LISP-USER" t 5) How should I modify my emacs file to give me the functionality of those commands? In my emacs file am I not loading the necessary files? Do I need to install an additional package? If you need more information let me know! All help is much appreciated!

    Read the article

  • Apache server completely freezes until it gets restarted

    - by nbv4
    My server does this every few days. What sucks is that it always seems to do this right after I go to bed, so when I wake up, I'm greeted with the fact that my server has been down for the past 6 or 7 hours. When I first noticed this, I added a cronjob that tries to restart the server every 15 minutes, but I guess that didn't fix it. Once I noticed the server was down, I ran this command: /etc/init.d/apache2 restart * Restarting web server apache2 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName ... waiting ...........................................................apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName httpd (pid 17597) already running ...which is odd, because a restart should restart the server, even if it's already running, correct? I eventually had to "stop" then "start" to get it working again. I then looked through the logs, and found something very weird. It seems that around the time the server crashed, the logs have entries that are wildly out of order. It looks a little like this: xx.xxx.xxx.x - - [21/Apr/2010:06:32:05 -0400] "GET / blah" xx.xxx.xxx.x - - [21/Apr/2010:06:51:25 -0400] "GET / blah" x.xx.xxx.xxx - - [21/Apr/2010:06:38:23 -0400] "GET / blah" xxx.xx.xx.xx - - [21/Apr/2010:06:31:56 -0400] "GET / blah" xxx.xx.xx.xx - - [21/Apr/2010:06:51:49 -0400] "GET / blah" xx.xx.xxx.xx - - [21/Apr/2010:06:33:20 -0400] "GET / blah" I don't think the problem is memory, because this tells me that right before the crash, memory usage is fine. I'm running apache with the worker mpm, here are the settings for that: <IfModule mpm_worker_module> StartServers 1 MaxClients 100 MinSpareThreads 5 MaxSpareThreads 10 ThreadsPerChild 10 MaxRequestsPerChild 3000 </IfModule> This apache server is running a bunch of stuff, but most of the traffic comes from a Django project I'm hosting that uses mod_wsgi. There is also a Simple Machines forum running off of mod_fcgid. Those settings are below: <IfModule mod_fcgid.c> MaxRequestsPerProcess 500 MaxProcessCount 3 AddHandler fcgid-script .php .fcgi AddHandler cgi-script .cgi .pl FCGIWrapper "/usr/bin/php-cgi" .php </IfModule> Anyone know of anything else I can check? I've just about tweaked every single setting I can think of, yet these freezes still happen.
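
    When it wedges again, a snapshot of what the worker threads are doing is usually more informative than the access log; a rough sketch, assuming mod_status is enabled and reachable locally (URL, paths and pid are illustrative):

      # did the worker MPM run out of slots?
      sudo grep -i 'MaxClients' /var/log/apache2/error.log | tail
      # scoreboard snapshot; many workers stuck in 'W' (sending reply) often points at a hung backend
      curl -s 'http://localhost/server-status?auto'
      # see what one busy child is actually blocked on (pid is a placeholder)
      sudo strace -f -e trace=network,read,write -p 12345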

    Read the article

  • Is it safe to enable forced ASLR via EMET on Windows?

    - by D.W.
    I'd like to enable forced ASLR for all DLLs on Windows. Is this safe? Background: ASLR is an important security mechanism that helps defend against code injection attacks. DLLs can opt into ASLR, and most do, but some DLLs have not opted into ASLR. If a program loads even a single non-ASLRized DLL, then the program doesn't get the benefit/protection of ASLR. This is a problem, because there are a non-trivial number of DLLs that haven't opted into ASLR. For instance, it was recently revealed that Dropbox injects a DLL into a bunch of processes, and the Dropbox DLL doesn't have ASLR turned on, which negates any ASLR protection they otherwise would have had. Unfortunately, there are many other widely used DLLs that haven't opted into ASLR. This is bad for system security. Microsoft provides several ways to turn on ASLR for all DLLs, even ones that haven't opted into ASLR: On Windows 7 and Windows Server 2008, you can enable "Force ASLR" in the registry. On all Windows versions, you can use Microsoft's EMET tool and enable EMET's "Mandatory ASLR" option. These methods are possible because all DLLs are compiled as position-independent code and they can be relocated to a random location even if they haven't opted into ASLR. These options will ensure that ASLR is turned on, even if the developers of the DLL forgot to opt into ASLR. Thus, forcing on ASLR systemwide may help system security. In principle, turning on forced ASLR could potentially break a poorly-written DLL, so there is some risk of breakage. I'm interested in finding out just how significant this risk is. I have the suspicion that this kind of breakage might be extremely rare. Here's what I've been able to find: Microsoft has done compatibility testing with several dozen widely used applications. The only one they found where Mandatory ASLR causes problems is Windows Media Player. All the other applications continue working fine. (See pp.39-41 of this document.) I've seen some anecdotal reports that enabling "Mandatory ASLR"/"Force ASLR" is fine and unlikely to cause problems. CERT reports that AMD and ATI video drivers used to crash if you enabled forced ASLR, but their latest drivers have now fixed this problem. They don't show any other drivers with this problem. A forum post from Microsoft shows no other applications with compatibility problems if ASLR is forced on, as of 2011. A user reports that borderlands.exe, a video game by Gearbox Software, crashes if you turn on mandatory ASLR. What else should I know? Is it relatively safe to turn on Force ASLR / Mandatory ASLR systemwide to harden the security of my system, or will I be in for a world of pain and broken applications? How significant is the risk of compatibility problems and broken applications?

    Read the article

  • Windows 2008 IIS 7.0 HTTP to HTTPS Redirect -- Versus IIS 6.0 Mechanism

    - by Dan7el
    This topic, creating a mechanism for redirection from HTTP to HTTPS on a Windows 2008 server running IIS 7.0, is a much written-about topic on the Internet. How this is done is really not so much my issue. My issue is more one of explaining why this can't be done with the standard HTTP Redirect module that ships with Windows 2008 IIS 7.0. Instead, there are other methods needed that are more arduous. First, the IIS 6.0 method requires no externally available modules nor does it require any additional modifications to the web.config or any type of other development effort. It's outlined here: http://blogs.microsoft.co.il/blogs/dorr/archive/2009/01/13/how-to-force-redirection-from-http-to-https-on-iis-6-0.aspx And, you can see the basic steps are to run the snap-in, get the properties on the site, and do some modifications. Presto, you have the HTTP-to-HTTPS redirect set up. Now, on the IIS 7.0 platform, it doesn't seem this simple. An initial search found the following site: http://www.sslshopper.com/iis7-redirect-http-to-https.html which has two separate approaches: 1. Involves installing a separately available Microsoft module -- URL Rewrite Module, and then adding XML to the web.config. 2. Custom Error Page. ...there might be other methods, but these are the basic ones and the first is listed as the primary method. But wait... there exists in IIS 7.0 an HTTP Redirect module. So... why can't I use the HTTP Redirect module to do this very thing? This is really my big question. I need to know this because my management is going to insist I use the HTTP Redirect module and set up the HTTP to HTTPS redirect in a similar fashion to how we do it in IIS 6.0. Can someone please explain to me, in clean, simple, easy-to-understand terms that both I and my management can follow, why I need to go get the URL Rewrite Module and install it on the server and make the web.config changes suggested by the article, instead of simply using the HTTP Redirect module that's already installed on the site? Thanks a bunch.

    Read the article

  • Bypassing "Found New Hardware Wizard" / Setting Windows to Install Drivers Automatically

    - by Synetech inc.
    Hi, My motherboard finally died after the better part of a decade, so I bought a used system. I put my old hard-drive and sound-card in the new system, and connected my old keyboard and mouse (the rest of the components—CPU, RAM, mobo, video card—are from the new system). I knew beforehand that it would be a challenge to get Windows to boot and install drivers for the new hardware (particularly since the foundational components are new), but I am completely unable to even attempt to get through the work of installing drivers for things like the video card because the keyboard and mouse won't work (they do work, in the BIOS screen, in DOS mode, in Windows 7, in XP's boot menu, etc., just not in Windows XP itself). Whenever I try to boot XP (in normal or safe mode), I get a bunch of balloons popping up for all the new hardware detected, and a New Hardware Found Wizard for Processor (obviously it has to install drivers for the lowest-level components on up). Unfortunately I cannot click Next since the keyboard and mouse won't work yet because the motherboard drivers (for the PS/2 or USB ports) are not yet installed. I even tried a serial mouse, but to no avail—again, it does work in DOS, 7, etc., but not XP because it doesn't have the serial port driver installed. I tried mounting the SOFTWARE and SYSTEM hives under Windows 7 in order to manually set the "unsigned drivers warning" to ignore (using both of the driver-signing policy settings that I found references to). That didn't work; I still get the wizard. They are not even fancy, proprietary, third-party, or unsigned drivers. They are drivers that come with Windows—as the drivers for CPU, RAM, IDE controller, etc. tend to be. And the keyboard and mouse drivers are the generic ones at that (but like I said, those are irrelevant since the drivers for the ports that they are connected to are not yet installed). Obviously at some point in time over the past several years, a setting got changed to make Windows always prompt me when it detects new hardware. (It was also configured to show the Shutdown Event Tracker on abnormal shutdowns, so I had to turn that off so that I could even see the desktop.) Oh, and I tried deleting all of the PNF files so that they get regenerated, but that too did not help. Does anyone know how I can reset Windows to at least try to automatically install drivers for new hardware before prompting me if it fails? Conversely, does anyone know how exactly one turns off automatic driver installation (and prompt with the wizard)? Thanks a lot.

    Read the article

  • Google Apps For Business, SSO, AD FS 2.0 and AD

    - by Dominique dutra
    We are a small company with 22 people in the office. We had a lot of problems with e-mail in the past, so I decided to change over to Google Apps for Business. It is the perfect solution for us, except for one thing: I need to be able to control access to the mailboxes. Only users inside the office authenticated to AD, or users authenticated to our VPN, should be able to connect to Gmail. From what I've read it is possible using the SSO (Single Sign On) solution provided by Google - but I am having some trouble finding consistent information about it. First of all, our infrastructure: Windows Server 2008 R2 Active Directory, one domain only. Kerio Control for QoS and VPN. That's about it on our side. On Google Apps' side, I have one account, and 03 domains that my users use to log in. The main domain has most of the users, but there are a couple of people that log in using one of the subdomains. I have 03 domains because I run mail for 03 companies and wanted them all to be within the same control panel. Well, I found some guides on the internet but none of them cover the AD FS installation part. I've read somewhere that I needed to download AD FS 2.0 directly from Microsoft.com, because the one that came with Windows Server was an old version. I downloaded it (adfsSetup.exe) and tried to install but got an error saying that I needed Windows Server 2008 SP2 for that program. My Windows Server 2008 is R2. I really need some help here, this is very important; I don't want to have to pay $1000 for an SSO solution when I have AD set up. Can someone please point me in the right direction? Where I can find an AD FS 2.0 setup compatible with R2 would be a good start, or is the one that came with R2 already the 2.0 version? After the initial setup, there are some guides on the internet about the Google Apps part. It seems to be really easy. I also tried adding the AD FS role, but there are a bunch of options which I have no idea what they mean, and I couldn't find any guide covering that on the internet. I don't have a lot of experience with Windows Server, but I have a company which is certified and provides us with support. I can ask for their help with the later setup, but I don't think AD FS is a very common thing to deal with.

    Read the article

  • Operating systems on SD cards

    - by HisDudeness
    I've been getting some wild ideas over the last few days, like putting some operating systems onto SD cards rather than on my hard drive. I'll go into detail now and explain what led me to consider this probably abominable decision. I am on a laptop (which means I have a native SD-card reader) currently running a cross-distro setup, with a bunch of Linux systems (placed in dedicated ext4 logical partitions inside a huge extended one) managed by a single GRUB. To this day, my laptop hasn't even seen a Windows system through binoculars. I was thinking about placing the whole OS part of my setup onto a Secure Digital card, saving my entire 500 GB hard drive for documents, music, videos and so on, and being able to just remove the SD card and boot my system on another computer too, as well as having the possibility of booting other systems on mine by just plugging in another SD card, without having to keep it constantly placed in my PC. Also, in the remote case that in the near future I want to boot Windows 8 on it, I've read it causes major boot incompatibility issues with other systems by needing a digital signature in order for them to start. By having it on a removable drive, I could just swap its card with the Linux one whenever I need it, and so have no obstacles to either system booting. Now, my questions are: I know that, unlike traditional rotating disk drives, integrated-circuit ones have a limited lifespan in terms of cluster rewriting. Is that an obstacle to this kind of usage? I mean, some Ultrabooks are using SSDs now; is it the same issue, or are there differences between Solid State Drives and Secure Digital cards in that sense? Maybe having them store system files which sit in fixed positions (defeating wear levelling), constantly being re-read and updated, just wears them out quickly, does it? Second question: are all motherboards and BIOSes able to boot from SD cards just like they are from USB pen drives (provided the card reader is USB-connected)? Or can't bootloaders like GRUB be installed on SD cards and work? If they can't, would installing GRUB to the MBR and pointing a boot option at the SD card be a solution? Will it work? Are there any other problems with installing OSes on a Secure Digital card?
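
    On the bootloader half of the question, GRUB 2 can be installed onto the card itself so it stays self-contained; a hedged sketch, assuming the card appears as /dev/mmcblk0 with its first partition mounted at /mnt/sd (whether a given BIOS will boot an internal card reader at all varies a lot; USB card readers are more reliably bootable):

      sudo mount /dev/mmcblk0p1 /mnt/sd
      sudo grub-install --boot-directory=/mnt/sd/boot /dev/mmcblk0
      sudo grub-mkconfig -o /mnt/sd/boot/grub/grub.cfg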

    Read the article

  • Why does Photoshop CS5's Photomerge result immediately disappear?

    - by koiyu
    I have a bunch of JPG-files which I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge... and browse for the files. Photoshop opens the files and starts analyzing. I see the process bar filling and different phases are mentioned on the process bar. Nothing weird there. When the merging is done (and if I don't blink my eyes), I can see layers-palette is populated with the chosen files and, by quickly judging from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always. Problem is that the layers and the image disappear in a flash. There is no error message. Everything is like prior starting the photomerge. No file has been changed. I could continue to use Photoshop normally. This is what I've tried so far: Loaded folder which has 38 JPG images, 4272 x 2848 and ˜ 5 megabytes per file Loaded the same files, but chose Use Files instead of Use Folder in the photomerge's window Loaded 19 JPG images, 4272 x 2848 and ˜ 5 megabytes per file Loaded 10 JPG images, ⇑ see above Loaded 5 JPG images, see above Loaded 3 JPG images, see above Scaled the images to 2256 x 1504 and ˜< 1 megabytes per file Loaded in a set of 38, 19, 10, 5, 3 Following steps are tested with these smaller files and with a set of 5 images Read Adobe's forums and reduced the amount of RAM Photoshop uses gradually from ˜ 80 % to 50 % (though I didn't understand the logic behind this) Would've reduced cache tile size to 128K, but it was set so already Disabled OpenGL Scaled the images to 800 x 533 and ˜ 100 kilobytes per file, loaded a set of 5 Read more unanswered threads around the internet In between each test I closed and reopened Photoshop. This is the first time I've even tried using photomerge. Am I doing something wrong? How can I locate what is the problem? How do I fix this? Photoshop is 64 bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6. Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which, effectively, is the same as photomerge (even the dialog looks kind of the same), works! Even with the original JPGs without any issues. This doesn't fix photomerge, though.

    Read the article

  • How do I (robustly) remotely execute tasks on Windows workstations in a domain?

    - by Zac B
    I'm not even sure if "robustly" is a word. Anyway. Context: We have a few hundred Windows 7 workstations on a LAN. We use AD/GPO management pretty heavily, but there are a lot of periodic and/or manual maintenance tasks we need to do that can't be done via GPO/scheduled task. For example, say I want to execute program X (which runs silently, in the background, and doesn't bother the user) on workstation Y, or say I want to execute task A on a workstation group B either on a schedule or on demand. Kicking the users off of their computers to do this (i.e. using RDP) is a no-no, and doesn't work on groups anyway. Question: What's the best way to do this that is robust enough that, after setup, I could give it to beginner support people (read: people who are phobic of the command line, and get confused with GUI interfaces more complicated than Firefox)? I'm a competent programmer, and, if there is a robust set of tools or framework out there for this type of task, I'd consider hacking something together myself if it didn't take too long. If there's some combination of tools or techniques that others use to make remote-workstation-administration doable by beginners, I have yet to find it. For those who care about the "why": I'm midlevel IT, and was told to implement a remote management solution that allows arbitrary/scheduled remote execution, with confirmation that programs actually ran remotely, and the ability to view what they returned. "Why?" I asked, "Can't I just use PsExec and the task scheduler on a dispatcher machine?" "No," I was told, "'Joe' the second-week tech is going to be in charge of this one, and he needs something simple with a GUI." What I've tried: I've played with making a bunch of one-clickable "transfer files to remote computer and run them with PsExec" batch/VB scrips, but those tend to break down and don't easily support running on customizable groups. I've played a little bit with the Windows version of Puppet, but it doesn't support arbitrary-time remote execution (it's ability to group computers into a tree/node structure is really nice though). I've used an older version of Altiris, and, while it does a lot of what I want, it's interface is awful, it's slow, crashes a lot, and is probably too expensive for management. SwiftWater's DMS solution does some of what I want, but it's very underdeveloped, closed-source (not a deal breaker but not ideal), and I get the impression that support and reliability are lacking.

    Read the article

  • How can I create an appointment on a shared Outlook calendar

    - by roryhewitt
    This isn't as basic a question as it may seem. Hence all the descriptive text... I and my team use Outlook 2007. I have my own personal calendar and also I share a calendar with others in my team (which is mainly used to notify everyone of vacations etc.). The others in my team are NOT technical people. I would like to create a shortcut or template that any of us can use to create an appointment in the shared calendar. My initial though was to create a new appointment, but rather than actually put it in the calendar, I would save it as an Outlook Template (.oft) file. Once created, I would send this to my team and tell them to put it in their Templates file and put a shortcut on their desktop. Then, if they want to put a vacation in the shared calendar, they just double-click on the shortcut, change the dates etc. and then save & close it. However, when I do that, it doesn't save the fact that it's an appointment on the shared calendar - it just adds the appointment to the team member's personal calendar. There doesn't seem to be a way to specify a calendar in the template. I've also tried this by saving the template as a .ics or .vcs file, with no better luck. Additionally, if a team member adds an appointment to the shared calendar, other 'sharees' aren't notified, unless the appointment is actually created as a meeting and the other sharees are explicitly invited. I found this online (http://office.microsoft.com/en-us/outlook-help/keep-everyone-informed-about-time-away-from-the-office-HA010209819.aspx) which APPEARS to say that what I want to do isn't built in functionality (since it shows a bunch of steps to go through. I'd PREFER not to have to add this stuff to everyone else's personal calendar directly. So... Is this possible to do, natively (i.e. directly in Outlook)? Would a Sharepoint calendar make more sense and allow this functionality? Is there a way to do what I want which will allow the other team members to be notified? Like I said, I'm looking for as simple an interface as possible - these people aren't going to want to do much more than open something and change dates. Additionally, they're probably not going to have any fancy software on their PC's, although they will be up to date with Java and (maybe) .NET frameworks. Also, before anyone gets funny, yes, this has to work with Outlook 2007, as it's a corporate standard - we're not able to change that, even though e.g. Google Calendar would do this wonderfully. Obviously if this functionality is available in Outlook 2010, then fantastic - we might be able to upgrade. Thanks!

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Here's the situation. I have a few git projects with a directory structure layed out more or less like this: simpleproj app www admin demo lib model orm view model user blah ... storeproj app www about mobile fbapp lib model orm view model user message cart product merchant Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward eclipse + egit, because some of our windows guys not used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules. So, my solution has been a rather crude one involving symlinks... lets say my directory structure on my dev box is generally layed out like this: ~/projects/ bigproj .git app lib model (- ~/lib/model/src/) orm (- ~/lib/orm/src/) neatproj .git app lib view (- ~/lib/view/src/) oldproj .git app lib orm (- ~/lib/orm/src/) ~/lib/ model .git src README.md orm .git src COPYING view .git src ...the symlinks link inside of directory with the git repo, so eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within eclipse. After committing from eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable also, probably more so than keeping a separate history of the libs for each project... but eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing. So this brings me to my question. I'd like to know how to either: get eclipse+egit to see the symlinks as symlinks if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D Note: tried to tag this as git-submodules, but was not allowed :( * should I make them relative or absolute? Either way it's a mess. Also will symlinks will work on windows? i know there's something similar but you need a 3rd party tool to manage them AFAIK, i doubt these would translate well.

    Read the article

  • Network Services disabled (not starting) on Windows XP

    - by Rickesh John
    I am currently running Windows XP Service Pack 3 on my system. But today, when I failed to connect to the internet via a LAN cable, I realized that almost all of the vital network services had stopped functioning. Any attempt to start them through services.msc gives me the following message: Could not start the DNS Client Service on Local Computer Error 1068: The dependency service group failed to start All of my software and services related to networking have stopped functioning; for example, Windows Firewall is turned off permanently, and so are my Avast Anti-Virus Real Time Shields and Web Shield services. When I insert the LAN wire into my laptop, it registers itself, but this is what I get when I do a ping localhost: C:>ping localhost Unable to contact IP driver, error code 2 Moreover, with ipconfig I get this: Windows IP Configuration An internal error occurred: The request is not supported. Please contact Microsoft Product Support Services for further help. Additional Information: Unable to query host name On some further poking around, I saw that none of the "NETWORK SERVICE" processes in Task Manager, except svchost.exe, were running. Also, when I first opened the task manager, I saw some 20 processes running with the username column empty for most of them. With some searching on Google, I found out that these services are important: DHCP, DNS, Net Logon, Network Connections, Network Location Awareness, TCP/IP NetBIOS Helper. None of them, except Network Connections, is working; they do not start. The event viewer of my system shows a bunch of 7000 and 7001 event errors. I have tried reinstalling the network driver, booting in safe mode with networking, and enabling those services mentioned above. I had disabled System Restore some time back, so I have no restore points for my system. I tried a lot of things from Google searches but none of them worked. Also, with such a long list of issues, I am a little confused as to what I should search for on the internet. :( One more thing I would like to mention: the previous morning, my anti-virus Avast detected a RootKit buried deep in my system folders. It was removed, but maybe this was a problem caused by the rootkit. I did run a boot-time scan but no viruses were found. Please please please advise. Is formatting and re-installation of Windows my only option?

    Read the article

  • Server 2012 intermittently fails to respond to pings from a single host, even with firewall disabled, but responds to non-ICMP requests fine

    - by James Westbury
    This one is kind of weird. I've got the following machines involved: DC01 - 10.1.2.42, Server 2012, domain controller & DNS server, physical machine nagiosv - 10.1.2.35, CentOS 6.4, Nagios, virtual machine CB01 - 10.1.3.81, Ubuntu 12.04 LTS, couchbase server, virtual machine So, I noticed something was wrong while configuring this new Nagios VM. I started seeing DC01's state flapping. I logged into nagiosv when I saw this happening, and attempted to ping DC01, both by FQDN and its IP address. Neither worked. I tried pinging the machine from CB01, which is another VM on the same virtual switch/physical NIC as nagiosv, and that worked fine. Pings still failing from nagiosv at this time. DC01 is also an internal DNS server, so I ran dig google.com from nagiosv, and was able to run a query against DC01 just fine: ;; Query time: 1 msec ;; SERVER: 10.1.2.42#53(10.1.2.42) ;; WHEN: Fri Nov 1 07:53:51 2013 ;; MSG SIZE rcvd: 204 Pings still failing from nagiosv, though. I can ping from DC01 to nagiosv, and that works, and I can still ping from other VMs on the same physical NIC into DC01, and that works. I should mention at this point that I've disabled the firewall on DC01 for testing purposes, and it doesn't make a damned bit of difference. (Even with the firewall enabled, I have a blanket exception for ICMP from the local subnet, so it shouldn't make a difference, but I figured I should test it anyway.) I loaded up Wireshark on DC01 and pinged it from nagiosv again. What I see is a bunch of echo requests coming in and not a single reply going back out. Filtered results here, showing all ICMP traffic during a 15-second period. A few more bits of info: There are no IP conflicts on the network. MAC addresses on the incoming pings match the MAC on the VM. There are no duplicate MACs on the network, as far as I can see. I have absolutely no idea why DC01 is failing to respond, here. Any ideas?
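
    From the CentOS side it may be worth ruling out a stale or duplicated ARP entry before digging further into DC01 itself; a quick hedged check (the interface name eth0 is an example):

      # what MAC does nagiosv have cached for DC01?
      ip neigh show 10.1.2.42
      # ask the wire directly; two different MACs answering would mean an IP conflict after all
      sudo arping -I eth0 -c 4 10.1.2.42
      # drop the cached entry and retest
      sudo ip neigh del 10.1.2.42 dev eth0
      ping -c 4 10.1.2.42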

    Read the article
