Search Results

Search found 7322 results on 293 pages for 'shub lahiri a team'.


  • NGINX Remove index.php /index.php/something/more/ to /something/more

    - by Gaston
    I'm trying to get clean URLs in NGINX for a site built on the DooPHP framework. I want to turn this: http://example.com/index.php/something/more/ into this: http://example.com/something/more/ In other words, if someone enters the URL in the first form, I want to remove "index.php" from it (clean URL) with a permanent redirect. How do I configure this in NGINX? Thanks. [Update: actual nginx config]

      server {
          listen 80;
          server_name vip.example.com;
          rewrite ^/(.*) https://vip.example.com/$1 permanent;
      }

      server {
          listen 443;
          server_name vip.example.com;

          error_page 404 /vip.example.com/404.html;
          error_page 403 /vip.example.com/403.html;
          error_page 401 /vip.example.com/401.html;

          location /vip.example.com {
              root /sites/errors;
          }

          ssl on;
          ssl_certificate /etc/nginx/config/server.csr;
          ssl_certificate_key /etc/nginx/config/server.sky;

          if (!-e $request_filename) {
              rewrite /.* /index.php;
          }

          location / {
              auth_basic "example Team Access";
              auth_basic_user_file config/htpasswd;
              root /sites/vip.example.com;
              index index.php;
          }

          location ~ \.php$ {
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /sites/vip.example.com$fastcgi_script_name;
              include fastcgi_params;
              fastcgi_param PATH_INFO $fastcgi_script_name;
          }
      }
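
    A minimal sketch of the kind of redirect rule that answers the question, assuming the DooPHP front controller is /index.php under the site root; the existing "if (!-e $request_filename)" block above already routes clean URLs back through index.php, so only the permanent redirect for the old form is missing. Something like this inside the HTTPS server block:

      # Permanently redirect /index.php/something/more/ to /something/more/
      if ($request_uri ~ ^/index\.php/(.+)$) {
          return 301 /$1;
      }

    This is one of the few uses of "if" that is considered safe in nginx, since it only issues a return.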

    Read the article

  • ESX hosts lose connectivity with iSCSI SAN LUNs

    - by Themist
    I've been experiencing this issue for a couple of months now: my ESX hosts lose connectivity with my iSCSI SAN VMFS volumes. As a result, the ESX hosts enter a nonresponsive mode, the associated VMs disconnect, and the only remedy is to reboot the host. This issue happens randomly. I have escalated it with VMware but haven't had any solution yet. I see no errors on my switches, and there are no hardware issues either. My SAN infrastructure is solid and there are two paths for every VMFS volume. Has anybody else experienced a similar issue? Edit: Here are some more details. The iSCSI SAN software is DataCore SANmelody 2.0.4.2 running on two HP ProLiant G5 servers. The storage attached to each of the servers is an HP MSA70, and all the iSCSI SAN volumes presented to my four ESX hosts are mirrored. I have two iSCSI switches (HP ProCurve 1800G-24) that are trunked together. My SANmelody servers use NC360T NICs; I team two NICs and have one cable connecting to each iSCSI switch. Each ESX server uses two NICs for the iSCSI network as well.

    Read the article

  • Resolve another domain from current AD domain

    - by faulty
    We have two AD domains set up in our office. The first is the primary domain for our office and Exchange. The second is for development use, to simulate the production environment of our clients. Both domains are hosted on Windows 2008 R2 Enterprise. We, the development team, have no access to the office domain other than for login and email purposes. DNS runs on the PDC of each domain, and neither domain uses a public domain name. Our machines are joined to the development domain, and we use Outlook to access our office's Exchange. We've added DNS entries for both domains. From time to time we have problems resolving the office domain (e.g. during Outlook login), and we have to edit our NIC's DNS settings to list only the office DNS server, flush DNS, and then switch back once it resolves. Is there a permanent solution for this scenario, such as specifying that the office domain be resolved by another DNS server when it is requested from the development domain? Thanks
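
    What is being described is essentially what a conditional forwarder on the development domain's DNS server does. A minimal sketch using dnscmd on the development DC (the server name, zone name, and forwarder address below are placeholders for the development DNS server, the office domain, and the office DC):

      dnscmd DEVDC01 /ZoneAdd office.example.local /Forwarder 10.0.0.10

    After that, any query for office.example.local received by the development DNS server is forwarded to the office DC, so clients no longer need to switch DNS servers manually.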

    Read the article

  • Error when starting .Net-Application from ThinApp-Application

    - by user50209
    One of our customers uses SAP through VMware ThinApp. In SAP there is a button that launches a .NET application from a server. When the .NET application is started directly, there is no error. If the user tries to start it by clicking the button in the ThinApp application, it displays the following errors:

      Microsoft Visual C++ Runtime Library
      R6034
      An application has made an attempt to load the C runtime library incorrectly.
      Please contact the application's support team for more information.

    After clicking "OK" it displays:

      Microsoft Visual C++ Runtime Library
      Runtime Error!
      R6030 - CRT not initialized

    So, does the customer have to install some components into his ThinApp package (if yes, which ones?) to get things working? Regards, inno ----- [EDIT] ----- @Sean: It's installed the following way: the .exe of the .NET application sits on a mapped drive on a server. All clients have the prerequisites installed (the .NET Framework, for example) and start the .exe from the mapped drive. The ThinApp application tries to start this application and throws the exceptions mentioned above. AFAIK there are no entry points configured for this application. What I should also mention: the .NET application crashes during execution. That is, we have a debug mode implemented that shows what the application is doing; it shows its steps and after some of them it crashes. The interesting point is that it's a .NET application, not a C++ application.

    Read the article

  • CloudFlare dashboards empty, or performance issues

    - by Katafalkas
    I wanted to test CloudFlare performance, so I put my image gallery domain on it and started testing. I added Page Rules for caching and chose Security: Essentially Off. NS check tools say that my domain name is propagated to CloudFlare. For testing purposes I created a page that loads 200 images from that server and used the loads.in website to measure how much faster it is. After trying a few regions, I noticed no improvement in loading speed, so I looked at the dashboards, and they were empty. I'm not sure if I'm doing something wrong, made some error in my setup, or it simply takes a few days to start caching and working properly, but at the moment, after a day of testing, the dashboards are empty. The NS check tools also say that all name servers are propagated to CloudFlare and working fine, so I assume I got poor performance because it's simply not working. I wrote to the CloudFlare support team but did not get a straight answer. So essentially my questions are: does anyone have experience with CloudFlare? How long does it take to start caching static content to the CDN? Or is there simply something I am doing wrong?

    Read the article

  • Is the sysadmin/netadmin the defacto project planner at your organization?

    - by user31459
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones, and timelines for the deployment of developer applications. Typical scenario: my team receives a request for a new website/DB combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the requirements for the site (SSL? DB? growth projections? etc.). After I get all the information back, the head of development wants a well-developed document covering:

      - what servers it will live on
      - why those servers
      - what the timeline is for creating the resources
      - a step-by-step SOP for getting the application onto the server and all related resources created (DNS, firewall, load balancer, etc.)

    I may just be whining, but it feels like this is something better suited to our project management staff (which we have) or to the developer. I understand that I need to give them a timeline for creating the resources, but it still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

    Read the article

  • Disable the use of Internet Explorer through policies when called from HTML Help

    - by Stephane
    Hello, I have a locked-down environment where users are prohibited from doing, well, basically anything but running the specific programs we specify. We just switched a program from the venerable WinHelp format to HTML Help (CHM), but that seems to have an unwanted and rather dangerous side effect: when a user clicks a hyperlink inside the HTML help, a new Internet Explorer window is opened and the user is free to browse and do terrible things to my server (well, not that much, but still...). I have checked the session in this case, and the IE window is actually hosted within the help engine: there is no iexplore.exe process running in the user session (and there cannot be: it's explicitly prohibited). We have disabled all help for now until we find a solution. I'm working with the help team to have all external URLs removed from the help file, but that is going to be a long and error-prone task. Meanwhile, I've checked all the Group Policy options, but I was unable to find anything that would prevent a standalone IE window hosted in a random process from running. I don't want to disable WinHTTP or the IE rendering engine or anything of the sort, but I need to prevent all users who are members of a specific AD group from ever having an IE window displayed to them. The servers are running Windows 2003 and Citrix MetaFrame 4.5. Thanks in advance

    Read the article

  • Have only read access to Samba

    - by Tahir Malik
    Hi, I've been struggling a lot with Samba on CentOS 5.5 lately. I develop on Windows 7 and send files over scp (an Ant task), but it's too slow, so I wanted to set up Samba properly. After installing it and following some guides, I've done the following:

      - Disabled the firewall (iptables)
      - Disabled SELinux (didn't do that at the start, but it didn't help either)
      - Set up my smbusers file to map my Windows user to root (root = "Tahir Malik" -- works)
      - Added the current user mitco to the Samba password DB with smbpasswd -a mitco, because the Windows user had only read access

    So both users have read access to my share. Here is my smb.conf snippet:

      [global]
          workgroup = MITCO
          server string = Samba Server Version %v
          netbios name = centos
      ;   interfaces = lo eth0 192.168.12.2/24 192.168.13.2/24
      ;   hosts allow = 127. 192.168.12. 192.168.13.

      [alf4]
          comment = Alfresco 4
          path = /opt
          read only = no
          valid users = mitco, mitco
          force user = root
          force group = root
          admin users = mitco , mitco
          writeable = yes
      ;   browseable = yes

    What may also be important is that /opt is only writable by root, but that shouldn't matter because I use force user and force group, or admin users. The log file:

      [2012/09/29 07:43:44, 0] smbd/server.c:main(958)
        smbd version 3.0.33-3.39.el5_8 started.
        Copyright Andrew Tridgell and the Samba Team 1992-2008
      [2012/09/29 07:43:59, 1] smbd/service.c:make_connection_snum(1085)
        mitco-tahir (192.168.13.1) connect to service alf4 initially as user root (uid=0, gid=0) (pid 5228)

    Read the article

  • Are These Parts compatible?

    - by ell
    I have never assembled a PC before, although I have taken an old one apart and replaced a few parts in others here and there, so I have (very) limited experience. I have been looking to build a PC, and here are the parts I might buy:

      - Foxconn P45AL Intel P45 (Socket 775) DDR2 motherboard (with onboard sound, I believe)
      - Gigabyte GeForce GTX 460 OC 768MB GDDR5 PCI-Express graphics card
      - 2 x 1GB sticks of dual-channel DDR2 memory (already have these)
      - Intel Core 2 Quad Q8400 LGA775 'Yorkfield' 2.66GHz 4MB-cache processor
      - Samsung SpinPoint F3 1TB SATA-II 32MB-cache hard drive
      - Antec Dark Fleet Series DF10 gaming enclosure, black
      - Akasa Freedom Power 1000W modular power supply

    I already have a monitor, mouse, keyboard and DVD/CD drive. I have never done this before, so feel free to laugh at me for getting something obvious wrong, forgetting a vital component, etc., but is all of this compatible? And have I gone overkill on the PSU? If so, please recommend one. Thanks in advance, ell. EDIT: Added the PSU, which I forgot to mention. EDIT: I would be using this to surf the internet, write e-mails, chat, word process, play games such as Team Fortress 2 and Spring RTS (at the highest graphics settings, hopefully), do some 3D modelling in Blender, some OpenGL programming, and image editing in GIMP.

    Read the article

  • Windows Server NTFS volume list file name encodings and any illegal file names

    - by benbradley
    I'm having to deal with a Windows Server (NTFS) file server, and our backup application appears to be failing with certain files. According to https://en.wikipedia.org/wiki/NTFS#Internals, NTFS apparently stores file names encoded in UTF-16, but according to their support team, our backup application only supports UTF-8. I'd like to confirm whether this is actually the problem by seeing the file name encoding for myself. The files that are failing appear to use plain English A-Z letters and other ASCII characters; no accents or non-English letters, etc. I suppose that even though the letters appear to be plain A-Z, the file name could still be encoded in UTF-16. Does anyone know of a utility or script that can recursively go through all files in a directory and show the encoding of each file name? Then I could try renaming to UTF-8 to see if the backup can proceed. I'm not a Windows developer, so I can't write this myself. Presumably the encoding of the file name is stored in the filesystem somewhere, and therefore it should be possible to expose it.
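
    Not an off-the-shelf utility, but a minimal sketch of a script that would flag the suspect names. Since every NTFS name is stored as UTF-16 internally, the practical check is whether each name survives a round trip through ASCII (and therefore UTF-8); anything that doesn't gets printed. The root path below is a placeholder:

      import os

      ROOT = r"D:\data"  # placeholder: directory or share to scan

      for dirpath, dirnames, filenames in os.walk(ROOT):
          for name in dirnames + filenames:
              try:
                  name.encode("ascii")  # pure ASCII names are also valid UTF-8
              except UnicodeEncodeError:
                  # Name contains characters outside ASCII; show it with escapes visible
                  print(repr(os.path.join(dirpath, name)))

    Any name this prints contains characters that a UTF-8-only backup client could plausibly trip over; if it prints nothing, the file names themselves are probably not the cause.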

    Read the article

  • Virtual machines with failover setup

    - by kimmmo
    We have three servers, and our plan is to run a number of virtual machines on them in such a manner that if one of the nodes blows up, we can either quickly or seamlessly get a spare running on another node. In addition to the normal networking, they're interconnected via dual 10Gbit NICs, so networked RAID/mirroring shouldn't be a problem. The guest VMs are mostly going to run text-mode Linux, but of course it wouldn't hurt to be able to spin up a non-mission-critical Windows guest for running Visual Studio or checking IE compatibility of a web app. We've spent some time trying to get a magical cloud setup running using Stackops and Crowbar, but it started to look like they offer way too much and are too complicated for our needs. The next candidate, I think, is Ubuntu 11.04 server + KVM + Ganeti + DRBD, unless you can suggest a better solution that we have missed. Requirements:

      - Installation should be simple, or at least understandable without being on the dev team
      - A browser interface for creating and managing VMs is a nice bonus
      - A single node's hardware failure should cause minimal downtime for the VMs that were running on that node
      - Adding more nodes should be possible without shutting down the VMs

    Read the article

  • 502 Bad Gateway error after failed requests using Passenger

    - by Nicolas Buduroi
    I've got a staging server running nginx 1.0.5, serving a Rails 3.1 application under Passenger 3.0.9. The problem is that a request sent just after one where there's an application error returns 502 Bad Gateway. To test it, I set up a simple controller with an action that just raises a dummy exception. One request will show the Rails error message, the next one will show nginx's 502 Bad Gateway error, then it goes back to the Rails application error, and so on. While investigating this problem I found that load testing the application (which increases the number of application processes) makes the issue disappear; that is, until the extra processes are shut down, at which point it reappears. I've tried setting the passenger_min_instances option, but doing so doesn't change anything: in this case, each time an application error happens one instance is killed, whereas after load testing all instances are kept alive. P.S.: Some people on my team told me they've seen the 502 error even when there was no application error, but I've not been able to reproduce that. Update: Just found out how to display the response status codes using ab, and most of them are 502!

    Read the article

  • IIS 7 rewriting subdomain to point at a specific port.

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs. Now I want the subdomain tfs.server.domain.com to point at http://localhost:8080/tfs, and when you access tfs.server.domain.com/repos_name it should redirect to http://localhost:8080/tfs/repos_name. How can I do this in IIS 7? I already tried the following rule, but it does not work; I get a 404.

      <rewrite>
        <globalRules>
          <rule name="TFS" stopProcessing="true">
            <match url="^(?:tfs/)(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" />
            </conditions>
            <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" />
          </rule>
        </globalRules>
      </rewrite>
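
    One likely reason for the 404: a request for tfs.server.domain.com/repos_name arrives with the path "repos_name", which never matches ^(?:tfs/), so the rule is skipped. A sketch of a rule that matches the whole path instead and prepends /tfs itself (this assumes the Application Request Routing module is installed, since rewriting to another host:port is a reverse-proxy operation):

      <rewrite>
        <rules>
          <rule name="TFS subdomain" stopProcessing="true">
            <!-- Match any path on the tfs.* host header -->
            <match url="(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^tfs\.server\.domain\.com$" />
            </conditions>
            <!-- Cross-port rewrite; needs ARR acting as a reverse proxy -->
            <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" />
          </rule>
        </rules>
      </rewrite>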

    Read the article

  • Is it a good idea to run Redmine using Webrick through Nginx?

    - by Rohit
    The task here is to get Redmine set up for a small (<20 person) team. There may be a few users who would access the setup as business clients. I am familiar with setting up PHP for Apache and, recently, Nginx. I am not familiar with Ruby, Ruby on Rails, etc. I prefer to use the OS's (Ubuntu Linux LTS) package manager to install the different components, as it takes care of dependencies and updates. I have set up Nginx with PHP-FPM successfully and am struggling with Redmine. As suggested here, I got Redmine running on port 3000:

      # /etc/init/redmine.conf
      # Redmine
      description "Redmine"

      start on runlevel [2345]
      stop on runlevel [!2345]

      expect daemon
      exec ruby /usr/share/redmine/script/server webrick -e production -b 0.0.0.0 -d

    And using the Nginx config on this page, I used Nginx to proxy requests to WEBrick:

      server {
          listen 80;
          server_name myredmine.example.com;

          location / {
              proxy_pass http://127.0.0.1:3000;
          }
      }

    This works well locally. I wanted some opinions before trying it out on the live box (a 256 MB VPS). Further, should I use something like monit to monitor WEBrick for failures?

    Read the article

  • Windows Domain Chaos - Any Solving Approach

    - by Chake
    We are running an old Windows 2003 Server as our domain controller (DC2003). To migrate safely to Windows 2008 R2, we added a 2008 R2 machine (DC2008R2) to the domain as a domain controller (adprep etc.). After dcpromo on DC2008R2 everything seemed to be OK, and the new DC appeared under the "Domain Controllers" node. It wasn't checked at the time whether DC2008R2 could REALLY act as a domain controller. Later we tried to shut down DC2003 and ran into a total mess, with non-functional Exchange and Team Foundation Server. After that I got the job of fixing it... First I thought it could be a problem with DC2008R2, so I removed it as a domain controller and installed a new Windows 2008 R2 server, DC2008R2-2. I ran into similar problems. I tried a bunch of things, but nothing helped. I won't list it all; maybe I made a mistake, so I'm willing to redo it following your suggestions. To have a starting point I ran the Best Practices Analyzer, which ended up with 24 "compatible" and 26 "not compatible" tests. Of these 26 tests, 19 read the same (I'm translating from German, so this may not be the exact wording): Problem: Using the Best Practices Analyzer for Active Directory Domain Services (AD DS BPA), no data can be gathered using the name of the forest and the domain controller DC2008R2-2. I appreciate any suggestions; this really bothers me.
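
    Not a fix in itself, but the usual starting point for untangling this is to collect DC health and replication data from both controllers before changing anything else. A minimal sketch of the commands typically run from an elevated prompt on DC2008R2-2 (the output file names are arbitrary):

      rem Verbose health tests across every DC in the forest
      dcdiag /v /c /e > dcdiag-all.txt

      rem Summary of replication failures between DCs
      repadmin /replsummary

      rem Per-partner replication status for all DCs
      repadmin /showrepl * /csv > repl.csv

      rem Confirm which DC currently holds the FSMO roles
      netdom query fsmo

    The dcdiag and repadmin output usually points at the concrete failure (DNS registration, replication, or FSMO roles still bound to DC2003) hiding behind the BPA message.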

    Read the article

  • Recommendation for robust, customizable, open source, Java servlet-based forum software?

    - by Erik Hermansen
    There is a lot of forum software out there, but it seems to me that most of the popular choices are PHP-based, and for my project I'd like something based on Java servlets so my team can customize it. Another important feature is that I can completely change the pages to hide unwanted elements without too much work, so I'm looking either for a template system or for easily editable scripts (i.e. JSPs) with a clean view separation. Just having skin changes or CSS customization is not enough. I understand that with open source I can change anything I want, but my point is that it should be easy and not require mastery of a complex code base. Finally, I want something that has been around for at least a year and is deployed on some high-traffic sites. Clustering support (one database, multiple web servers) is highly desirable. Uptime is crucial, since I have an SLA to support. What do you think?

    Read the article

  • Windows 7 mapped drive kicking off OS X users

    - by Collin White
    I've mapped a network drive on my Windows 7 PC at the office. The Windows machine has a few TB of storage that is being accessed by my development team (all running Mac OS X 10.7). The share seems to work fine for a little while, but it will time out and kick the Mac users off, and sometimes it disallows a connection on the next attempt. Restarting the Windows machine fixes the problem. I've tried this tutorial, as well as setting the maximum session length in the Local Security Policy section to 99999 (I discovered 0 did not mean unlimited, only "a reasonable amount of time"); anyway, the setting is now about 208 days, which is sufficient (see attached). I'm having trouble debugging this in general, so if anyone has some pointers I'm all ears. This is an intermittent issue, which in my opinion is the hardest kind to debug. If anyone knows how I might monitor connections from the PC, that would also be pretty cool. Previously the files were hosted on a Mac mini and everything worked just fine (the mini just didn't have the storage capacity we needed), so I believe it is some Windows setting that is kicking users off. Anyway, thanks for reading.

    Read the article

  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer, and after nine years of hard work I'm looking to become a bit more "vagrant" starting next year: do some much-needed traveling and work off and on, making use of one of the greatest advantages of a programming job, the ability to work from virtually anywhere. For that, I am looking for a reliable hosting company I can entrust my code to, in the form of a number of Subversion repositories and an installation of the Redmine project management tool. As my financial situation may vary while traveling, I am looking for something I can pay up front for a year or two and that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations, preferably from your own personal experience? I have looked at CVSDude / Codesion, and while they are certainly great, they don't offer Redmine of course, and seem to be aimed mainly at bigger organizations. What I would need:

      - 2-5 GB of space minimum, freely distributable between SVN and Redmine attachments
      - Unlimited number of Subversion projects
      - Access control (team members / checkout-only accounts / etc.); I don't mind configuring the SVN settings on a per-file basis myself
      - The possibility to map a custom domain to the package that is hosted elsewhere
      - Frequent backups and access to those backups through FTP or other means

    I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that come up.

    Read the article

  • mplayer (mplayerhq.hu) repeats ending audio frames

    - by kamikatze
    mplayer (from mplayerhq.hu) on Windows repeats the last few audio frames on exit. When the video ends, before you see "Exiting... (End of file)" in the command prompt, you will hear the last half second or so of the audio track again. This behavior is the same across multiple containers, codecs, and sound cards, on both Vista and Windows 7. Is there a workaround for this? My playback specs:

      MPlayer Sherpya-MT-SVN-r31027-4.2.5 (C) 2000-2010 MPlayer Team
      150 audio & 343 video codecs

      Playing splash_final.wmv.
      ASF file format detected.
      [asfheader] Audio stream found, -aid 1
      [asfheader] Video stream found, -vid 2
      VIDEO: [WMV3] 1280x720 24bpp 1000.000 fps 6291.5 kbps (768.0 kbyte/s)
      ==========================================================================
      Opening video decoder: [dmo] DMO video codecs
      DMO dll supports VO Optimizations 0 1
      DMO dll might use previous sample when requested
      Decoder supports the following formats: YV12 YUY2 UYVY YVYU RGB8 [..]
      Decoder is capable of YUV output (flags 0x1b)
      Movie-Aspect is undefined - no prescaling applied.
      VO: [directx] 1280x720 = 1280x720 Planar YV12
      Selected video codec: [wmv9dmo] vfm: dmo (Windows Media Video 9 DMO)
      ==========================================================================
      ==========================================================================
      Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
      AUDIO: 44100 Hz, 2 ch, s16le, 329.8 kbit/23.37% (ratio: 41221-176400)
      Selected audio codec: [ffwmav2] afm: ffmpeg (DivX audio v2 (FFmpeg))
      ==========================================================================
      AO: [dsound] 44100Hz 2ch s16le (2 bytes per sample)
      Starting playback...

    Read the article

  • SVN hangs on commit - any suggestions for troubleshooting?

    - by Richard Beier
    We're having a problem with SVN... Subversion clients such as TortoiseSVN hang when we commit any more than a few files at a time to our server. Everything appears to actually be committed successfully to the repository; but the client hangs after all the data has been transmitted. We're using version 1.4.4 of the SVN server. We use the svn:// protocol rather than http to connect. We've reproduced this problem with several clients: TortoiseSVN (1.6.10), AnkhSVN (2.1), and the Silk command-line client (1.6.12). This is happening for everyone on the team, though some people seem to be more affected than others. If someone commits only a few files, it often works; but with more than half a dozen files, it usually hangs. Does anyone have troubleshooting suggestions? This has been happening sporadically for a while, but it's become pretty consistent lately. We've been working around the issue by killing the hung SVN client, doing "svn cleanup", and then doing "svn up"; but sometimes that causes tree conflicts. Another workaround is to blow away the workspace and check it out again after every commit; but of course that's pretty annoying. Are there any diagnostics that could help us troubleshoot this? We're considering upgrading to SVN 1.6 server, and installing the server on a new machine; but we're wondering if there's an easier solution. Thanks for your help, Richard

    Read the article

  • Better logging for cronjob output using /usr/bin/logger

    - by Stefan Lasiewski
    I am looking for a better way to log cron jobs. Most cron jobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure.

      0 * * * * root /usr/local/nagios/sbin/nsca_check_disk

    This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam.

      forwarded nsca_check_disk: 1 data packet(s) sent to host successfully.

    I'm looking for a log method which:

      - Doesn't spam the messages to email or the console
      - Doesn't create yet another crufty logfile which requires cleanup months or years later
      - Captures the log information somewhere, so it can be viewed later if desired
      - Works on most Unixes
      - Fits into an existing log infrastructure
      - Uses common syslog conventions like 'facility'

    Some of these are third-party scripts and don't always do logging internally. UPDATE 2010-04-30: In the process of writing this question, I think I have answered it myself, so I'll answer myself "Jeopardy-style". Is there any problem with this method? The following sends any cron output to /usr/bin/logger, which sends it to syslog with a 'tag' of 'nsca_check_disk'. Syslog handles it from there. My systems (CentOS and FreeBSD) already handle log rotation.

      */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk

    /var/log/messages now has one additional message which says this:

      Apr 29, 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully.

    I like /usr/bin/logger because it works well with an existing syslog configuration and infrastructure, and it is included with most Unix distros. Most *nix distributions already do log rotation, and do it well.
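
    One small extension that covers the 'facility' requirement in the list above: logger accepts a priority of the form facility.level via -p, so the job's output can be routed to its own facility and filed separately by syslog. A sketch (the local3 facility and log path are arbitrary choices):

      # crontab: tag the output and send it to the local3 facility at level info
      */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk -p local3.info

      # /etc/syslog.conf: collect everything from local3 in its own file
      local3.*    /var/log/nsca.log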

    Read the article

  • Copy UNC network path (not drive letter) for paths on mapped drives from Windows Explorer

    - by Ernest Mueller
    I frequently want to share network paths to files with other folks on my team via email or chat. We have a lot of mapped drives here, both ones we set up ourselves and ones set up by our IT overlords. What I'd like to be able to do is copy the full real path (not the drive letter) from Windows Explorer to send to folks. Example: I have a file on my "Q:" drive, \\cartman\users\emueller, and I want to send a link to the file foo.doc to everyone. When I copy the file path (Shift+right click, "Copy as path") I get "Q:\foo.doc". This is unhelpful to others, who would obviously like to see \\cartman\users\emueller\foo.doc. In Explorer it clearly knows the real path - in the address bar I see "Computer - emueller (\\cartman\users) (Q:) -". Is there a way to say "hey, copy that path as text with \\cartman\users\emueller rather than Q: in it"? I know I could just set up mapped network locations instead of mapped drives for the ones I set up personally and avoid this problem, but most of the mapped drives, like the "users" share, come from our IT policy. I could just make a separate network location and then ignore my Q: drive, but that's inconvenient (and they map drives so they can move accounts across servers). Sure, my emailed path might eventually break because I'm losing the drive-letter indirection, but that's OK with me.

    Read the article

  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published, the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that spoils the officialness of these results a little. This year we went ahead with a little (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory: the server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first. The question: Now, because these files are only about 14 MB in total, and a truly lightweight Linux distribution isn't that big either, I'm thinking: what if next year we run entirely from a RAM drive? Is there such a thing? Is it useful? Is it worth it for a team that uses Debian almost exclusively? Are there other optimizations I should be focusing on instead?
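
    For what it's worth, on Debian the usual way to get the RAM-drive effect without a special distribution is a tmpfs mount over the document root, so the result pages are served straight from memory. A minimal sketch (mount point, size, and source path are illustrative):

      # Mount a 64 MB RAM-backed filesystem over the results directory
      mount -t tmpfs -o size=64m,mode=0755 tmpfs /var/www/results

      # Copy the static result pages into it at boot or at publication time
      cp -a /srv/results-release/. /var/www/results/

    That said, a 14 MB working set will normally sit in the kernel page cache after the first few requests anyway, so the gain over the existing lighttpd setup may be modest; connection handling and bandwidth are more likely to be the bottleneck.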

    Read the article

  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up my knowledge of SharePoint deployment and usage (I've never done either before), because of a direction to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation. On my test virtual server, a new site was set up from the Enterprise Wiki template. I went into Site Actions > Manage Site Features to activate "Wiki Page Home Page". The default sub-web then went from /Pages to /SitePages and looks like the default Team template. The odd thing is that Site Actions is missing the "New Page" option. My colleague does not understand why this is the case, as it ought to be there. The original /Pages sub-web does have the option. What conditions influence the appearance of that option? UPDATE: Another phenomenon observed: in the Site Actions > View All Site Content view, the wiki document libraries listed in the grid have their hyperlink (e.g. "Site Pages") lead straight to the default page. It does not show its own table listing of the pages under that document library, unlike the original Pages document library, which shows up as a listing as expected. I wonder if this hints at any problems.

    Read the article

  • WSUS KB978338 Chain of Supersession Incorrect?

    - by Kasius
    The chain appears to be KB978338 to KB978886 to KB2563894 to KB2588516 (newest). All four of these updates are approved on our WSUS server. KB978338 is listed as Not Applicable on all machines because it has been superseded. This is the behavior I would expect. However, our security office is reporting that KB978338 should still be installed on all machines because its actual effect is not replicated by any of the updates that follow it. Here is the analysis I was sent:

      - KB978886 applies to Vista SP1 only. The rollout of SP2 did not address the ISATAP vulnerability and reintroduces it.
      - KB2563894 only updates two files (Tcpip.sys and Tcpipreg.sys). It does not update the 12 other affected ISATAP, UDP, and NUD .sys and .dll files. (MS11-064)
      - KB2588516 addresses a malformed continuous UDP packet overflow, but does not address the ISATAP-related NUD and TCP .sys and .dll files. (MS11-083)

    So yes, many IP vulnerabilities, but each KB addresses specific issues that do not cross over to the other KBs. We can install KB978338 by manually running the .MSU file, but we aren't certain whether that would overwrite the couple of files that get updated by later patches, since we would be installing the patch out of order. Is the above analysis correct? Is the chain of supersession incorrectly defined? If it is, what is the proper way to report it so that it can be changed by the correct Microsoft team? We are currently using 32-bit and 64-bit installations of Vista SP2. Note: I should mention that I posted this on TechNet as well. I will keep this up to date with any information I get there.

    Read the article
