Search Results

Search found 5286 results on 212 pages for 'logs'.


  • Windows Azure Tools for Microsoft Visual Studio 1.2 (June 2010)

    - by Eric Nelson
    Yay – we have a public release of the Windows Azure Tools which fully supports Visual Studio 2010 RTM and the .NET 4 Framework. And the biggie I have been waiting for – IntelliTrace support to debug your cloud-deployed services (requires VS2010 Ultimate). Download today: http://bit.ly/azuretoolsjune
    New for version 1.2:
    • Visual Studio 2010 RTM support: full support for Visual Studio 2010 RTM.
    • .NET 4 support: choose to build services targeting either the .NET 3.5 or .NET 4 framework.
    • Cloud storage explorer: displays a read-only view of Windows Azure tables and blob containers through Server Explorer.
    • Integrated deployment: deploy services directly from Visual Studio by selecting 'Publish' from Solution Explorer.
    • Service monitoring: keep track of the state of your services through the 'compute' node in Server Explorer.
    • IntelliTrace support for services running in the cloud: adds support for debugging services in the cloud by using the Visual Studio 2010 IntelliTrace feature. This is enabled by using the deployment feature, and logs are retrieved through Server Explorer.
    Related links:
    • http://ukazure.ning.com for UK fans of Windows Azure
    • IntelliTrace explained

    Read the article

  • What to choose: API-based or socket-based server for a data-driven application

    - by Imdad
    I am working on a project which has a desktop application for Mac (Cocoa), a native application for iPhone, and another native application for iPad. All the applications do almost the same thing. The applications are data-driven applications. Every communication with the server is made via a RESTful API developed in PHP. When a user logs in, a lot of data is fetched from the server, and to remain in sync with the server, polling is done. As there is a lot of data to poll, it makes the applications slower and unreliable. A possible solution that comes to my mind is to use a socket-based server. My question is: will it reasonably improve the performance? And which (socket) technology would be good as a server-side solution for a data-driven application? I have heard a lot about Node.js. Please give your suggestions.
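
    One way to picture the socket approach: instead of every client polling the REST API on a timer, the server keeps one TCP connection per client and pushes a notification only when data changes. The sketch below is illustrative only (Python standard library, not the poster's stack; the port and names are made up):

        import socket
        import threading

        clients = []
        lock = threading.Lock()

        def broadcast(message):
            # Called by the application whenever data changes: push the
            # notification to every connected client instead of waiting
            # for each client to poll.
            with lock:
                for conn in list(clients):
                    try:
                        conn.sendall(message.encode("utf-8") + b"\n")
                    except OSError:
                        clients.remove(conn)  # drop dead connections

        def serve(host="0.0.0.0", port=9000):
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind((host, port))
            server.listen(50)
            while True:
                conn, _ = server.accept()
                with lock:
                    clients.append(conn)

    Whether this beats polling depends on how often the data actually changes; the win comes from sending deltas only when something happens rather than re-fetching everything on an interval.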

    Read the article

  • Login timeout disable?

    - by Sk606
    How can I disable automatic timeout entries in the login screen? Allow me to explain. My typical experience goes something like this: I put my laptop to sleep, then wake it. When it wakes, it comes up with the login screen. I type in my password and it will usually reject it, with a message indicating the login timer has expired. I then enter my password a second time and it logs in. Not the end of the world, but annoying nonetheless. Any suggestions?

    Read the article

  • 301 redirecting a blog's RSS feed URL?

    - by Marc Charbonneau
    I moved my personal blog from WordPress to Ghost this weekend, which changes the RSS feed URL from /feed/ to /rss/. By default Ghost returns a 301 redirect for /feed/, which I've verified by checking the response header and looking at the logs. In Feedly, though, new posts aren't being picked up (at least after 24 hours; I'm not sure if there might be a waiting period before it updates the URL). What's the correct thing to do in this situation? Do I need to keep /feed/ alive instead of returning a 301? If so, is there a rewrite rule that would let me do this in nginx instead of having to modify the Ghost source code?
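
    A hedged sketch of the nginx side, assuming a stock Ghost install proxied by nginx on Ghost's default port 2368 (verify the port on the actual server): answer /feed/ with the feed itself instead of the 301.

        # Sketch only: serve the old feed URL directly rather than
        # redirecting, for clients that never act on the 301.
        location = /feed/ {
            proxy_pass http://127.0.0.1:2368/rss/;
            proxy_set_header Host $host;
        }

    That said, the 301 is the semantically correct answer here; this block is a workaround for readers that ignore it.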

    Read the article

  • Log oddities: 404s for client-garbled image URLs

    - by Chris Adams
    I've noticed some odd 404s which appear to come from broken URL-rewriting code. Our deep zoom view generates image URLs like this: /media/204/service/dzi/1/1_files/7/0_0.jpg. I see some requests – well under 1% – for slightly altered URLs: /media/204/s/rvice/d/i/1/1_files/7/0_0.jpg. These requests come from IP addresses all over the world (US, Canada, China, Russia, India, Israel, etc.), from desktop and mobile users with multiple user agents (Chrome, IE, Firefox, Mobile Safari, etc.), and there is plenty of normal activity in the same session, so I'm assuming this is either widespread malware or some broken proxy service. I have not seen it for anything other than images, which suggests that this may be some sort of content filter. Has anyone else seen this? My CDN logs show the first request on June 8th, ramping up from several dozen to several hundred per day.
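
    For anyone wanting to quantify the same pattern in their own logs, a rough Python sketch; the log path and both URL patterns are assumptions modelled on the example above, not the poster's actual setup:

        import re
        from collections import Counter

        # An intact deep-zoom URL, per the example in the question.
        GOOD = re.compile(r"/media/\d+/service/dzi/\d+/\d+_files/\d+/\d+_\d+\.jpg")
        # A looser pattern that also matches the garbled variants.
        LOOSE = re.compile(r"/media/\d+/\S+/\d+/\d+_files/\d+/\d+_\d+\.jpg")

        counts = Counter()
        with open("access.log") as fh:  # hypothetical log location
            for line in fh:
                m = LOOSE.search(line)
                if m and not GOOD.search(line):
                    counts[m.group(0)] += 1  # garbled, not intact

        for url, n in counts.most_common(20):
            print(n, url)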

    Read the article

  • Opinions on logging in multiprocess applications

    - by chkorn
    We have written an application that spawns at least 9 parallel processes. All processes generate a lot of logging information. Currently we are using Python's QueueHandler to consolidate all logs into one file. Unfortunately this sometimes results in very messy files which are hard to read (e.g. tracking what exactly is going on in one process). Do you think it is a viable option to separate all messages into dedicated files, or is this going to make things even more messy due to the high number of files? What are your general experiences when writing log files for multiprocess/multithreaded applications?
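
    One middle ground, sketched here with the standard-library logging module (illustrative, not the poster's code): keep the consolidated file, but stamp every record with the process name, and give each process its own file as well, so both views exist.

        import logging
        import logging.handlers
        import multiprocessing

        def configure_worker_logging(queue):
            root = logging.getLogger()
            root.setLevel(logging.DEBUG)

            # Records still flow to the consolidated file via the
            # existing queue...
            root.addHandler(logging.handlers.QueueHandler(queue))

            # ...but each process also writes its own file, so the flow
            # of a single process stays readable on its own.
            name = multiprocessing.current_process().name
            per_process = logging.FileHandler("app.%s.log" % name)
            per_process.setFormatter(logging.Formatter(
                "%(asctime)s %(processName)s %(levelname)s %(message)s"))
            root.addHandler(per_process)

    With %(processName)s in the consolidated format, grep can reconstruct any single process's story from the combined file, which removes much of the pressure to choose between the two layouts.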

    Read the article

  • Symbolic link not allowed or link target not accessible: /var/www on Ubuntu 11.04

    - by Jamie Hutber
    I am getting a 403 when I access http://mayfieldafc.local/. Upon looking in the Apache logs I see:

        [Wed Nov 16 12:32:59 2011] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www

    I have what I believe to be the correct permissions set on /var/www: hutber (my user) can create and delete files, and I can also execute as program on this folder. In Mayfield's vhost it's:

        <Directory /var/www/mayfieldafc/docroot>
            Options +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    I am pulling my hair out not being able to work on my sites with my work Ubuntu install. I know of nothing else that could be affecting this. So, any ideas?
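
    A hedged sketch of the two usual culprits, since the error names /var/www rather than the docroot: Apache must be allowed to follow symlinks at every level of the path, and every directory on the path must be traversable by the Apache user. Neither snippet below is from the actual server config.

        # Allow symlink following on the parent, not just the leaf.
        <Directory /var/www>
            Options +FollowSymLinks
        </Directory>

    And to check traversal permissions along the whole path in one go:

        namei -m /var/www/mayfieldafc/docroot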

    Read the article

  • My wiki site using MediaWiki - databases not found error

    - by Jayapal Chandran
    I had been using the open-source MediaWiki software on my website to display programming articles. Today I tried to access my site, but it showed a "database not found" error. MediaWiki uses many databases, and when I logged in to my control panel and checked, I could see that most of the databases created by MediaWiki are missing, which is why I am getting this error. I have used MediaWiki for two different purposes, like two modules: for one, the databases are missing, and for the other, I think the data is corrupt. Does anybody know of any issue with MediaWiki security, or would this be a problem with the web hosting? We faced several problems with them initially, about three years ago, and recently they were good. Yet this happened. I have asked the hosting company to look into it, and meanwhile I am hoping for help from Stack Exchange users. How do I check the logs for table deletion?
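
    If the host runs MySQL with binary logging enabled (an assumption; many shared hosts don't), any DROP statements would be recorded there. A hedged sketch, with a path that varies by host:

        # Look for drop statements in the binary log (path is a guess).
        mysqlbinlog /var/log/mysql/mysql-bin.000001 | grep -i -B 2 'drop table'

    On shared hosting without shell access, the equivalent is asking the host for the binary log or the general query log covering the period in question.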

    Read the article

  • APIs that deal with logins

    - by Brandon Still
    I have been asked to make a mobile app for a friend's website. The website is a multi-level marketing site that sells products and franchises. A client logs in to the website and can view his or her dashboard (the user can view team members, business volume, commissions, invoices, etc.). The app is supposed to bring the dashboard to users' mobile devices (with some added features). The company does not have any APIs that deal with interaction or authentication, and I am new to the whole secure-login side of app development. My question is this: how do I let users gain access to their information via my app from the secure website when there is no API?
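
    Without an API, the usual fallback is to drive the site's own login form and scrape the dashboard, which is fragile but workable. A minimal Python sketch; every URL and form-field name here is a placeholder that must be read out of the real login page:

        import requests

        session = requests.Session()
        # Post the same fields the site's login form posts (placeholders).
        session.post(
            "https://example.com/login",
            data={"username": "user", "password": "secret"},
        )

        # The session now carries the auth cookies, so further requests
        # see what the logged-in browser would see.
        dashboard = session.get("https://example.com/dashboard")
        print(dashboard.status_code, len(dashboard.text))

    The better long-term answer is persuading the company to expose a real authenticated API, since scraping breaks whenever the site's HTML or login flow changes.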

    Read the article

  • How do I programmatically determine which port a SQL Server is running on?

    - by Ralph Willgoss
    /*
    Wrapper script for xp_readerrorlog
    Author: Ralph Willgoss
    Date: 2nd Oct 2012

    This script cycles through all log files, looking for the listening port.
    Normally you have to specify the log file one by one; the script removes the need for that.

    Param ref for xp_readerrorlog:
    1. Value of the error log file you want to read: 0 = current, 1 = Archive #1, 2 = Archive #2, etc.
    2. Log file type: 1 or NULL = error log, 2 = SQL Agent log
    3. Search string 1: string one you want to search for
    4. Search string 2: string two you want to search for, to further refine the results
    5. Search from start time
    6. Search to end time
    7. Sort order for results: N'asc' = ascending, N'desc' = descending
    */
    USE Master
    GO

    -- Get log count
    DECLARE @logcount int
    -- (guard added so the script also runs when #Result does not yet exist)
    IF OBJECT_ID('tempdb..#Result') IS NOT NULL DROP TABLE #Result
    CREATE TABLE #Result (ArchiveNo int, Date datetime, Size int)
    INSERT INTO #Result
    EXEC xp_enumerrorlogs
    SET @logcount = (SELECT COUNT(*) FROM #Result)

    -- Search the available logs
    DECLARE @counter int
    SET @counter = 0
    WHILE @counter <= @logcount
    BEGIN
        EXEC xp_readerrorlog @counter, 1, N'Server is listening on', 'any', NULL, NULL, N'asc'
        SET @counter = @counter + 1
    END
    GO

    Read the article

  • Ubuntu 11.10 and Atheros AR8131 ethernet card

    - by nivcaner
    I have an Atheros AR8131 Ethernet card in a Lenovo B560 laptop. Sometime in the past, probably at some upgrade or other, my wired connection stopped functioning. I know the card is OK because it works with Windows. I tried upgrading to Ubuntu 11.10 hoping the problem would go away. It didn't... I tried to Google the problem, but none of the solutions I found worked for me. This is probably a driver problem. Any help will be appreciated. Please note that I'm a Linux newbie, so I don't really know which logs to post... Niv
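
    A hedged first diagnostic pass; on most kernels of that era the AR8131 is driven by atl1c, but verify what the kernel actually binds rather than trusting the module name assumed here:

        # Which driver (if any) is bound to the Ethernet controller?
        lspci -k | grep -A 3 -i ethernet

        # If nothing is bound, try loading the expected module and watch dmesg.
        sudo modprobe atl1c
        dmesg | tail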

    Read the article

  • SharePoint Expiration Policy not working

    - by spano
    I have a SharePoint 2010 list with a custom content type that has an associated retention policy. The policy consists of a custom formula followed by the Send to Recycle Bin action. However, I realized that the items were not being deleted. I verified the list settings and the retention policy was configured. I ran the Expiration Policy job several times, but no items were deleted and no errors were found in the logs. I also added logging to the custom formula, but no logging was found either. Finally, I found...(read more)

    Read the article

  • Auto Login not working unless keyboard is plugged in

    - by Palani
    I want to use Ubuntu + Unity (11.10) on a kiosk computer (a touchscreen all-in-one PC). I want to use the touchscreen for all user input (like an ATM). During installation I created only one account and enabled auto login. Auto login works perfectly when the keyboard is connected: I don't have to press any key or use the mouse, it logs in automatically. But it's not working when the keyboard is not connected; the OS boots and stops on the login screen. I can't connect a keyboard in the final kiosk hardware installation.
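
    Ubuntu 11.10 uses LightDM, so one hedged thing to verify is that autologin is set in LightDM's own config rather than assumed from the installer; a sketch of /etc/lightdm/lightdm.conf (the user name is a placeholder):

        [SeatDefaults]
        autologin-user=kiosk
        autologin-user-timeout=0

    If the login still stalls with no keyboard attached, that points at something in the boot path waiting on input rather than at the autologin setting itself.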

    Read the article

  • Blank desktop when logging in via xrdp

    - by nitefrog
    I am trying to access Ubuntu 11.10 from a Windows 7 machine using Remote Desktop. I installed xrdp. I launch the Windows Remote Desktop client and log in. I then get prompted for the user name and password. It then logs in, but all I see is the background: no menus, nothing. I have to kill Remote Desktop by closing it. Even if I right-click, nothing. Any ideas? The only reason I even went down the RDP road was that VNC would not work either, even after I enabled desktop sharing. I am in a bind, as I need to connect to Ubuntu via Windows. In Ubuntu 8 this was not an issue and it just worked.
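
    A commonly suggested, hedged fix for a blank xrdp session on 11.10 is to tell the xrdp-spawned X session which desktop to run, since the Unity 3D session often fails there; both commands assume defaults that should be verified on the machine:

        # As the user you log in with over RDP, on the Ubuntu machine:
        echo "gnome-session --session=ubuntu-2d" > ~/.xsession
        sudo service xrdp restart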

    Read the article

  • Why is Google not crawling my website?

    - by Aman Virk
    I run a design and development blog, http://www.thetutlage.com/. Over the last couple of days my search traffic has dropped from 70% to 10%. I am against black-hat SEO myself, and all I do is write my own unique content almost every day. Last week my search traffic was really good, but now it is dropping like heck. I have checked my Webmaster Tools dashboard and there is no message there from Google. When I checked the server logs, I found that the last time Google crawled my website was on 27 September 2012. I really have no idea what I am doing wrong; I follow all the Google guidelines like a bible. Please help me.
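
    One quick, hedged check to put numbers on this (the log path is an assumption; adjust to the actual server): count Googlebot hits per day and see exactly when they stopped.

        grep Googlebot /var/log/apache2/access.log | awk '{print $4}' | cut -d: -f1 | sort | uniq -c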

    Read the article

  • How to encrypt php folder under /var/www?

    - by sirchaos
    I need to encrypt the folder /var/www/test, which contains PHP files. The goals are: to prevent any user from reading the PHP content; to keep /var/www/test encrypted if the HD is mounted on another computer; and, if the computer boots up without any user logged in, for the data in /var/www/test to still be accessible so the site keeps working. What is the correct approach for this? I've tried "ecryptfs-setup-private" as advised in How to encrypt /var/www?, yet it didn't work for me. I might have missed something: I tested the folder by booting from an Ubuntu 12.04 installation disk and mounting the drive, and I was able to access the /var/www/test content, which is exactly what I want to prevent. gnome-encfs isn't the way to go, since its decryption happens when a user logs on to the system, and I would like the system to keep working after a power failure etc. without anyone logged in. Please advise.
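
    A hedged sketch of one approach that satisfies the boot-without-login requirement: a LUKS volume mounted at boot from a key file via /etc/crypttab. Device names are placeholders, and there is a real caveat: if the key file sits on the same unencrypted disk, mounting that disk elsewhere exposes the key too, so the key must live somewhere an attacker who takes the drive does not get (removable media, a remote fetch, etc.).

        # Create a key file and a LUKS volume (placeholder device /dev/sdb1).
        sudo dd if=/dev/urandom of=/root/test.key bs=512 count=4
        sudo chmod 0400 /root/test.key
        sudo cryptsetup luksFormat /dev/sdb1 /root/test.key

        # Open and format it, mounting at /var/www/test.
        sudo cryptsetup luksOpen --key-file /root/test.key /dev/sdb1 wwwtest
        sudo mkfs.ext4 /dev/mapper/wwwtest

        # Unlock and mount automatically at boot, with nobody logged in.
        echo "wwwtest /dev/sdb1 /root/test.key luks" | sudo tee -a /etc/crypttab
        echo "/dev/mapper/wwwtest /var/www/test ext4 defaults 0 2" | sudo tee -a /etc/fstab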

    Read the article

  • How can I make the Ubuntu Software Center load?

    - by Kieran
    I launch it and it goes grey almost immediately. Closing it prompts me to "force close" and no error report is given. I launched it in a terminal and this was the resulting log:

        2012-11-23 22:39:25,175 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
        2012-11-23 22:39:25,179 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
        2012-11-23 22:39:25,409 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file
        2012-11-23 22:39:25,412 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/gi/importer.py', 51, 'find_module')'
        2012-11-23 22:39:25,412 - root - ERROR - Could not find any typelib for LaunchpadIntegration
        2012-11-23 22:39:25,474 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None
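
    The suspicious line is the missing LaunchpadIntegration typelib. A hedged first round of fixes (package names vary by release, so search first rather than trusting any name here):

        # Find the package shipping the LaunchpadIntegration typelib.
        apt-cache search launchpad | grep -i gir

        # Reinstall the Software Center and clear its per-user cache.
        sudo apt-get install --reinstall software-center
        rm -rf ~/.cache/software-center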

    Read the article

  • 403 error on index file

    - by John L.
    When I try to access index.py in my server root through http://domain/, I get a 403 Forbidden error, but I can access it through http://domain/index.py. In my server logs it says "Options ExecCGI is off in this directory: /var/www/index.py". However, my httpd.conf entry for that directory is the same as the ones for other directories, and getting to index.py directly works fine. My permissions are set to 755 for index.py. I also tried making a PHP file named index.php, and it works from both http://domain/ and http://domain/index.php. Here is my httpd.conf entry:

        <Directory /var/www>
            Options Indexes Includes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
            AddHandler cgi-script .cgi
            AddHandler cgi-script .pl
            AddHandler cgi-script .py
            Options +ExecCGI
            DirectoryIndex index.html index.php index.py
        </Directory>

    Thanks
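
    A hedged rewrite worth trying: collapse the two Options directives into one absolute line, so there is no question about how they merge, and consolidate the handlers. This is a sketch, not a known fix for this exact server:

        <Directory /var/www>
            # One absolute Options line, so there is no ambiguity about what is on.
            Options Indexes Includes FollowSymLinks MultiViews ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
            AddHandler cgi-script .cgi .pl .py
            DirectoryIndex index.html index.php index.py
        </Directory>

    If the 403 persists, look for another <Directory> or <VirtualHost> section that also matches the path and wins the merge for the DirectoryIndex subrequest.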

    Read the article

  • Why would Remmina stop working?

    - by Chris Curvey
    Until sometime last night, I had Remmina working fine: I could run RDP through an SSH tunnel and all was well. Then it stopped working. I can get as far as the password dialog for my work machine, but then it just says "Cannot connect to RDP server localhost". I can't even find any logs that look interesting. I've reinstalled Remmina, cleared my .remmina directory, restarted my machine, and even restarted my gateway. Just to make it really weird, my laptop (which has the same setup: latest Ubuntu and Remmina) can make the connection just fine. It is even going through the same router, albeit wirelessly. Any thoughts?

    Read the article

  • How do I recover from upgrading while using bad version of gcc/binutils?

    - by Shawn J. Goff
    I upgraded from 9.04 to 10.10 a couple of days ago, and things are really messed up: X is crashing constantly. Since then, I had an application segfault for no reason; when I was debugging, I found that it was strlen() that was causing the segfault (pointing to libc being the problem)! Upon investigation, I found that it was because I had a bad version of gcc and binutils installed in /usr/local/bin; I removed it, recompiled the application, and it no longer crashes. Now, looking at my logs, I see that X is also crashing due to libc. Backtrace:

        0: /usr/bin/X11/X (xorg_backtrace+0x3b) [0x80ef31b]
        1: /usr/bin/X11/X (0x8048000+0x5d00d) [0x80a500d]
        2: (vdso) (__kernel_rt_sigreturn+0x0) [0xb77e240c]
        3: /usr/bin/X11/X (0x8048000+0xbb0b6) [0x81030b6]
        4: /usr/bin/X11/X (0x8048000+0xbc3ef) [0x81043ef]
        5: /usr/bin/X11/X (0x8048000+0x26ee7) [0x806eee7]
        6: /usr/bin/X11/X (0x8048000+0x1a5da) [0x80625da]
        7: /lib/libc.so.6 (__libc_start_main+0xe7) [0xb750ace7]
        8: /usr/bin/X11/X (0x8048000+0x1a1b1) [0x80621b1]
        Segmentation fault at address 0x32156654
        Caught signal 11 (Segmentation fault). Server aborting

    So, how can I recover from this?
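
    A hedged recovery sketch, assuming the stray toolchain in /usr/local was the only contamination: confirm each suspect binary belongs to a package, check installed files against package checksums, and reinstall whatever was overwritten or rebuilt while the bad toolchain shadowed the real one.

        # Is the gcc being run a packaged one, or a stray?
        type -a gcc
        dpkg -S "$(readlink -f "$(which gcc)")"

        # Verify installed files against package md5sums (needs debsums).
        sudo apt-get install debsums
        sudo debsums -c | head

        # Reinstall the pieces implicated by the crashes.
        sudo apt-get install --reinstall gcc binutils libc6 xserver-xorg-core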

    Read the article

  • High RAM usage by a wxPython GUI, and advice needed to reduce it

    - by user67024
    I've recently developed a GUI in wxPython for the Windows platform. It contains five tabs: four of them are just RichTextCtrl boxes, and the other one has controls for uploading files, buttons, text controls, a slider, etc. As I was new to GUI development in Python, I used wxFormBuilder to generate some of the code, using a good number of sizers. So, now the problem is that the GUI starts off with an initial memory footprint of around 40 MB, which is too much for such a simple application (or so I think). Also, the functions handling the data use huge lists, as the program is for debugging large data logs and identifying the problems in them, meaning that I can't afford to spend memory on the GUI. So, how can I reduce that starting memory size? Is it a general issue in wxPython? I am currently trying to use profilers, but I'm not sure that's going to help.
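
    One structural lever, sketched below, is to stop building all five tabs up front: create cheap placeholder panels and construct each tab's real contents the first time it is shown. This is illustrative wxPython, not the poster's code, and the page factories are placeholders:

        import wx

        class LazyNotebook(wx.Notebook):
            # factories: list of (title, build_fn) where build_fn(parent)
            # returns the real page contents, built on first view.
            def __init__(self, parent, factories):
                wx.Notebook.__init__(self, parent)
                self._factories = factories
                self._built = set()
                for title, _ in factories:
                    self.AddPage(wx.Panel(self), title)  # cheap placeholder
                self.Bind(wx.EVT_NOTEBOOK_PAGE_CHANGED, self._on_page)

            def _on_page(self, event):
                index = event.GetSelection()
                if index not in self._built:
                    self._built.add(index)
                    panel = self.GetPage(index)
                    sizer = wx.BoxSizer(wx.VERTICAL)
                    sizer.Add(self._factories[index][1](panel), 1, wx.EXPAND)
                    panel.SetSizer(sizer)
                    panel.Layout()
                event.Skip()

    Forty megabytes may also simply be the baseline cost of Python plus wx plus the data lists, so measuring with a heap profiler before restructuring is worth the effort.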

    Read the article

  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and you start to see some odd behaviour? A customer of mine encountered that very problem, but they could not just, or at least not easily, go back a version.

    You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ development team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS, with over 80 proxy servers. It's certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant that they had to back out of the upgrade and wait for a couple of hotfixes:

    • KB983504 – Hotfix
    • KB983578 – Patch
    • KB2401992 – Hotfix

    In the time since they got the hotfixes they have run 6 successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data it takes time to copy it around, it takes time to do the upgrade, and it takes time to do a backup. Well, last week it was crunch time: with their developers off for Christmas they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available "just in case". They did not expect any problems, as they already had 6 successful trial upgrades.

    The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration console. The collection would not reattach either. It would not even list the new collection as attachable!

    Figure: We know there is a database there, but it does not

    This was a dire situation, as 20+ hours to repeat would leave the customer over time, with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort:

        TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\instanceName;"Collection Name"
        http://msdn.microsoft.com/en-us/library/ff407077.aspx

    WARNING: Never run this command!

    Now this command does something a little nasty. It assumes that there really should not be anything wrong and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated, leaving the data behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the configuration database.

    Figure: After attaching the TPC it enters a servicing mode

    After reattaching the team project collection we found the message "Re-Attaching". Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk IO we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.

    Figure: Everything looks good, it is just offline.

    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?

    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded "TFSConfig Recover", but was not fazed.

    Figure: This message looks suspiciously like the wrong servicing version

    As it turns out, the servicing version was slightly out of sync with the schema:

        KB          Schema   Successful
        KB983504    341      Yes
        KB983578    344      Sort of
        KB2401992   360      No

    Figure: KB/schema table, with notation as to each one's success

    The schema version above represents the final end-of-run version for that hotfix or patch.

    The only way forward: the problem was that the version was somewhere between 341 and 344. This is not a nice place to be in, and the engineer gave us the only way forward: the removal of the servicing number from the database, so that the re-attach process would apply the latest schema. If this sounds a little like the "TFSConfig Recover" command, then you are exactly right.

    Figure: Sneakily changing that 3 to a 1 should do the trick
    Figure: Changing the status and dropping the version should do it

    Now that we have done that, we should be able to safely reattach and enable the Team Project Collection.

    Figure: The TPC is now all attached and running

    You may think that this is the end of the story, but it is not. After a while of mulling and seeking expert advice, we came to the opinion that the database was, for want of a better term, "hosed". There could well be orphaned data in there, and the likelihood that we would have problems later down the line is pretty high. We contacted the customer and made them aware that in all likelihood the repaired database was more like a "cut and shut" than anything else, and at the first sign of trouble later down the line it was likely to split in two. So, with 40+ hours invested in getting this new database ready, the customer threw it away and started again. What would you do? Would you take the "cut and shut" to production and hope for the best?

    Read the article

  • Why is Google Webmaster Tools crawling invalid URLs and showing 500 errors?

    - by Amos Kane
    Google Webmaster Tools is reporting 12k+ 500 errors. Eeek! None of the URLs is valid; they all contain www.youtube.com. First, why is Google crawling these URLs if they don't exist? I supplied a sitemap, and they are of course not in the sitemap. I don't have a robots.txt blocking anything. I've checked for invalid redirects (none) and for unclosed tags or anything that would throw www.youtube.com into a URL by accident (none). In every 'linked from', the referring URL is also a bad URL with www.youtube.com in it. Google's tools report no malware, and I can't check the server logs because the host won't give me access. Really stuck!! Any ideas appreciated!

    Read the article

  • Forcing a Service to Restart at Boot

    - by pjtatlow
    So I use winbind and Samba to connect to an AD domain at work, and one of my Ubuntu machines has been having issues. At boot, I cannot log in as an AD user, but if I log in as the local user and do a sudo service winbind restart, it works fine. Yes, winbind does start at boot, although for some reason incorrectly, I think. I can't tell anything from the logs, and I'm just wondering how to force winbind to restart after it starts the first time at boot. Thanks!
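
    A blunt but common hedged workaround on Ubuntu of that era: restart the service at the very end of boot from /etc/rc.local, once networking is definitely up. The cleaner fix is making the winbind job depend on the network being up, but that varies by release.

        # In /etc/rc.local, above the final "exit 0":
        service winbind restart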

    Read the article

  • Unable to use TL-WN821N

    - by udiw
    Hi, I have a TP-LINK USB wireless adapter, the TL-WN821N, and am using Ubuntu 10.04 (the same problems were also seen in 10.10). From everything I've read online, the adapter should work just fine, since the Atheros ar9170 driver is built into the kernel. However, when I plug it in, it is detected as a USB device, but there is no wlan interface associated with it, and basically nothing happens. Am I doing something wrong? What should I do so that the Atheros driver is associated with this device? By the way, on Windows it works fine (with the drivers). Some logs:

        $ uname -mr
        2.6.32-28-generic i686

        $ lsb_release -d
        Description: Ubuntu 10.04.2 LTS

        $ lsusb
        ... (trimmed)
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 017: ID 0cf3:7015 Atheros Communications, Inc.
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

        $ lsmod | grep ar9
        ar9170usb 51296 0
        ath 7611 1 ar9170usb
        mac80211 205402 3 ar9170usb,iwl3945,iwlcore
        cfg80211 126528 5 ar9170usb,ath,iwl3945,iwlcore,mac80211
        led_class 2864 4 ar9170usb,iwl3945,iwlcore,sdhci
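
    The lsusb/lsmod output shows ar9170usb loaded but the 0cf3:7015 ID apparently unclaimed. A hedged experiment, valid only if this kernel's USB driver exposes the dynamic-ID interface and the hardware really is ar9170-class: bind the ID by hand and watch dmesg.

        # Ask the loaded ar9170usb driver to also claim this USB ID.
        echo "0cf3 7015" | sudo tee /sys/bus/usb/drivers/ar9170usb/new_id
        dmesg | tail

    The driver also needs the ar9170 firmware present under /lib/firmware; if dmesg complains about missing firmware after the bind, that is the next thing to chase.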

    Read the article
