Search Results

Search found 5157 results on 207 pages for 'checking'.

Page 34/207 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • HP Wireless Printer not working

    - by Omri Spector
    I have installed an HP DeskJet 4620 driver on a Win 7 machine. All works perfectly for several days, and then printing is no longer possible. Instead I get the message: "Unable to communicate with printer". This happened on every Win 7 PC I tried, and none of the HP/MS sites contain any relevant info... (Posting this so that the answer appears online, as I did solve it after much work) Solution: It appears that the HP installation creates a unique "port" called "HP Network re-discovery". It stops working after some time (possibly after the first time the printer/PC enters sleep mode). BUT, the standard MS TCP port works just fine. So: go to "Printers", right-click the printer, click "Printer properties" and then "Printer" or "Fax" (for both - do all this twice), click "Add Port...", select "Standard TCP Port", fill in the details, then move the printer to use the new port by un-checking the old one and checking the new one. Happy printing.

    Read the article

  • Cannot reformat flash drive

    - by user933531
    I have tried to reformat on Ubuntu using gparted, on Windows using its built-in tool, and on OS X using Disk Utility. I have also attempted it from the terminal, but that failed as well. When I verify the disk using Disk Utility, I get the following output: Verifying volume “REDSTRIPE” ** /dev/disk2s1 ** Phase 1 - Preparing FAT ** Phase 2 - Checking Directories ** Phase 3 - Checking for Orphan Clusters 168 files, 4507316 KiB free (1126829 clusters) MARK FILE SYSTEM CLEAN? no ***** FILE SYSTEM IS LEFT MARKED AS DIRTY ***** Error: This disk needs to be repaired. Click Repair Disk. But I am unable to repair the disk. See the OS X examples below:

    Read the article

  • Windows 7 - cannot access my own external disk

    - by Tomas
    I use Windows 7 Home Premium and an external USB disk with an NTFS partition. I cannot write-access my own files on it, even as a member of the Administrators group! Is there any way to get around this permission checking without actually writing permission information to every folder on it? I have 3 external disks (up to 1TB), and I have hundreds of thousands of files on each!!! Doing a permission change that actually goes recursively through all folders on all my disks is plain brain damage!! 1) Is there any way to change it globally (like mount options...), or to get around this annoying permission checking? It worked normally in Win XP! 2) If not, and I must do the recursive operation on all folders, how do I do it PERMANENTLY, so that I don't need to do it again on another Windows 7 computer?

    Read the article

  • How does the "Full Control" permission differ from manually giving all other permissions?

    - by Lord Torgamus
    On Windows Server 2003, and some other versions of Windows, the Properties > Security tab of a folder's or file's context menu provides "Allow" and "Deny" options for "Full Control," "Modify," "Read" and other permissions (graphic provided). After clicking "Full Control," all boxes in the column — except for "Special Permissions" — get automatically checked. What's the difference between checking "Full Control" and just checking all the other boxes individually? Are there hidden/advanced permissions toggled by "Full Control" that aren't listed in the main permissions window? Is "Full Control" just a convenience shortcut?

    Read the article

  • Linux command line based spam checker?

    - by anonymous-one
    Does a command line based spam checker exist? We have created a mailbox at a 3rd party, and unfortunately chose to disable spam checking in the initial setup. There is no way to re-enable spam checking; the mailbox must be deleted (and thus all contents lost) and re-created. Does anything exist where we can pump in either: A) Subject + from + to + body + all other fields, OR B) a raw message dump (headers + body), and the command line will let us know whether the email is possibly spam? Thanks.
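
    As a rough illustration of option B, here is a minimal sketch assuming SpamAssassin is installed; the idea is to pipe the raw message through the spamassassin command and look for the X-Spam-Status header it normally adds (the header check and exact behaviour are assumptions about a local SpamAssassin install, not something taken from this question):

        #!/usr/bin/env python3
        # Sketch: pipe a raw message (headers + body) through SpamAssassin and
        # report whether it was flagged. Assumes `spamassassin` is on PATH and
        # that it echoes the message back with X-Spam-* headers prepended.
        import subprocess
        import sys

        def looks_like_spam(raw_message):
            result = subprocess.run(["spamassassin"], input=raw_message,
                                    stdout=subprocess.PIPE, check=True)
            # SpamAssassin normally adds a header like "X-Spam-Status: Yes, score=..."
            return b"X-Spam-Status: Yes" in result.stdout

        if __name__ == "__main__":
            raw = sys.stdin.buffer.read()
            print("possibly spam" if looks_like_spam(raw) else "probably not spam")

    Usage would be something like: python3 spamcheck.py < message.eml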

    Read the article

  • SSH Public Key - No supported authentication methods available (server sent public key)

    - by F21
    I have a 12.10 server setup in a virtual machine with its network set to bridged (essentially will be seen as a computer connected to my switch). I installed opensshd via apt-get and was able to connect to the server using putty with my username and password. I then set about trying to get it to use public/private key authentication. I did the following: Generated the keys using PuttyGen. Moved the public key to /etc/ssh/myusername/authorized_keys (I am using encrypted home directories). Set up sshd_config like so: PubkeyAuthentication yes AuthorizedKeysFile /etc/ssh/%u/authorized_keys StrictModes no PasswordAuthentication no UsePAM yes When I connect using putty or WinSCP, I get an error saying No supported authentication methods available (server sent public key). If I run sshd in debug mode, I see: PAM: initializing for "username" PAM: setting PAM_RHOST to "192.168.1.7" PAM: setting PAM_TTY to "ssh" userauth-request for user username service ssh-connection method publickey [preauth] attempt 1 failures 0 [preauth] test whether pkalg/pkblob are acceptable [preauth[ Checking blacklist file /usr/share/ssh/blacklist.RSA-1023 Checking blacklist file /etc/ssh/blacklist.RSA-1023 temporarily_use_uid: 1000/1000 (e=0/0) trying public key file /etc/ssh/username/authorized_keys fd4 clearing O_NONBLOCK restore_uid: 0/0 Failed publickey for username from 192.168.1.7 port 14343 ssh2 Received disconnect from 192.168.1.7: 14: No supported authentication methods available [preauth] do_cleanup [preauth] monitor_read_log: child log fd closed do_cleanup PAM: cleanup Why is this happening and how can I fix this?

    Read the article

  • Shared Development Space

    - by PatrickWalker
    Currently the company I work for gives each developer their own development virtual machine. On this machine (Windows 7) they install the entire stack of the product (minus the database); this stack is normally spread amongst multiple machines with differing OSes (although we are moving towards Windows 2008 and 2008 R2). So when a developer has a new project they are likely to be updating only a small piece of their stack, and as such the rest of it can become out of date with the latest production code. The isolation from others means some issues won't be found until the code goes into shared test environments/production. I'm suggesting a move from functional testing on these isolated machines to plugging machines into a shared environment. The goal is to move towards a deployment that's closer to production in mechanism and server type. Developers would still make code changes on their Win7 VM and run unit/component testing locally, but for functional testing they would leverage a shared environment. Does anyone else use a shared development environment like this? Are there many reasons against this sort of sandbox environment? The biggest drawback is a move away from only checking in code when you've done local functional testing to checking in after static testing. I'm hoping an intelligent git branching strategy can take care of this for us.

    Read the article

  • Stream Media and Live TV Across the Internet with Orb

    - by DigitalGeekery
    Looking for a way to stream your media collection across the Internet? Or perhaps watch and record TV remotely? Today we are going to look at how to do all that and more with Orb. Requirements Windows XP / Vista / 7 or Intel based Mac w/ OS X 10.5 or later. 1 GB RAM or more Pentium 4 2.4 GHz or higher / AMD Athlon 3200+ Broadband connections TV Tuner for streaming and recording live TV (optional) Note: Slower internet connections may result in stuttering during playback. Installation and Setup Download and install Orb on your home computer. (Download link below) You’ll want to take the defaults for the initial portion of the install. When we get to the Orb Account setup portion of the install is when we will have to enter information and make some decisions. Choose your language and click Next. We’ll need to create and user account and password. A valid email address is required as we’ll need to confirm the account later. Click Next.   Now you’ll want to choose your media sources. Orb will automatically look for folders that may contain media files. You can add or remove folders click on the (+) or (-) buttons. To remove a folder, click on it once to select it from the list and then click the minus (-) button. To add a folder, click the plus (+) button and browse for the folder. You can add local folders as well as shared folders from networked computers and USB attached storage. Note: Both the host computer running Orb and the networked computer will need to be running to access shared network folders remotely. When you’ve selected all your media files, click Next. Orb will proceed to index your media files… When the indexing is complete, click Next. Orb TV Setup Note: Streaming Live TV to Macs is not currently supported. If you have a TV tuner card connected to your PC, you can opt to configure Orb to stream live or recorded TV. Click Next  to configure TV. Or, choose Skip if you don’t wish to configure Orb for TV.   If you have a Digital tuner card, type in your Zip Code and click Get List to pull your channel listings. Select a TV provider from the list and click Next. If not, click Skip.   You can select or deselect any channels by checking or un-checking the box to each channel. Select Auto Scan to let Orb find more channels or disable the ones with no reception. Click Next when finished.   Next choose an analog provider, if necessary, and click Next.   Select “Yes” or “No” for a set top box and click Next. Just as we did with the Digital tuner, select or deselect any channels by checking or un-checking the box to each channel. Select Auto Scan to let Orb find more channels or disable the ones with no reception. Click Next when finished.   Now we’re finished with the setup. Click Close. Accessing your Media Remotely Media files are accessed through a web-based interface. Before we go any further, however, we’ll need to confirm our username and password. Check your inbox for an email from Orb Networks. Click the enclosed confirmation link. You’ll be prompted to enter the username and password you selected in your browser then click Next.   Your account will be confirmed. Now, we’re ready to enjoy our media remotely. To get started, point your browser to the MyCast website from your remote computer. (See link below) Enter your credentials and click Log In. Once logged in, you’ll be presented with the MyCast Home screen. By default you’ll see a handful of “channels” such as a TV program guide, random audio and photos, video favorites, and weather. 
You can add, remove, or customize channels. To add additional channels, click on Add Channels at the top right…   …and select from the dropdown list. To access your full media libraries, click Open Application at the top left and select from one of the options. Live and Recorded TV If you have a TV tuner card you configured for Orb, you’ll see your program guide on the TV / Webcams screen. To watch or record a show, click on the program listing to bring up a detail box. Then click the red button to record, or the green button to play. When recording a show, you’ll see a pulsating red icon at the top right of the listing in the program guide. If you want to watch Live TV, you may be prompted to choose your media player, depending on your browser and settings. Playback should begin shortly.   Note for Windows Media Center Users If you try to stream live TV in Orb while Windows Media Center is running on your PC, you’ll get an error message. Click the Stop MediaCenter button and then try again.   Audio On the Audio screen, you’ll find your music files indexed by genre, artist, and album. You can play a selection by clicking once and then clicking the green play button, or by simply double-clicking.   Playback will begin in the default media player for the streaming format.   Video Video works essentially the same as audio. Click on a selection and press the green play button, or double-click on the video title. Video playback will begin in the default media player for the streaming format.   Streaming Formats You can change the default streaming format in the control panel settings. To access the Control Panel, click on Open Applications  and select Control Panel. You can also click Settings at the top right.   Select General from the drop down list and then click on the Streaming Formats tab. You are provided four options. Flash, Windows Media, .SDP, and .PLS.   Creating Playlists To create playlists, drag and drop your media title to the playlist work area on the right, or click Add to playlist on the top menu. Click Save when finished.    Sharing your Media Orb allows you to share media playlists across the Internet with friends and family. There are a few ways to accomplish this. We’ll start by click the Share button at the bottom of the playlist work area after you’ve compiled your playlist. You’ll be prompted to choose a method by which to share your playlist. You’ll have the option to share your playlist publicly or privately. You can share publically through links, blogs, or on your Orb public profile.  By choosing the Public Profile option, Orb will automatically create a profile page for you with a URL like http://public.orb.com/username that anyone can easily access on the Internet. The private sharing option allows you to invite friends by email and requires recipients to register with Orb. You can also give your playlist a custom name, or accept the auto-generated title. Click OK when finished. Users who visit your public profile will be able to view and stream any of your shared playlists to their computer or supported device.   Portable Media Devices and Smartphones Orb can stream media to many portable devices and 3G phones. Streaming audio is supported on the iPhone and iPod Touch through the Safari browser. However, video and live TV streaming requires the Orb Live iPhone App.  Orb Live is available in the App store for $9.99. To stream media to your portable device, go to the MyCast website in your mobile browser and login. Browse for your media or playlist. 
    Make a selection and play the media. Playback will begin. We found streaming music to both the Droid and the iPhone to work quite nicely. Video playback on the Droid, however, left a bit to be desired. The video looked good, but the audio tended to be out of sync. System Tray Control Panel By default Orb runs in the system tray on start up. To access the System Tray Control Panel, right-click on the Orb icon in the system tray and select Control Panel. Log in with your Orb username and password and click OK. From here you can add or remove media sources, add or manage accounts, change your password, and more. If you’d rather not run Orb on startup, click the General icon. Unselect the checkbox next to Start Orb when the system starts. Conclusion It may seem like a lot of steps, but getting Orb up and running isn’t terribly difficult. Orb is available for both Windows and Intel-based Macs. It also supports streaming to many game consoles such as the Wii, PS3, and Xbox 360. If you are running Windows 7 on multiple computers, you may want to check out our write-up on how to stream music and video over the Internet with Windows Media Player 12. Downloads Download Orb Logon to MyCast

    Read the article

  • Incrementing Assembly Version in TFS Builds and its Effect on Other Build Definitions

    - by ssmantha
    A very common scenario when performing TFS builds is incrementing the version number of the assemblies. There are quite a few approaches, of which I would like to share two links: Ewald Hofman’s Approach: http://www.ewaldhofman.nl/post/2010/05/13/Customize-Team-Build-2010-e28093-Part-5-Increase-AssemblyVersion.aspx#id_02e7b082-ce95-49a9-92e9-7dc88887b377 Richard Banks’ Approach: http://www.richard-banks.org/2010/07/how-to-versioning-builds-with-tfs-2010.html   Both of these approaches work well; however, there are scenarios where editing and checking in the assembly version information can create problems with build definitions meant for Continuous Integration or gated check-ins. You can suppress Continuous Integration builds while checking in the assembly info file by simply putting the comment “***NO_CI***” in the check-in comment, as specified by Ewald in his blog. However, if you have Gated Check-in in place, this can be difficult to suppress; I tried to suppress the build trigger during the check-in process myself, but things didn't turn out well. That's where Richard's solution comes in handy. Both solutions have their own pros and cons, which I believe can only be experienced over a period of time. In the case of Richard's solution, I believe we don't have any history of the assembly version info file, and when you take the latest version of the solution that information will be lost. If you look closely, suppressing Continuous Integration (the NO_CI approach in check-in comments) is a workaround provided by Microsoft; however, so far I haven't found anything to suppress a gated check-in. Suggestions or findings are most welcome.
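
    To make the "stamp the version during the build instead of checking it in" idea concrete, here is a rough sketch of my own (it is not taken from either linked post, and the BUILD_NUMBER variable and file path are assumptions) of a pre-build step that rewrites AssemblyVersion in the build workspace's AssemblyInfo.cs, so nothing ever needs to be checked back in:

        #!/usr/bin/env python3
        # Illustrative sketch: stamp a build number into AssemblyInfo.cs as a
        # pre-build step, so the versioned file never has to be checked in.
        import os
        import re
        import sys

        def stamp_assembly_version(path, build_number):
            text = open(path, encoding="utf-8-sig").read()
            # Rewrite e.g. [assembly: AssemblyVersion("1.2.0.0")] -> "1.2.0.<build>"
            stamped = re.sub(
                r'(AssemblyVersion\("\d+\.\d+\.\d+\.)\d+("\))',
                lambda m: m.group(1) + str(build_number) + m.group(2),
                text)
            open(path, "w", encoding="utf-8").write(stamped)

        if __name__ == "__main__":
            stamp_assembly_version(sys.argv[1], os.environ.get("BUILD_NUMBER", "0"))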

    Read the article

  • Trying to run Compiz but it won't work!

    - by Ben Deslauriers
    ben@ben-ThinkCentre-XXXX:~$ compiz --replace Checking if settings need to be migrated ...no Checking if internal files need to be migrated ...yes [LOG]: Moving Internal Files [LOG]: Copying subdirectory from /home/ben/.compiz/session to /home/ben/.compiz-1/session [LOG]: Copied file /home/ben/.compiz/session/10cd9233ce225949613394716379921200000016160046 to /home/ben/.compiz-1/session/10cd9233ce225949613394716379921200000016160046 [LOG]: Successfully moved internal files Backend : gconf Integration : true Profile : default Adding plugins Initializing core options...done Initializing composite options...done nvfx_screen_get_param:95 - Warning: unknown PIPE_CAP 30 nvfx_screen_get_param:95 - Warning: unknown PIPE_CAP 30 nvfx_screen_get_param:95 - Warning: unknown PIPE_CAP 55 nvfx_screen_get_param:95 - Warning: unknown PIPE_CAP 56 nvfx_screen_get_param:95 - Warning: unknown PIPE_CAP 59 nvfx_screen_get_param:95 - Warning: unknown PIPE_CAP 58 nvfx_screen_get_param:95 - Warning: unknown PIPE_CAP 30 Initializing opengl options...done Initializing decor options...done Initializing grid options...done Initializing gnomecompat options...done Initializing place options...done Initializing session options...done Initializing move options...done Initializing mousepoll options...done Initializing resize options...done Initializing snap options...done Initializing vpswitch options...done Initializing animation options...done Initializing workarounds options...done Initializing fade options...done Initializing cube options...done Initializing scale options...done compiz (expo) - Warn: failed to bind image to texture Initializing expo options...done Initializing rotate options...done Initializing ezoom options...done Setting Update "main_menu_key" Setting Update "run_key" Starting gtk-window-decorator compiz (decor) - Warn: No default decoration found, placement will not be correct compiz (decor) - Warn: No default decoration found, placement will not be correct I have NO CLUE what I am doing wrong :( Please help. All I did was type in compiz --replace and it made my screen flicker and it showed this message in the terminal. I HAVE NO CLUE :(

    Read the article

  • Cannot apply unity --reset after modifying files

    - by Alex Cline
    So I have an idea of what I did wrong, I am just not sure how to fix it. I used the Unity Glass mod: http://www.omgubuntu.co.uk/2012/07/unity-glass-offers-refined-new-look-for-the-unity-launcher After removing it, I cannot reset unity and it does not work. Even after purging Unity and reinstalling it, I cannot seem to replace the missing files. $unity --reset WARNING: Unity currently default profile, so switching to metacity while resetting the values unity-panel-service: no process found Checking if settings need to be migrated ...no Checking if internal files need to be migrated ...no Backend : gconf Integration : true Profile : unity Adding plugins Initializing core options...done compiz (core) - Warn: failed to receive ConfigureNotify event on 0x1c00027 Initializing composite options...done Initializing opengl options...done Initializing decor options...done Initializing vpswitch options...done Initializing snap options...done Initializing mousepoll options...done Initializing resize options...done Initializing place options...done Initializing move options...done Initializing wall options...done Initializing grid options...done I/O warning : failed to load external entity "/home/arcline/.compiz/session/10b624e5c8f98c5325134625607758338300000051770001" Initializing session options...done Initializing gnomecompat options...done Initializing animation options...done Initializing fade options...done Initializing unitymtgrabhandles options...done Initializing workarounds options...done Initializing scale options...done compiz (expo) - Warn: failed to bind image to texture Initializing expo options...done Initializing ezoom options...done (compiz:7038): Gtk-WARNING **: Theme parsing error: gnome-panel.css:28:11: Not using units is deprecated. Assuming 'px'. (compiz:7038): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed Segmentation fault (core dumped)

    Read the article

  • Operator of the week - Assert

    - by Fabiano Amorim
    Well my friends, I was wondering how to help you in a practical way to understand execution plans. So I think I'll talk about the Showplan Operators. Showplan Operators are used by the Query Optimizer (QO) to build the query plan in order to perform a specified operation. A query plan will consist of many physical operators. The Query Optimizer uses a simple language that represents each physical operation by an operator, and each operator is represented in the graphical execution plan by an icon. I'll try to talk about one operator every week, but so as to avoid having to continue to write about these operators for years, I'll mention only of those that are more common: The first being the Assert. The Assert is used to verify a certain condition, it validates a Constraint on every row to ensure that the condition was met. If, for example, our DDL includes a check constraint which specifies only two valid values for a column, the Assert will, for every row, validate the value passed to the column to ensure that input is consistent with the check constraint. Assert  and Check Constraints: Let's see where the SQL Server uses that information in practice. Take the following T-SQL: IF OBJECT_ID('Tab1') IS NOT NULL   DROP TABLE Tab1 GO CREATE TABLE Tab1(ID Integer, Gender CHAR(1))  GO  ALTER TABLE TAB1 ADD CONSTRAINT ck_Gender_M_F CHECK(Gender IN('M','F'))  GO INSERT INTO Tab1(ID, Gender) VALUES(1,'X') GO To the command above the SQL Server has generated the following execution plan: As we can see, the execution plan uses the Assert operator to check that the inserted value doesn't violate the Check Constraint. In this specific case, the Assert applies the rule, 'if the value is different to "F" and different to "M" than return 0 otherwise returns NULL'. The Assert operator is programmed to show an error if the returned value is not NULL; in other words, the returned value is not a "M" or "F". Assert checking Foreign Keys Now let's take a look at an example where the Assert is used to validate a foreign key constraint. Suppose we have this  query: ALTER TABLE Tab1 ADD ID_Genders INT GO  IF OBJECT_ID('Tab2') IS NOT NULL   DROP TABLE Tab2 GO CREATE TABLE Tab2(ID Integer PRIMARY KEY, Gender CHAR(1))  GO  INSERT INTO Tab2(ID, Gender) VALUES(1, 'F') INSERT INTO Tab2(ID, Gender) VALUES(2, 'M') INSERT INTO Tab2(ID, Gender) VALUES(3, 'N') GO  ALTER TABLE Tab1 ADD CONSTRAINT fk_Tab2 FOREIGN KEY (ID_Genders) REFERENCES Tab2(ID) GO  INSERT INTO Tab1(ID, ID_Genders, Gender) VALUES(1, 4, 'X') Let's look at the text execution plan to see what these Assert operators were doing. To see the text execution plan just execute SET SHOWPLAN_TEXT ON before run the insert command. |--Assert(WHERE:(CASE WHEN NOT [Pass1008] AND [Expr1007] IS NULL THEN (0) ELSE NULL END))      |--Nested Loops(Left Semi Join, PASSTHRU:([Tab1].[ID_Genders] IS NULL), OUTER REFERENCES:([Tab1].[ID_Genders]), DEFINE:([Expr1007] = [PROBE VALUE]))           |--Assert(WHERE:(CASE WHEN [Tab1].[Gender]<>'F' AND [Tab1].[Gender]<>'M' THEN (0) ELSE NULL END))           |    |--Clustered Index Insert(OBJECT:([Tab1].[PK]), SET:([Tab1].[ID] = RaiseIfNullInsert([@1]),[Tab1].[ID_Genders] = [@2],[Tab1].[Gender] = [Expr1003]), DEFINE:([Expr1003]=CONVERT_IMPLICIT(char(1),[@3],0)))           |--Clustered Index Seek(OBJECT:([Tab2].[PK]), SEEK:([Tab2].[ID]=[Tab1].[ID_Genders]) ORDERED FORWARD) Here we can see the Assert operator twice, first (looking down to up in the text plan and the right to left in the graphical plan) validating the Check Constraint. 
The same concept showed above is used, if the exit value is "0" than keep running the query, but if NULL is returned shows an exception. The second Assert is validating the result of the Tab1 and Tab2 join. It is interesting to see the "[Expr1007] IS NULL". To understand that you need to know what this Expr1007 is, look at the Probe Value (green text) in the text plan and you will see that it is the result of the join. If the value passed to the INSERT at the column ID_Gender exists in the table Tab2, then that probe will return the join value; otherwise it will return NULL. So the Assert is checking the value of the search at the Tab2; if the value that is passed to the INSERT is not found  then Assert will show one exception. If the value passed to the column ID_Genders is NULL than the SQL can't show a exception, in that case it returns "0" and keeps running the query. If you run the INSERT above, the SQL will show an exception because of the "X" value, but if you change the "X" to "F" and run again, it will show an exception because of the value "4". If you change the value "4" to NULL, 1, 2 or 3 the insert will be executed without any error. Assert checking a SubQuery: The Assert operator is also used to check one subquery. As we know, one scalar subquery can't validly return more than one value: Sometimes, however, a  mistake happens, and a subquery attempts to return more than one value . Here the Assert comes into play by validating the condition that a scalar subquery returns just one value. Take the following query: INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT ID_TipoSexo FROM Tab1), 'F')    INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT ID_TipoSexo FROM Tab1), 'F')    |--Assert(WHERE:(CASE WHEN NOT [Pass1016] AND [Expr1015] IS NULL THEN (0) ELSE NULL END))        |--Nested Loops(Left Semi Join, PASSTHRU:([tempdb].[dbo].[Tab1].[ID_TipoSexo] IS NULL), OUTER REFERENCES:([tempdb].[dbo].[Tab1].[ID_TipoSexo]), DEFINE:([Expr1015] = [PROBE VALUE]))              |--Assert(WHERE:([Expr1017]))             |    |--Compute Scalar(DEFINE:([Expr1017]=CASE WHEN [tempdb].[dbo].[Tab1].[Sexo]<>'F' AND [tempdb].[dbo].[Tab1].[Sexo]<>'M' THEN (0) ELSE NULL END))              |         |--Clustered Index Insert(OBJECT:([tempdb].[dbo].[Tab1].[PK__Tab1__3214EC277097A3C8]), SET:([tempdb].[dbo].[Tab1].[ID_TipoSexo] = [Expr1008],[tempdb].[dbo].[Tab1].[Sexo] = [Expr1009],[tempdb].[dbo].[Tab1].[ID] = [Expr1003]))              |              |--Top(TOP EXPRESSION:((1)))              |                   |--Compute Scalar(DEFINE:([Expr1008]=[Expr1014], [Expr1009]='F'))              |                        |--Nested Loops(Left Outer Join)              |                             |--Compute Scalar(DEFINE:([Expr1003]=getidentity((1856985942),(2),NULL)))              |                             |    |--Constant Scan              |                             |--Assert(WHERE:(CASE WHEN [Expr1013]>(1) THEN (0) ELSE NULL END))              |                                  |--Stream Aggregate(DEFINE:([Expr1013]=Count(*), [Expr1014]=ANY([tempdb].[dbo].[Tab1].[ID_TipoSexo])))             |                                       |--Clustered Index Scan(OBJECT:([tempdb].[dbo].[Tab1].[PK__Tab1__3214EC277097A3C8]))              |--Clustered Index Seek(OBJECT:([tempdb].[dbo].[Tab2].[PK__Tab2__3214EC27755C58E5]), SEEK:([tempdb].[dbo].[Tab2].[ID]=[tempdb].[dbo].[Tab1].[ID_TipoSexo]) ORDERED FORWARD)  You can see from this text showplan that SQL Server as generated a Stream Aggregate to count how many rows the 
SubQuery will return; this value is then passed to the Assert, which then does its job by checking its validity. It is very interesting to see that the Query Optimizer is smart enough to avoid using Assert operators when they are not necessary. For instance: INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT ID_TipoSexo FROM Tab1 WHERE ID = 1), 'F') INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT TOP 1 ID_TipoSexo FROM Tab1), 'F')  For both of these INSERTs, the Query Optimizer is smart enough to know that only one row will ever be returned, so there is no need to use the Assert. Well, that's all folks; I'll see you next week with more "Operators". Cheers, Fabiano

    Read the article

  • Unable to format USB drive on 12.04 [daemon inhibited]

    - by santosamaru
    I tried to format my USB drive. The first time it worked and all the data was gone, but I can't save any file to the drive. Then I tried to check whether it is working or broken; here is the report: santos@santos:~$ sudo badblocks -v /dev/sdb [sudo] password for santos: Sorry, try again. [sudo] password for santos: Checking blocks 0 to 7824383 Checking for bad blocks (read-only test): 0.00% done, 0:00 elapsed. (0/0/0 errdone Pass completed, 0 bad blocks found. (0/0/0 errors) santos@santos:~$ sudo badblocks -v -w /dev/sdb [sudo] password for santos: Sorry, try again. [sudo] password for santos: /dev/sdb is apparently in use by the system; it's not safe to run badblocks! santos@santos:~$ How do I format the drive and fix this issue? I have read the link Formatting Pen Drive causes 'Daemon Is Inhibited' Error, and when I try to move any items from the desktop it says "the destination is read only". I also searched and found the http://ubuntuforums.org/showthread.php?t=1955353 article, but following user13509's suggestion there did not help either.

    Read the article

  • Upload to PPA succeeded but packages don't appear

    - by lorin
    I'm trying to upload packages to my PPA for the first time. I want to use the PPA for customized versions of the OpenStack Compute (nova) project, so I tried to do a test by uploading packages corresponding to the bexar release of this project (lp:nova/bexar), with a new version number and changelog entry. I signed the source packages using my OpenPGP key, which has been uploaded to the Ubuntu keyserver: $ dch -v 2011.1-0ubuntu2-isi1 -D lucid "ISI bexar build #1" $ dpkg-buildpackage -s -rfakeroot -tc -D -k4C8A14AB When I tried to upload the files to the repository, it seemed to work (real email obscured): $ dput ppa:lorinh/ppa nova_2011.2~bzr663-1isi1_source.changes Checking signature on .changes gpg: Signature made Fri 11 Feb 2011 03:52:50 PM EST using RSA key ID 4C8A14AB gpg: Good signature from "Lorin Hochstein <lorin@...>" Good signature on /home/lorin/packaging/nova_2011.2~bzr663-1isi1_source.changes. Checking signature on .dsc gpg: Signature made Fri 11 Feb 2011 03:52:44 PM EST using RSA key ID 4C8A14AB gpg: Good signature from "Lorin Hochstein <lorin@...>" Good signature on /home/lorin/packaging/nova_2011.2~bzr663-1isi1.dsc. Uploading to ppa (via ftp to ppa.launchpad.net): Uploading nova_2011.2~bzr663-1isi1.dsc: done. Uploading nova_2011.2~bzr663-1isi1.tar.gz: done. Uploading nova_2011.2~bzr663-1isi1_source.changes: done. However, the packages aren't listed on my PPA page. If I try to upload again, I get the error: $ dput ppa:lorinh/ppa nova_2011.2~bzr663-1isi1_source.changes Package has already been uploaded to ppa on ppa.launchpad.net Nothing more to do for nova_2011.2~bzr663-1isi1_source.changes Am I supposed to do something next? How do I track down what went wrong? As of this writing, it's been a day and a half since I did the upload.

    Read the article

  • Spell check web pages using Firefox plugin

    - by Gopinath
    Spelling mistakes on a website significantly degrade the customer experience. Developers and website content creators spend a good amount of time going through content manually to make sure there are no spelling mistakes. Even so, a few mistakes slip in, as the manual process is error prone. There are a few web services and tools on the market which provide automated spell checking, but they have certain limitations – high fees to use the service, limits on the number of pages they scan for errors, privacy issues, etc. What about a free tool that runs locally on your PC and spell checks unlimited web pages? Here comes the Spell Checker extension for the Firefox web browser. This free Firefox extension is developed by Gaurang, a Software Test Engineer based in India. Once the extension is installed, it adds a new context menu item, “Check Spelling”, and a small icon to the Add-on bar. To check the spelling of a web page, just click the icon or the context menu item and it will highlight all the errors on the page. By default the SpellChecker extension uses a US English dictionary to find spelling mistakes, and it supports spell checking in other languages with the installation of additional dictionaries. Download SpellChecker Extension for Firefox. Spell check dictionaries: US English, GB English and Australian English

    Read the article

  • Duck checker in Python: does one exist?

    - by elliot42
    Python uses duck-typing, rather than static type checking. But many of the same concerns ultimately apply: does an object have the desired methods and attributes? Do those attributes have valid, in-range values? Whether you're writing constraints in code, or writing test cases, or validating user input, or just debugging, inevitably somewhere you'll need to verify that an object is still in a proper state--that it still "looks like a duck" and "quacks like a duck." In statically typed languages you can simply declare "int x", and anytime you create or mutate x, it will always be a valid int. It seems feasible to decorate a Python object to ensure that it is valid under certain constraints, and that every time that object is mutated it is still valid under those constraints. Ideally there would be a simple declarative syntax to express "hasattr length and length is non-negative" (not in those words. Not unlike Rails validators, but less human-language and more programming-language). You could think of this as ad-hoc interface/type system, or you could think of it as an ever-present object-level unit test. Does such a library exist to declare and validate constraint/duck-checking on Python-objects? Is this an unreasonable tool to want? :) (Thanks!) Contrived example:

        rectangle = {'length': 5, 'width': 10}

        # We live in a fictional universe where multiplication is super expensive.
        # Therefore any time we multiply, we need to cache the results.
        def area(rect):
            if 'area' in rect:
                return rect['area']
            rect['area'] = rect['length'] * rect['width']
            return rect['area']

        print area(rectangle)
        rectangle['length'] = 15
        print area(rectangle)  # compare expected vs. actual output!
        # imagine the same thing with object attributes rather than dictionary keys.
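
    As a minimal sketch of the kind of declarative checking described above (the names are invented for illustration, and this is one possible approach rather than an existing library), a class decorator can re-check constraints after construction and on every attribute assignment:

        def checked(**constraints):
            """Class decorator: validate constraints after construction and on
            every attribute assignment. Purely a sketch, not an existing library."""
            def validate(obj):
                for attr, predicate in constraints.items():
                    if not hasattr(obj, attr):
                        raise TypeError("missing attribute: " + attr)
                    if not predicate(getattr(obj, attr)):
                        raise ValueError("constraint failed for: " + attr)

            def decorate(cls):
                original_init = cls.__init__
                original_setattr = cls.__setattr__

                def __init__(self, *args, **kwargs):
                    original_init(self, *args, **kwargs)
                    validate(self)                      # fully valid once constructed

                def __setattr__(self, name, value):
                    predicate = constraints.get(name)
                    if predicate is not None and not predicate(value):
                        raise ValueError("constraint failed for: " + name)
                    original_setattr(self, name, value)

                cls.__init__ = __init__
                cls.__setattr__ = __setattr__
                return cls
            return decorate

        @checked(length=lambda v: v >= 0, width=lambda v: v >= 0)
        class Rectangle(object):
            def __init__(self, length, width):
                self.length = length
                self.width = width

        r = Rectangle(5, 10)   # passes validation
        r.length = 15          # checked again on every mutation
        # r.length = -1        # would raise ValueError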

    Read the article

  • Clientside anticheating in multiplayer game 1vs1

    - by garnav
    I'm developing a simple card game, where there will be a matchmaking system that will put you against another human player. This will be the only game mode available: 1vs1 against another human, no AI. I want to prevent cheating as much as possible. I have already read a lot of similar questions here, and I already know that I cannot trust the client and have to make all verifications server side. I intend to have a server (I need one for the matchmaking anyway) and I intend to make some verifications server side, but if I want to check everything server side, my server has to keep track of the state of all current games and check every action, and I don't have the money/infrastructure to support that server. My idea is to make clients check and verify some of the actions made by their opponent*, and if they find an illegal action, notify the server of the possible cheating and have the server verify it. This will still require my server to keep track of all current games, but it will save resources by only checking some things that cannot be checked client side (like the card order in the deck) and only checking other things when they are actually reported as wrong. *(only those they can check without enabling cheating themselves! For example, they can't check whether a played card was in the opponent's hand, because that would require them to know all the cards in that hand.) Summing up, my questions are: is this a viable approach? Will I actually save resources doing this, or is the extra complexity in the server and client for exchanging these messages not worth it? Do you know of any game that has successfully or unsuccessfully tried a similar approach? Thanks all for reading and answering.
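
    As a rough illustration of the "verify only when a client flags something" idea (entirely a sketch; the class names, message shapes, and the seed-based deck reconstruction are invented for illustration, not taken from the question), the server can keep minimal per-game state and reconstruct the authoritative deck order on demand when a cheat report arrives:

        import random

        # Sketch of lazy server-side verification: the server stores only a seed
        # per game, re-derives the authoritative deck order on demand, and checks
        # a reported action only when a client flags it as suspicious.
        DECK = [rank + suit for rank in "23456789TJQKA" for suit in "CDHS"]

        class GameRecord(object):
            def __init__(self, seed):
                self.seed = seed          # enough to re-derive the whole shuffle
                self.draws = 0            # minimal state the server does track

            def deck_order(self):
                rng = random.Random(self.seed)
                deck = list(DECK)
                rng.shuffle(deck)
                return deck

        class Server(object):
            def __init__(self):
                self.games = {}

            def start_game(self, game_id):
                self.games[game_id] = GameRecord(seed=random.getrandbits(64))

            def record_draw(self, game_id):
                self.games[game_id].draws += 1

            def handle_cheat_report(self, game_id, claimed_card, draw_index):
                """Called only when a client flags an opponent's draw as suspicious."""
                game = self.games[game_id]
                expected = game.deck_order()[draw_index]
                return claimed_card == expected   # False -> possible cheater

        # Usage sketch:
        server = Server()
        server.start_game("game-42")
        server.record_draw("game-42")
        print(server.handle_cheat_report("game-42", claimed_card="AS", draw_index=0))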

    Read the article

  • How do I fix my installation of ATI Catalyst Video Drivers in 12.04 LTS?

    - by Boris
    My graphic card is a Mobility Radeon HD 4200 Series. I tried these 2 answers from What is the correct way to install ATI Catalyst Video Drivers in 12.04 LTS? But unfortunately, it does not work for me: When running the amd script, I get this error message: $ sudo sh ./amd-driver-installer-12-4-x86.x86_64.run ... DKMS part of installation failed. Please refer to /usr/share/ati/fglrx-install.log for details When checking this log file, I get: Uninstalling any previously installed drivers. Creating symlink /var/lib/dkms/fglrx/8.961/source -> /usr/src/fglrx-8.961 DKMS: add completed. Kernel preparation unnecessary for this kernel. Skipping... Building module: cleaning build area.... cd /var/lib/dkms/fglrx/8.961/build; sh make.sh --nohints --uname_r=3.2.0-24-generic-pae --norootcheck......(bad exit status: 1) [Error] Kernel Module : Failed to build fglrx-8.961 with DKMS [Error] Kernel Module : Removing fglrx-8.961 from DKMS Deleting module version: 8.961 completely from the DKMS tree. Done. [Reboot] Kernel Module : update-initramfs When checking with fglrxinfo, I get: $ fglrxinfo X Error of failed request: BadRequest (invalid request code or no such operation) Major opcode of failed request: 138 (ATIFGLEXTENSION) Minor opcode of failed request: 66 () Serial number of failed request: 13 Current serial number in output stream: 13

    Read the article

  • .wine-pipelight folder not present

    - by DaimyoKirby
    Following the instructions on the pipelight installation page, I installed pipelight on Ubuntu 14.04. However, upon opening Firefox, the .wine-pipelight folder isn't present in my home folder, and I get the following errors: [PIPELIGHT:LIN:unknown] attached to process. [PIPELIGHT:LIN:unknown] checking environment variable PIPELIGHT_SILVERLIGHT5_1_CONFIG. [PIPELIGHT:LIN:unknown] searching for config file pipelight-silverlight5.1. [PIPELIGHT:LIN:unknown] trying to load config file from '/home/alden/.config/pipelight-silverlight5.1'. [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:427:checkSilverlightGraphicDriver(): error in execlp command - probably silverlightGraphicDriverCheck not found or missing execute permission. [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:441:checkSilverlightGraphicDriver(): GPU driver check - Your driver is not in the whitelist, hardware acceleration disabled. [PIPELIGHT:LIN:silverlight5.1] using wine prefix directory /home/alden/.wine-pipelight. [PIPELIGHT:LIN:silverlight5.1] checking plugin installation - this might take some time. [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:374:checkPluginInstallation(): error in execvp command - probably dependencyInstaller/sandbox not found or missing execute permission. [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:384:checkPluginInstallation(): Plugin installer did not run correctly (exitcode = 1). [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:108:attach(): plugin not correctly installed - aborting. I've reinstalled quite a few times and run through many of the common fixes offered on the pipelight Launchpad pages and here on Ask Ubuntu, and still it fails to run. Is there a reason why this folder isn't present, or why I'm getting these errors? Edit: Oddly enough, the .wine-pipelight folder is created when I open Nitro, although this still doesn't fix the issue.

    Read the article

  • How should I unbind and delete OpenAL buffers?

    - by Joe Wreschnig
    I'm using OpenAL to play sounds. I'm trying to implement a fire-and-forget play function that takes a buffer ID and assigns it to a source from a pool I have previously allocated, and plays it. However, there is a problem with object lifetimes. In OpenGL, delete functions either automatically unbind things (e.g. textures), or automatically delete the thing when it eventually is unbound (e.g. shaders), and so it's usually easy to manage deletion. However, alDeleteBuffers instead simply fails with AL_INVALID_OPERATION if the buffer is still bound to a source. Is there an idiomatic way to "delete" OpenAL buffers that allows them to finish playing, and then automatically unbinds and really deletes them? Do I need to tie buffer management more deeply into the source pool (e.g. deleting a buffer requires checking all the allocated sources also)? Similarly, is there an idiomatic way to unbind (but not delete) buffers when they are finished playing? It would be nice if, when I was looking for a free source, I only needed to see if a buffer was attached at all and not bother checking the source state. (I'm using C++, although approaches for C are also fine. Approaches assuming a GCd language and using finalizers are probably not applicable.)
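
    One way to picture the "tie buffer deletion into the source pool" bookkeeping is sketched below in Python purely for readability (the question is about C/C++, and every al_* function here is a made-up placeholder, not the real OpenAL API): buffers are only marked for deletion, and a periodic update detaches finished sources and reaps buffers once nothing references them.

        # Placeholder stand-ins for the real OpenAL calls (NOT the actual API):
        def al_attach_and_play(source, buffer_id): pass
        def al_source_stopped(source): return True
        def al_detach_buffer(source): pass
        def al_delete_buffer(buffer_id): pass

        class SourcePool(object):
            def __init__(self, sources):
                self.sources = sources          # pre-allocated source handles
                self.attached = dict((s, None) for s in sources)  # source -> buffer
                self.doomed = set()             # buffers waiting to be deleted

            def play(self, buffer_id):
                for src in self.sources:
                    if self.attached[src] is None:
                        self.attached[src] = buffer_id
                        al_attach_and_play(src, buffer_id)
                        return src
                return None                     # pool exhausted

            def delete_buffer(self, buffer_id):
                """Fire-and-forget delete: the real deletion happens in update()."""
                self.doomed.add(buffer_id)

            def update(self):
                # Detach buffers from sources that have finished playing.
                for src, buf in self.attached.items():
                    if buf is not None and al_source_stopped(src):
                        al_detach_buffer(src)
                        self.attached[src] = None
                # Reap doomed buffers that are no longer attached anywhere.
                in_use = set(b for b in self.attached.values() if b is not None)
                for buf in list(self.doomed):
                    if buf not in in_use:
                        al_delete_buffer(buf)
                        self.doomed.discard(buf)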

    Read the article

  • "Windows Backend object has no attribute 'iso-path' - see log for details" error when trying to install

    - by Raja
    I am trying to install Ubuntu 11.10 in windows XP, Everything went as before until the countdown clock reached zero, then I got "Windows Backend object has no attribute 'iso-path' - see log for details. It's done it three times now. (Formatting in between) The end of the log says ====== 11-01 17:20 DEBUG TaskList: New task check_iso 11-01 17:20 DEBUG TaskList: ### Running check_iso... 11-01 17:20 DEBUG CommonBackend: Checking Y:\ubuntu\install\installation.iso 11-01 17:20 DEBUG Distro: checking Ubuntu ISO Y:\ubuntu\install\installation.iso 11-01 17:20 DEBUG Distro: wrong size: 8094031872 > 900000000 11-01 17:20 DEBUG TaskList: ### Finished check_iso 11-01 17:20 ERROR TaskList: 'WindowsBackend' object has no attribute 'iso_path' Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 579, in get_iso File "\lib\wubi\backends\common\backend.py", line 565, in use_iso AttributeError: 'WindowsBackend' object has no attribute 'iso_path' 11-01 17:20 DEBUG TaskList: # Cancelling tasklist 11-01 17:20 DEBUG TaskList: # Finished tasklist 11-01 17:20 ERROR root: 'WindowsBackend' object has no attribute 'iso_path' Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 130, in select_task File "\lib\wubi\application.py", line 205, in run_cd_menu File "\lib\wubi\application.py", line 120, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 579, in get_iso File "\lib\wubi\backends\common\backend.py", line 565, in use_iso AttributeError: 'WindowsBackend' object has no attribute 'iso_path'

    Read the article

  • Upgrade 10.04LTS to 10.10 problem

    - by Gopal
    Checking for a new ubuntu release Done Upgrade tool signature Done Upgrade tools Done downloading extracting 'maverick.tar.gz' authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg' tar: Removing leading `/' from member names Reading cache Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Updating repository information WARNING: Failed to read mirror file A fatal error occurred Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade has aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade. Traceback (most recent call last): File "/tmp/tmpe_xVWd/maverick", line 7, in <module> sys.exit(main()) File "/tmp/tmpe_xVWd/DistUpgradeMain.py", line 158, in main if app.run(): File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 1616, in run return self.fullUpgrade() File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 1534, in fullUpgrade if not self.updateSourcesList(): File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 664, in updateSourcesList if not self.rewriteSourcesList(mirror_check=True): File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 486, in rewriteSourcesList distro.get_sources(self.sources) File "/tmp/tmpe_xVWd/distro.py", line 103, in get_sources source.template.official == True and AttributeError: 'Template' object has no attribute 'official' This is what i got when i tried to upgrade the desktop edition:sudo do-release-upgrade. One more info: I have kde installed.

    Read the article

  • Can't ssh to instance

    - by megas
    I have a linode instance, I was successfully connecting to it via ssh. But I've decided to rebuild my instance and then I can not connect to that instance via ssh. The linode works correctly because I can get access via Lish (lonode ssh) I've tried to clear known_hosts with: ssh-keygen -R 212.71.xxx.xx But I still getting message: ssh [email protected] -v OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 212.71.238.74 [212.71.238.74] port 22. debug1: Connection established. debug1: identity file /home/megas/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file /home/megas/.ssh/id_rsa-cert type -1 debug1: identity file /home/megas/.ssh/id_dsa type -1 debug1: identity file /home/megas/.ssh/id_dsa-cert type -1 debug1: identity file /home/megas/.ssh/id_ecdsa type -1 debug1: identity file /home/megas/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA c5:c3:a7:c0:5a:25:a1:64:c4:04:0c:42:bb:46:f6:96 debug1: Host '212.71.238.74' is known and matches the ECDSA host key. debug1: Found key in /home/megas/.ssh/known_hosts:1 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/megas/.ssh/id_rsa debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/megas/.ssh/id_dsa debug1: Trying private key: /home/megas/.ssh/id_ecdsa debug1: No more authentication methods to try. Permission denied (publickey,password). How to resolve this problem? Thanks

    Read the article

  • Got that Friday feeling?

    - by Rebecca Amos
    Saturday is just around the corner, and we’re all starting to wrap up for the weekend. If you’re the DBA that ‘Friday feeling’ might be as much about checking and preparing your SQL Servers for the next two days, as about looking forward to spending time with friends and family. Whether you’re double-checking your disaster recovery strategy, or know that it’s your turn to be on-call this weekend, it’s likely you’re preparing for the worst, just in case. The fact that you’re making these checks, and caring about both your servers and your users, means that you might be an exceptional DBA. You’re already putting in that extra effort to make other people’s lives easier. So why not take some time for your professional development and enter the Exceptional DBA Awards? If you’re looking for some inspiration for your entry, download our Judges’ Top Tips poster for advice on what the judges are looking for from this year’s entrants. Not only will you be boosting your professional development, but you could win full conference registration for the 2011 PASS Summit in Seattle (where the awards ceremony will take place), four nights' hotel accommodation, and a copy of Red Gate’s SQL DBA Bundle. So take some time out for yourself this weekend and get started on your entry: www.exceptionaldba.com

    Read the article
