Search Results

Search found 26555 results on 1063 pages for 'active directory explorer'.

Page 621/1063

  • Sweden Windows Azure Group Meeting in November & Fast with Windows Azure Competition

    - by Alan Smith
    SWAG November Meeting There will be a Sweden Windows Azure Group (SWAG) meeting in Stockholm on Monday 19th November. Chris Klug will be presenting a session on Windows Azure Mobile Services, and I will be presenting a session on Web Site Authentication with Social Identity Providers. Active Solution have been kind enough to host the event, and will be providing food and refreshments. The registration link is here: http://swag14.eventbrite.com If you would like to join SWAG the link is here: http://swagmembership.eventbrite.com Fast with Windows Azure Competition I’ve entered a 3 minute video of rendering a 3D animation using 256 Windows Azure worker roles in the “Fast with Windows Azure” competition. This is the last week of voting, so it would be great if you could check out the video and vote for it if you like it. I have not driven a car for about 15 years, so if I win you can expect a hilarious summary of the track day in Vegas. My preparation for the day would be to play Project Gotham Racing for a weekend, and watch a lot of Top Gear. My video is “Rapid Massive On-Demand Scalability Makes Me Fast!”. The link is here: http://www.meetwindowsazure.com/fast/

    Read the article

  • Windows Azure v1.7 Spring Release Today – New Management Dashboard

    - by ToStringTheory
    Today, Microsoft is publicly releasing a new version of Azure. The web conference, at http://www.meetwindowsazure.com, will be airing at 1 PM PST. They have already released an update to the Service Dashboard that can be accessed by going to http://manage.windowsazure.com. I have gathered some images of the new dashboard here and removed any PII from them. Let me know what you think! Images You should be able to click any of the images for a full resolution image. Tutorial The first thing you get after signing in is the tutorial: Landing After the tutorial completes, you get a screen with services that are active on your account on the left, and a list of ALL services (db/blob/SQL Azure) on the right. I like the quick access to services across any of my subscriptions: Service Information These are images from a running web site with several roles. I love how easy they have made many of the features: SQL Azure They have given some great quick functionality for looking at your DB information: Storage Here is the basic information that they give you for any storage accounts you have: Adding Services Super quick and easy to add services with the new UI: Conclusion I am EXCITED! As you may have seen in the left side of my blog, I am an MCPD in Azure Development, and I must say that I am excited to see Microsoft moving forward with the technology and not letting it stagnate. After fighting with the old Azure dashboard as much as I have, I like the friendliness and fluidity of this one. The important thing to note about ALL of the images above: this is HTML, not Silverlight. The responsiveness is FAST on all of the actions I completed, and I believe that this is a big step forward for Azure… So, what do you think?

    Read the article

  • Where to place web.xml outside WAR file for secure redirect?

    - by Silverhalide
    I am running Tomcat 7 and am deploying a bunch of applications delivered to me by a third party as WAR files. I'd like to force some of those apps to always use SSL. (All the "SSL" apps are in one service; other apps outside this discussion are in another service.) I've figured out how to use conf\web.xml to redirect apps from HTTP to HTTPS, but that applies to all applications hosted by Tomcat. I've also figured out how to put web.xml in an unpacked app's web-inf directory; that does the trick for that specific app, but runs the risk of being overwritten if our vendor gives us a new war file to deploy. I've also tried placing the web.xml file in various places under conf\service\host, or under appbase, but none seem to work. Is it possible to redirect some apps to SSL without forcing all apps to redirect, or to put the web.xml file inside the extracted WAR file? Here's my server.xml: <Service name="secure"> <Connector port="80" connectionTimeout="20000" redirectPort="443" URIEncoding="UTF-8" enableLookups="false" compression="on" protocol="org.apache.coyote.http11.Http11Protocol" compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"/> <Connector port="443" URIEncoding="UTF-8" enableLookups="false" compression="on" protocol="org.apache.coyote.http11.Http11Protocol" compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css" scheme="https" secure="true" SSLEnabled="true" sslProtocol="TLS" keystoreFile="..." keystorePass="..." keystoreType="PKCS12" truststoreFile="..." truststorePass="..." truststoreType="JKS" clientAuth="false" ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA"/> <Engine name="secure" defaultHost="localhost"> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <Host name="localhost" appBase="webapps" unpackWARs="false" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> </Host> </Engine> </Service> <Service name="mutual-secure"> ... </Service> The content of the web.xml files I'm playing with is: <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0" metadata-complete="true"> <security-constraint> <web-resource-collection> <web-resource-name>All applications</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <user-data-constraint> <description>Redirect all requests to HTTPS</description> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> </web-app> (For conf\web.xml the security-constraint is added just before the end of the existing file, rather than create a new file.) My webapps directory (currently) contains only the WAR files.
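
    One option to try (a hedged sketch, not from the original thread) is a per-application context descriptor that points at an alternative deployment descriptor kept outside the WAR. With the Engine named "secure" and the Host "localhost" from the server.xml above, Tomcat 7 looks for conf/secure/localhost/<appname>.xml for an app deployed as <appname>.war. Note that altDDName is documented as overriding, not merging with, the WAR's own WEB-INF/web.xml, so the external file would need to contain the vendor's descriptor content plus the security-constraint:

      <!-- conf/secure/localhost/vendorapp.xml (the file name and the descriptor path are hypothetical) -->
      <Context altDDName="${catalina.base}/conf/ssl-descriptors/vendorapp-web.xml" />

    This keeps the vendor's WAR untouched, at the cost of having to refresh the external descriptor whenever the vendor ships a new web.xml.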

    Read the article

  • RetinaPad Enables Retina Display for iPhone Apps on the iPad

    - by Jason Fitzpatrick
    RetinaPad is an iPad application that activates the Retina Display resolution on iPhone applications to increase their clarity on the iPad. It’s a feature that should be built in but is currently only available for jailbroken iPads. The premise is simple. Currently iPads lack support for the Retina Display level resolution that iPhone apps are capable of displaying. RetinaPad allows you to stop using the ugly and blocky simple doubling available on the iPad and start accessing the higher resolution Retina Display mode for iPhone applications on the iPad. It’s such a trivial thing that it’s outright shameful Apple doesn’t include it by default. You shouldn’t have to jailbreak your device to unlock functionality that should be there right from the factory. Check out the demo video below to see it in action: Fire up your jailbroken iPad, launch Cydia and search for RetinaPad. RetinaPad is $2.99, iPad only.

    Read the article

  • C# Threading Background Process - Programming - How to?

    - by Magic
    Hello...I have been given the horrible task of doing this. Launch the website Take a screenshot Fill in the form details, click on Next Take a screenshot ... ... ... Rinse. Repeat. Now, with various combinations, this comes up to 300 screenshots. And I have to do this for 4 different browsers: Chrome, Firefox, IE 6 and IE 7. I cannot use tools which will capture the screenshot and store them, such as SnagIt. I need to take a screenshot, copy it into a Word document, then take the next screenshot and copy that into the Word document as well. I thought, I will write a tiny utility which will help me do this. Here is the requirement spec that I put up for it - An executable which, once launched, seats itself in the system tray. While it is active, every Print Scrn key press should write the captured contents to a Word document as defined (either a default path or a user-defined one). Save the document periodically. Now, my question is - if I am going to develop this using C# (WinForms application), how do I go about doing it? I can do a fair bit of C# programming and I am willing to learn. But I am not able to locate references for how to run a process in the background, and while it runs, it has to capture the Print Scrn key. Can you folks point me to the right material where I can learn this? Theoretical references should suffice. But if there are practical references, then nothing like it. Thanks!
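
    As a starting point, here is a minimal, hedged sketch (all names and the save-to-PNG behaviour are illustrative, not a finished solution): a hidden WinForms window that sits behind a tray icon, registers Print Scrn as a global hotkey via RegisterHotKey, and captures the primary screen on every press. It saves numbered PNG files; pushing each image into a Word document would be a follow-up step (for example with the Microsoft.Office.Interop.Word assembly).

      using System;
      using System.Drawing;
      using System.Drawing.Imaging;
      using System.Runtime.InteropServices;
      using System.Windows.Forms;

      class TrayScreenshotApp : Form
      {
          [DllImport("user32.dll")]
          static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

          [DllImport("user32.dll")]
          static extern bool UnregisterHotKey(IntPtr hWnd, int id);

          const int WM_HOTKEY = 0x0312;
          const uint VK_SNAPSHOT = 0x2C; // Print Scrn
          const int HOTKEY_ID = 1;

          NotifyIcon trayIcon;
          int shotCount;

          TrayScreenshotApp()
          {
              // Keep the form itself invisible; the tray icon is the only UI.
              ShowInTaskbar = false;
              WindowState = FormWindowState.Minimized;
              trayIcon = new NotifyIcon { Icon = SystemIcons.Application, Text = "Screenshot capture", Visible = true };
              RegisterHotKey(Handle, HOTKEY_ID, 0, VK_SNAPSHOT);
          }

          protected override void WndProc(ref Message m)
          {
              if (m.Msg == WM_HOTKEY && (int)m.WParam == HOTKEY_ID)
              {
                  Rectangle bounds = Screen.PrimaryScreen.Bounds;
                  using (Bitmap shot = new Bitmap(bounds.Width, bounds.Height))
                  using (Graphics g = Graphics.FromImage(shot))
                  {
                      g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
                      shotCount++;
                      // Appending to a Word document would go here instead of saving a PNG.
                      shot.Save(string.Format("screenshot_{0:D3}.png", shotCount), ImageFormat.Png);
                  }
              }
              base.WndProc(ref m);
          }

          protected override void Dispose(bool disposing)
          {
              if (disposing)
              {
                  UnregisterHotKey(Handle, HOTKEY_ID);
                  trayIcon.Dispose();
              }
              base.Dispose(disposing);
          }

          [STAThread]
          static void Main()
          {
              Application.Run(new TrayScreenshotApp());
          }
      }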

    Read the article

  • How do I remove the tool tips on the launch bar?

    - by Sephethus
    The title is the question. These tool tips stay until I try to click past them. They're annoying since they constantly pop up and block the view of what I'm trying to do. Unfortunately I need the launch bar because Ubuntu is running on VMware and the console does not allow me to use the keyboard for switching tasks (to my knowledge). How do I disable them? I'd post an image as an example, but this site will not let me. UPDATE: unity 5.16.0 UPDATE 2: I discovered that this may only be a problem with users who run Ubuntu on a full screen VMware console that is situated on the right monitor. When the mouse is moved to the left monitor the tooltips popup and remain until the mouse is clicked twice in the VMware console window to make it active. Unfortunately my problem is one involving a rare situation I think. However, I'd love to be able to disable these tool tips if possible. It would also be nice if new features were added that can allow further customization of the launch bar.

    Read the article

  • Access Control Lists for Roles

    - by Kyle Hatlestad
    Back in an earlier post, I wrote about how to enable entity security (access control lists, aka ACLs) for UCM 11g PS3. Well, there was actually an additional security option that was included in that release but not fully supported yet (only for Fusion Applications). It's the ability to define Roles as ACLs to entities (documents and folders). But now in PS5, this security option is fully supported. The benefit of defining Roles for ACLs is that those user roles come from the enterprise security directory (e.g. OID, Active Directory, etc) and thus the WebCenter Content administrator does not need to define them like they do with ACL Groups (Aliases). So it's a bit of the best of both worlds. Users are managed through the LDAP repository and are automatically granted/denied access through their group memberships, which are mapped to Roles in WCC. A different way to think about it is being able to add multiple Accounts to content items...which I often get asked about. Because LDAP groups can map to Accounts, there has always been this association between the LDAP groups and access to the entity in WCC. But that mapping had to define the specific level of access (RWDA) and you could only apply one Account per content item or folder. With Roles for ACLs, it basically takes away both of those restrictions by allowing users to define more than one Role and define the level of access on-the-fly. To turn on ACLs for Roles, there is a component to enable. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component. Then restart. This is assuming the other configuration settings have been made for the other ACLs in the earlier post. Once enabled, a new metadata field called xClbraRoleList will be created. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection. For Users and Groups, these values are automatically picked up from the corresponding database tables. In the case of Roles, there is an explicitly defined list of choices that are made available. These values must match the roles that are coming from the enterprise security repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager. On the Views tab, edit the values for the ExternalRolesView. By default, 'guest' and 'authenticated' are added. Once added, you can assign the roles to your content or folder. If you are a user who can access the Security Group for that item and you also belong to that particular Role, you now have access to that item. If you don't belong to that Role, you won't! [Extra] Because the selection mechanism for the list is using a type-ahead field, users may not even know what choices they can start typing. To help them, one thing you can add to the form is a placeholder field which offers the entire list of roles as an option list they can scroll through (assuming it's a manageable size) and view to know what to type. By being a placeholder field, it won't need to be added to the custom metadata database table or search engine.

    Read the article

  • Live CD installer gets stuck with a grayed out forward button.

    - by TRiG
    I have a CD with Ubuntu 10.10 and a laptop with Ubuntu 8.10. The laptop had all sorts of crud on it, and anything I wanted to keep was backed up on an external drive, so I was happy to do a wipe and reinstall instead of an update. So after a bit of faffing about trying to work out how to get the thing to boot from the CD drive, I did that. So the screen comes up with the choice: the options are Try Ubuntu and Install Ubuntu. I choose to install and to overwrite my current installation. So far so good. I then get a progress bar labelled something like copying files (I forget the exact wording) and further options to fill in for my location, keyboard locale, username and password. On each of these screens there are forward and back buttons. On the last screen (password), the forward button is greyed out. Well, I think to myself, no doubt it will become active when that copying files progress bar completes. The progress bar never completes. It hangs. And the label changes from copying files to the chirpy ready when you are. The forward button remains greyed out. The back button is as unhelpful as you'd expect it to be. And there's nothing else to click. We have reached an impasse. I tried restarting the laptop, to test whether it actually was properly installed. It wasn't. I tried to run Ubuntu live from the CD, to test whether the disk was damaged. That wouldn't work either, but I suspect it's just because the laptop is old and has a slow disk drive. I'm typing this question on another computer using the Ubuntu live CD and it's working fine. So there's nothing wrong with the CD.

    Read the article

  • Wireless switch on Dell XT2 - strange behaviour of rfkill

    - by DyP
    I have a Dell Latitude XT2 using an Intel WLAN card (lspci lists it as "Intel Corporation Ultimate N WiFi Link 5300") running Lubuntu 12.04 with recent updates. The laptop has a hardware WLAN switch. I have problems activating the WLAN when booting with the hardware switch set to "off". The situation is a bit confusing, unfortunately. rfkill lists two WLAN devices (though lspci only shows the Intel one). This is the situation when booting with the hardware switch set to "Off": 0: dell-wifi: Wireless LAN Soft blocked: yes Hard blocked: yes 1: dell-bluetooth: Bluetooth Soft blocked: yes Hard blocked: yes 2: phy0: Wireless LAN Soft blocked: yes Hard blocked: yes From some tests, I conclude that WLAN is only activated when both dell-wifi and phy0 are unblocked in both software and hardware. But I can only unblock dell-wifi after the hardware switch is set to "on". Procedure right from boot with hardware switch set to "Off": Soft-unblocking phy0 works as expected. Could be done by a start-up script. sudo rfkill unblock 0: nothing happens. Soft block of dell-wifi not removed. Set the hardware switch to "on": phy0 gets its hard block removed. Still no WLAN. sudo rfkill unblock 0: both the soft and hard block of dell-wifi are removed. WLAN is now active and works. sudo rfkill block 0: only adds the soft block as expected. WLAN goes off again. So, in order to activate WLAN, I have to use the hardware switch and afterwards (manually) run a script - that's a bit inconvenient. Does someone know a better solution? Maybe a daemon that listens to rfkill events could help, unblocking dell-wifi after I have set the hardware switch to "on"? (sounds like another workaround) When booting with the hardware switch set to "On", nothing is blocked, neither hard nor soft.
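
    A hedged sketch of the daemon idea mentioned above: a small Python script (run as root) that reads rfkill events from /dev/rfkill (the 8-byte struct rfkill_event from linux/rfkill.h) and soft-unblocks dell-wifi whenever phy0's hard block goes away. The device indices are taken from the listing above and may differ on another boot.

      #!/usr/bin/env python
      import struct
      import subprocess

      RFKILL_EVENT = struct.Struct("=IBBBB")   # idx, type, op, soft, hard (linux/rfkill.h)
      PHY0_INDEX = 2                           # "2: phy0" in the listing above
      DELL_WIFI_INDEX = 0                      # "0: dell-wifi"

      with open("/dev/rfkill", "rb", buffering=0) as events:
          while True:
              data = events.read(RFKILL_EVENT.size)
              if len(data) < RFKILL_EVENT.size:
                  break
              idx, rtype, op, soft, hard = RFKILL_EVENT.unpack(data)
              # Hardware switch turned on: phy0's hard block just went away.
              if idx == PHY0_INDEX and hard == 0:
                  subprocess.call(["rfkill", "unblock", str(DELL_WIFI_INDEX)])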

    Read the article

  • Do you have to recreate workspaces after upgrading a TFS 2008 server to TFS 2010?

    - by Clara Oscura
    I am just reposting this thread from an MSDN forum since it seems to be unavailable. It was very useful when I was having trouble with my folder mappings after migrating to TFS 2010. Question: I opened VS2008 and connected it to the upgraded 2010 TFS server.  Upon clicking any of our Team Projects in source control explorer I get "Team Foundation Error - The workspace MYWORKSPACE;DOMAIN\MYUsername already exists on computer MYPCNAME." Answer: The same local paths on your machine are mapped to 2 different workspaces, one on the pre-upgrade server and one on the post-upgrade server.  It's not safe to have multiple workspaces on different servers mapped to the same local paths because you could pend some changes while connected to one server, and the other server would have no idea what you did.  You should either delete your conflicting workspaces from one of the servers (if you don't need them on both), or test the new TFS instance from a new workspace (on a different machine). If you want to test an existing production workspace on both servers, then yes, you will have to mess around with the workspace cache. You don’t have to delete the entire cache, you just need to run "tf workspaces /remove:* /server:<serverurl>" to clear the cached workspaces from a server (the command won't delete the workspaces), and possibly "tf workspaces /server:<server>" to refresh the workspace cache for a given server.  You will also have to back up and restore the workspace before switching servers or your local files could be inconsistent. From the “Microsoft Visual Studio Team Foundation Server 2010 Beta 1” forum (not available anymore?) Technorati Tags: TFS 2010,TFS Workspaces,Team System,Team Foundation Server 2010

    Read the article

  • Applying Service Pack 1 to Team Foundation Server 2010

    - by Enrique Lima
    Disclosure: I performed the following activities on my Windows 7 SP1 system, Visual Studio 2010 SP1 and a local Basic installation of TFS 2010. As with any deployment of a service pack into a server environment, take your recommended precautions and be aware of the changes you are putting in. With that said, make sure you back up your databases, and that you have an exit/rollback strategy in the event of an unexpected situation. Team Foundation Server 2010 Service Pack 1 corresponds to KB2182621. The KB article is http://support.microsoft.com/kb/2182621 The process is very simple to follow: you will need to execute the mu_team_foundation_server_2010_sp1_x86_x64_651711.exe file. That will extract the needed files and launch the wizard-driven installation. Once this process completes, you need to validate the changes. By looking at the Team Foundation Server 2010 Administration Console, you should see the reference to the KB number and SP1. There is also a good reason to validate log locations and records, from the Team Foundation Server 2010 Administration Console, or from Windows Explorer: go to the C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs location and review the logs referenced by the servicing references.

    Read the article

  • How to join the World of Programming? [closed]

    - by litebread
    Name's Vlad and I am currently in my third year of Community College, studying Computer Science with an emphasis on Programming in C++ and Networking. I have completed a few programming courses with general ease, but have not gained an advanced understanding of programming through school. None of my friends are serious programmers working in the industry. Being an active lurker on many programming websites, and on tech-oriented sites in general, I have noticed how little I know about the industry, the lingo and terminology. (I have no clue how GitHub works, but I generally understand what it's for.) So I am looking for help as to where I should look for information on the programming world and the industry in which I am very interested. By that I mean, what sites should I use to learn about programming practices, get an introduction to advanced C++, and find resources that give a 20-something programming noob a proper introduction? I like programming, but I haven't dug my hands deep into it yet, and I want to start to do so before I transfer to a University. All in all, where do I find information on becoming an actual programmer (information that lays out a path)? Thank you for reading. Have a great day!

    Read the article

  • Expanding development team for a startup

    - by acjohnson55
    I'm a software developer and co-founder of a startup that's in a sprint to launch a web app in the next 2 months. We have about 3 months of burn time before we need to get some funding. By that time, we want to have a product with active users, and ideally some revenue. I'm fairly confident that I can accomplish the task by myself, but I have also never launched a project of this magnitude. The better the product we can build in this timespan, the faster we can grow our user base, and the better our fundraising options will be. So I'm looking to bring someone onboard to hack with me. Maybe more than one person. Good help is hard to find, as we all know, and while I'm willing to share equity, I also want that to be contingent on a productive fit. What is the best approach to a trial-type framework for hiring another developer? Something where the other person feels that their work will be rewarded if they do well and that they can't be left empty-handed at my whim, but where I know that if it turns out not to be a good fit, I can pull the cord without significant loss?

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com   Extensions and addins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about the VSIX extensions is how they are deployed. Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast since it is only a matter of extracting the files within the VSIX package to the local extension folder. But for custom, enterprise-specific extensions, you don’t want to publish them online to the whole world, but you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012, it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn’t it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom feed XML? Well search no more, this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WIX templates and our custom TFS Checkin Policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery. Installing the Service Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward: just select the web site, application pool and (optionally) a virtual directory where you want to install the service. Note: If you want to run it in the web site root, just leave the application name blank. Press Next and finish the installer. Open web.config in a text editor and locate the <applicationSettings> element. Edit the following setting values: FeedTitle This is the name that is shown if you browse to the service using a browser. Not used by Visual Studio. BaseURI When Visual Studio downloads the extension, it will be given this URI + the name of the extension that you selected. This value should be in the following format: http://SERVER/[VDIR]/gallery/extension/ VSIXAbsolutePath This is the path where you will deploy your extensions. This can be a local folder or a remote share. You just need to make sure that the application pool identity account has read permissions in this folder. Save web.config to finish the installation. Open a browser and enter the URL to the service. 
It should show an empty Feed page:   Adding the Private Gallery in Visual Studio 2012 Now you need to add the gallery in Visual Studio. This is very easy and is done as follows: Go to Tools –> Options and select Environment –> Extensions and Updates Press Add to add a new gallery Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step. Press OK to save the settings. Deploying an Extension This one is easy: Just drop the file in the designated folder! :-)  If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio Gallery: I hope that you will find this service useful; please contact me if you have questions or suggestions for improvements!

    Read the article

  • Using .htaccess, can you hide the true URL?

    - by Richard Muthwill
    So I have a web hotel with 1 main website http://www.myrootsite.com/ and a few websites in subdirectories, in a folder called projects. I have domain names pointing to the subdirectories, but when holding the mouse over a link in those websites the URLs are shown as: http://www.myrootsite.com/projects/mysubsite/contact.html When I'm on mysubsite.com I want them to be shown as: http://www.mysubsite.com/contact.html I spoke to support for the web hotel and the guy said try using .htaccess, but I'm not sure exactly how to do this. Thank you very much for your time! Edit: For more information, my website is: http://www.example1.com/ and I also own http://www.example2.com/. All of example2.com's files are in: example1.com/projects/example2/. When you visit example2.com, you'll notice all of the URLs point towards: example1.com/projects/example2/ but I want them to point towards: example2.com/ Can this be done? I hope this is enough info for you to go on :). Edit: For w3d I go to the url mysubsite.com and the browser shows the url mysubsite.com. The services I'm using create an iframe around myrootsite.com and use the url mysubsite.com. I just hate that in Firefox and Internet Explorer, holding the mouse over a link shows that the destination url is: myrootsite.com/projects/mysubsite/...
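
    If mysubsite.com can be pointed at the same document root as myrootsite.com (rather than at a frame-forwarding service like the one described in the second edit), a hedged mod_rewrite sketch in the root .htaccess can serve each sub-site from its folder while the browser keeps showing mysubsite.com URLs; the domain and folder names below just follow the example in the question:

      # .htaccess in myrootsite.com's document root
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?mysubsite\.com$ [NC]
      RewriteCond %{REQUEST_URI} !^/projects/mysubsite/
      RewriteRule ^(.*)$ /projects/mysubsite/$1 [L]

    Frame forwarding, by contrast, can never produce real mysubsite.com links, because the pages are still served from myrootsite.com inside the iframe.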

    Read the article

  • Ubuntu 12.10 won't display properly after kernel upgrade

    - by Daniel
    After updating a system today, Ubuntu doesn't display correctly. The desktop now looks like this. It was working properly before. I had to use the terminal to run the Synaptic package manager, so I could view the update history, which is as follows: Commit Log for Wed Nov 7 11:50:36 2012 Upgraded the following packages: linux-image-generic (3.5.0.17.19) to 3.5.0.18.21 Installed the following packages: linux-image-3.5.0-18-generic (3.5.0-18.29) linux-image-extra-3.5.0-18-generic (3.5.0-18.29) Prior to this issue, the last active driver was nvidia-current-updates, version 304.51. I tried using the nvidia-current driver, version 304.51.really.304.43 instead, but the problem persists. I tried running nvidia-settings from the terminal, so I could try configuring something, but the application reports that the Nvidia driver is not being used. As the x-swat repository has nothing for Quantal, I desperately used the unstable x-edgers repository & upgraded, but to no avail; so I purged it. The display should normally be full HD, but the only available resolutions now are 1024x768(4:3) and 800x600(4:3). The system is a Dell XPS L702X with an NVIDIA GeForce GT 555M and a 17" screen. How can I fix this problem? Update: I tried using the Nouveau third-party driver & this fixes the issue. However, if you have any idea how to get the Nvidia drivers working properly with the latest kernel, please share; as I've noticed some videos playing very slowly on the system, though I'm not sure exactly why.

    Read the article

  • Ubuntu 11.10 won't let me login; it kicks me back to login screen

    - by zlyfire
    I was just copying files from my external HDD to my .wine directory, when I noticed the place where the launchers are (Unity desktop) was getting fuzzy and holding onto graphics from the things in the location prior (I have it autohide when a window covers it). I assumed it was just a RAM problem, so I canceled the copying, since it wasn't actually important. The glitch remained, and so did another: very slow response time. The mouse moved just fine, but windows were waiting about a minute after I hit the x button to close or even switch the active window. Once again, I blamed RAM (I only have 2 GB) so I restarted. Usually, it autologs me into my account, since I'm the only user, but this time it presented me with the login screen. I thought it odd, but tried to log in. A black screen with some text pops up (presumably a terminal screen) for half a second then kicks me back to the login screen. I tried the guest account and no luck. I went into the terminal (alt+ctrl+f1) and logged in and it worked. I deleted .Xauthority, made a new account, and even rebooted quite a few times, all to no avail. Anyone have an idea?

    Read the article

  • Having extreme issues getting Compiz working on Ubuntu 11.10 (32-bit)

    - by Josh Hornell
    I have been working very hard the past few days to try to get Compiz configured and working correctly but I have been running into a lot of issues. I first installed the CompizConfig Settings Manager and tried different features such as the desktop cube and couldn't get any of them to work. Then I read that I may not have the right graphics card drivers installed (Nvidia GT540m). So I went into the Additional Drivers tool and it shows that 'no proprietary drivers are in use on this system', which struck me as a bit odd, since when I very first installed Ubuntu it showed that my Nvidia drivers were installed and active; after I downloaded and installed the updates to Ubuntu, it has shown empty. I then tried to install my graphics card drivers manually via this article How do I install the latest Nvidia drivers via the Additional Drivers tool?. I rebooted without issue, but when I went back into the CompizConfig Settings Manager I still couldn't get anything to work, and the Additional Drivers tool still showed no drivers installed. I feel like I've tried about everything I can think of and any help would be much appreciated!

    Read the article

  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to parse in the update of my Particle system that I've made with DirectX 10. So far, my update method for the particle systems is 1 line of code within a for loop that makes each particle fall down the y axis to simulate a waterfall: m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f); In my .cu class I've created a struct which I copied from my particle class and is as follows: struct ParticleType { float positionX, positionY, positionZ; float red, green, blue; float velocity; bool active; }; Then I have an UpdateParticle method in the .cu as well. This encompass the 3 main parameters my particles need to update themselves based off the initial line of code. : __global__ void UpdateParticle(float* position, float* velocity, float frameTime) { } This is my first CUDA program and I'm at a loss to what to do next. I've tried to simply put the particleList line in the UpdateParticle method, but then the particles don't fall down as they should. I believe it is because I am not calling something that I need to in the class where the particle fall code use to be. Could someone please tell me what it is I am missing to get it working as it should? If I am doing this completely wrong in general, the please inform me as well.

    Read the article

  • Slightly off topic - How to Fix Sky Go Error [t6013-c1501] (and [t6000-c1501])

    - by bconlon
    Sky doesn't seem to understand what their own errors mean, so I cobbled together an understanding from some other posts and managed to get it working. When you see the error [t6013-c1501] instead of your TV programme in Sky Go, it seems to mean: 'You registered a device, but then changed the hardware, so now I'm confused!' In other words, the Digital Rights Management (DRM) used between Sky Go and Silverlight stored an old fingerprint of your PC, but rather than recognising this and allowing you to remove the device, it just disappears from the 'Manage Devices' page. DISCLAIMER: Perform the following steps at your own risk. It worked for me, but I didn't care if it broke stuff. If you care....don't do it! So, to fix this I did the following: 1. Log in to Sky Go and click 'Watch live TV' from the home page. It will attempt to show Sky News and fail with the error [t6013-c1501]. 2. Right click on the error and you should see the menu option 'Silverlight'. Select this and a dialog should appear. Click the 'Application Storage' tab and delete any entry that relates to Sky Go. Click OK to close the dialog. 3. Open Explorer and navigate to the folder C:\ProgramData\Microsoft\PlayReady 4. Rename the file mspr.hds to mspr.hds.OLD 5. Go back to the browser and press F5. You may need to logout/login (not sure). Note: Don't rename/delete the folder C:\ProgramData\Microsoft\PlayReady or you will get the error [t6000-c1501]. The folder must exist in order for the new file to be created by Silverlight. Techie talk: So whoever wrote the code to create a new mspr.hds file didn't write code to check the folder existed, causing what I assume is a generic error t6000, probably something like: catch (Exception ex) { WriteToLog("Oops, something broke!"); }

    Read the article

  • No access to Samba shares

    - by koanhead
    I have three shared folders in my local home directory- that is to say, on my Ubuntu desktop's /home/me/. All were set up using "Sharing Options" in Nautilus' right-click menu. The standard "Music" and "Videos" folders are configured identically: the "Guest Access" box is checked, but the "Allow others to create and delete" is not. The third folder, called "shared", is configured to not allow Guest access but to allow others to modify files. I have not altered /etc/samba/smb.conf by hand, I have only used Sharing Options to create and modify these so-called "shares". My roommates have two Windows 7 computers and one Ubuntu Netbook Remix netbook. I have the aforementioned desktop machine and laptop running 10.04. None of these machines can access any of the shares. Attempts to access the Guest shares result in the message \\machine\directory is not accessible. The network name could not be found. This is the error message generated by a VM running Windows 2000. The other Windows machines generate a similar error. The Ubuntu laptop gives the error Unable to mount location: Failed to mount Windows share. Hurrah, once again, for informative error messages. That really helps a lot. When attempting to browse the folder called "shared" from the laptop, I'm confronted with a password dialog. This behavior is the same will all machines I've tried in the situation. On entering my username and password for the account to which the shares belong, the password dialog briefly disappears and is replaced with an identical dialog. No error message, useful or not, appears. When attempting to browse this folder with the VM, the outcome is the same except that the password dialog helpfully states "incorrect username or password". My assumption is that the username and password in question is that of the user which owns the shares. I have tried all other username and password combinations available in this context and the outcome is the same. I would like to be able to share files. Sharing them with Windows machines is a nice feature, or would be if it was available. Really I consider sharing files between two machines with the same version of the same operating system kind of a minimum condition for network usability. Samba last functioned reliably for me more than ten years ago. I have attempted to use it on and off since then with only intermittent success. Oh, and "Personal File Sharing" from the Preferences menu does not result in an entry in Places → Network → my-server. In fact, the old entry "MY-SERVER" goes away and is replaced by "koanhead's public files on my-server", which when I attempt to open it from the laptop gives a "DBus.Error.NoReply: Message did not receive a reply." I know I come here and gripe about Ubuntu a lot, but on the other hand I spend literally hours every day trying to fix things in Ubuntu. It's a good system which aspires to greatness, which is why things like this either Need to work; or Be adequately documented. Ideally both would be the case. Anyway, rant over. Hopefully someone will have some insight on this issue. Thanks all who bother to read this wall o'text for your time.
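
    For reference, a hedged sketch of what the three Nautilus shares are meant to express, written as explicit definitions in /etc/samba/smb.conf (followed by a restart of smbd); the paths and the username "me" follow the home directory mentioned above. Defining them this way bypasses the Nautilus usershare mechanism entirely, which can help narrow down whether the problem is in the share definitions or elsewhere:

      [Music]
          path = /home/me/Music
          read only = yes
          guest ok = yes

      [Videos]
          path = /home/me/Videos
          read only = yes
          guest ok = yes

      [shared]
          path = /home/me/shared
          read only = no
          guest ok = no
          valid users = me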

    Read the article

  • How to optimize a box2d simulation in action game?

    - by nathan
    I'm working on an action game and I use Box2D for physics. The game uses a tiled map. I have different types of body: static ones used for tiles, and dynamic ones for the player and enemies. I tested my game with ~150 bodies and I get a constant 60 fps on my computer, but not on my mobile (Android). The FPS drops as the number of bodies increases. After profiling the Android application, I saw that World.step took around 8 ms of CPU time to execute. Here are a few things to note: not all of the world is visible on screen (I use a scrolling system); enemies are constantly moving toward the player, so there is always a force applied to their bodies; enemies need to collide with each other; enemies collide with tiles. I also know that I can activate/deactivate or sleep/wake bodies. Considering that only some of the enemies are visible on screen at any time, are there any optimizations I can do to reduce the execution time of the Box2D simulation? I found a guy trying an optimization based on the distance of enemies from the player (link). But it seems like he just deactivates far bodies (in my case, I could deactivate bodies that are not visible). But my enemies need to move even when they are not visible on screen, and applying forces will not work on inactive bodies. Should I play with sleeping bodies here? Also, enemies are composed of two fixtures and are constantly colliding with each other and with tiles, but I really never need to get notified about that. Is there anything I can do to optimize this kind of scenario? Finally, am I wrong to try to run the simulation at 60 FPS on mobile, and should I try to make it run at 30 FPS?
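
    One way to apply the distance-based idea from the linked post (a hedged Java sketch assuming the libGDX Box2D wrapper; the activation radius and helper names are illustrative) is to deactivate enemies far from the player each frame, before calling world.step():

      import com.badlogic.gdx.math.Vector2;
      import com.badlogic.gdx.physics.box2d.Body;

      public class EnemyActivator {
          // Radius (in Box2D metres) around the player inside which enemies are simulated.
          private static final float ACTIVATION_RADIUS = 25f;

          // Call once per frame, before world.step(); toggling isActive during the step is not allowed.
          public static void update(Vector2 playerPosition, Iterable<Body> enemyBodies) {
              float radius2 = ACTIVATION_RADIUS * ACTIVATION_RADIUS;
              for (Body enemy : enemyBodies) {
                  boolean near = enemy.getPosition().dst2(playerPosition) <= radius2;
                  if (enemy.isActive() != near) {
                      // setActive(false) removes the body from the broadphase entirely,
                      // which is cheaper than letting it sleep and still be collision-checked.
                      enemy.setActive(near);
                  }
              }
          }
      }

    Enemies that must keep advancing while deactivated can be moved with plain vector math outside the physics world and re-synced via body.setTransform() when they are reactivated, which is far cheaper than simulating their collisions off-screen.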

    Read the article

  • what's a good approach to working with multiple databases?

    - by Riz
    I'm working on a project that has its own database (call it InternalDb), but it also queries two other databases (call them ExternalDb1 and ExternalDb2). Both ExternalDb1 and ExternalDb2 are actually required by a few other projects. I'm wondering what the best approach for dealing with this is. Currently, I've just created a project for each of these external databases and then generated the EDMX and entities using the Entity Framework approach. My thought was that I could then include these projects in any of my solutions that require access to these databases. Also, I don't have any separate business layers. I just have a solution like below: Project.Domain ExternalDb1Project.Domain ExternalDb2Project.Domain Project.Web So my Domain projects contain the data access as well as the POCOs generated by Entity Framework and any business logic. But I'm not sure if this is a good approach. For example, if I want to do validation in my Project.Domain on the entities in the InternalDb, it's fine. But if I want to do validation for entities from either of the ExternalDbs, then I wonder where it should go? To be more specific, I retrieve Employees from ExternalDb1Project.Domain. However, I want to make sure they are Active. Where should this validation go? How should I architect a project like this at a high level? Also, I want to make sure that I use IoC for my data contexts so I can create fakes when writing tests. I wonder where the interfaces for these various data contexts should reside?
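
    One possible shape for the ExternalDb1 case (a hedged C# sketch; the interface, class, and property names are illustrative rather than taken from the actual projects): the external-database project exposes a narrow repository interface over its EF context, and the "must be Active" rule lives as validation/filtering in the consuming project, which also makes it easy to inject a fake repository in tests.

      using System.Collections.Generic;
      using System.Linq;

      // POCO as exposed to consuming projects (property names are illustrative).
      public class Employee
      {
          public int Id { get; set; }
          public string Name { get; set; }
          public bool IsActive { get; set; }
      }

      // Lives in ExternalDb1Project.Domain, implemented on top of its EF context;
      // consuming projects (and tests) only ever see this interface.
      public interface IEmployeeRepository
      {
          IEnumerable<Employee> GetEmployees();
      }

      // Lives in Project.Domain: the "only active employees" rule belongs to the
      // consumer, not to the shared external-database project.
      public class ActiveEmployeeService
      {
          private readonly IEmployeeRepository employees;

          public ActiveEmployeeService(IEmployeeRepository employees)
          {
              this.employees = employees;
          }

          public IEnumerable<Employee> GetActiveEmployees()
          {
              return employees.GetEmployees().Where(e => e.IsActive);
          }
      }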

    Read the article

  • Community TFS Build Manager available for Visual Studio 2012 RC

    - by Jakob Ehn
    I finally got around to pushing out a version of the Community TFS Build Manager that is compatible with Visual Studio 2012 RC. Unfortunately I had to do this as a separate extension; it references different versions of the TFS assemblies, and some properties and methods that the 2010 version uses are now obsolete in the TFS 2012 API. To download it, just open the Extension Manager, select Online and search for TFS Build:   You can also download it from this link: http://visualstudiogallery.msdn.microsoft.com/cfdb84b4-285e-4eeb-9fa9-dad9bfe2cd10 The functionality is identical to the 2010 version; the only difference is that you can’t start it from the Team Explorer Builds node (since the TE has been completely rewritten and the extension APIs are not yet published). So, to start it you must use the Tools menu: We will continue shipping updates to both versions in the future, as long as it is functionality that is compatible with both TFS 2010 and TFS 2012. You might also note that the color scheme used for the build manager doesn’t look as good with the VS2012 theme….   Hope you will enjoy the tool in Visual Studio 2012 as well. I want to thank all the people who have downloaded and used the 2010 version! For feedback, feature requests, and bug reports, please post to the CodePlex site: http://tfsbuildextensions.codeplex.com

    Read the article

  • Dual displays not working - NVidia - Ubuntu 12.04 - Second Monitor - Two Screens

    - by user75105
    Graphics Card: NVidia 460 GTX. Driver: NVIDIA accelerated graphics driver (version current) I have one DVI monitor, an old Dell LCD from 2005, and one VGA monitor, an Asus ML238H from 2010 whose HDMI port broke. The Asus is plugged into my graphics card's primary monitor slot and is the better monitor even though it is VGA but my computer defaults to the Dell. This happens when I boot as well; the loading screens, the motherboard brand image, etc. are all displayed on the Dell monitor until Windows loads. Then both monitors work. The same thing happened when I booted up Ubuntu 12.04 but I did not see the second monitor when the log-in screen popped up, nor did I when I logged in. I went to System Settings/Displays and my Asus monitor is not an option. I clicked Detect Displays and the Asus is not detected. I looked at the other questions regarding NVIDIA drivers and recalled my problems with Ubuntu a few years ago and decided to check the driver. I went to Additional Drivers to install the proprietary driver and it looks like it's installed and active but I'm still having this problem. There is another driver option, the post-release NVIDIA driver, but that does not fix the problem either. Also, under System Details/Graphics the graphics device is listed as Unknown, which might indicate that it is using an open source generic driver and not the proprietary NVidia driver. But under Additional Drivers it says that I am using the NVidia driver. Any help is appreciated.

    Read the article
