Search Results



  • Cannot get ATI Drivers installed

    - by bittoast67
    I am trying to install the Catalyst driver. The best I can get is a strange resolution problem, and Firefox acts all wonky. The worst I have gotten is low-graphics mode, in which case I just reinstall Ubuntu. I have an HP Pavilion Dv7 laptop with a Radeon HD 3200. I plan to try again with a fresh install of Ubuntu 12.04.3, as I have heard it is the most compatible. This is what I have done:

    1. I tried the easy way of going to Synaptic and installing the drivers that way: the fglrx package (not the fglrx update). If memory serves, I think that boots me into low-graphics mode. So, fresh install of Ubuntu, and I tried again.

    2. I have done everything a couple of times from this site (http://wiki.cchtml.com/index.php/Ubuntu_Precise_Installation_Guide), following every instruction to a T. That gets me something, such as a lowered fan speed and a much cooler computer, but I also lose most of my resolution, and Displays says it is the best resolution I can get. I also have a very screwy Firefox. Using this method I can see AMD Catalyst Control Center in my Dash (two of them, really: one administrator and one not), but when I try to open it, it says no AMD driver is detected. So again, Ubuntu reinstall.

    3. I have tried the GUI method with the legacy driver I got from AMD's site. It runs through smoothly, and at the very end, after I exit the installer, it gives me an error.

    4. I have also tried various other methods using the terminal, as well as various other drivers (the one from AMD's site and the one suggested in the above link for my graphics card), both to no avail.

    When I try the method in the link in number 2, I get the super-low resolution and the screwy Firefox. I type in fglrxinfo and get a BadRequest error. I have yet to type fglrxinfo and get anything like what I am supposed to.

    UPDATE: I am now reinstalling Ubuntu 12.04. I tried the above-mentioned link - thank you very much! - just to see, on the previously failed driver attempt, by following the purge commands. To no avail: when typing fglrxinfo I still get the BadRequest error. I will update again after a try with a true fresh install. Thanks again!

    UPDATE: Alright everyone, still no go. I have done everything word for word in the provided tutorial. I have rebooted my computer, again to a broken resolution, and this is what I get when typing fglrxinfo:

        $ fglrxinfo
        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  153 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  12
          Current serial number in output stream:  12

    I would also like to add that when installing the file fglrx_8.970-0ubuntu1_amd64.deb I got this:

        Building initial module for 3.8.0-29-generic
        Error! Bad return status for module build on kernel: 3.8.0-29-generic (x86_64)
        Consult /var/lib/dkms/fglrx/8.970/build/make.log for more information.
        update-initramfs: deferring update (trigger activated)
        Processing triggers for ureadahead ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.8.0-29-generic
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    Any ideas? Anyone? I can't for the life of me figure out what I am doing wrong.


  • Executing Stored Procedures in Visual Studio LightSwitch.

    - by dataintegration
    A LightSwitch project is a very easy way to visualize and manipulate information directly from one of our ADO.NET providers. But when it comes to executing stored procedures, it can be a bit more complicated. In this article, we will demonstrate how to execute a stored procedure in LightSwitch. For the purposes of this article, we will be using the RSSBus Email Data Provider, but the same process will work with any of our ADO.NET providers.

    Creating the RIA Service

    Step 1: Open Visual Studio and create a new WCF RIA Service Class Project.

    Step 2: Add a reference to the RSSBus Email Data Provider dll in the (ProjectName).Web project.

    Step 3: Add a new Domain Service Class to the (ProjectName).Web project.

    Step 4: In the new Domain Service Class, create a new class with the attributes needed for the stored procedure's parameters. In this demo, the stored procedure we are executing is called SendMessage. The parameters we will need are as follows:

        public class NewMessage
        {
            [Key]
            public int ID { get; set; }
            public string FromEmail { get; set; }
            public string ToEmail { get; set; }
            public string Subject { get; set; }
            public string Text { get; set; }
        }

    Note: The created class must have an ID, which will serve as the key value.

    Step 5: Create a new method that will be executed when the insert event fires. Inside this method you can use standard ADO.NET code to execute the stored procedure:

        [Insert]
        public void SendMessage(NewMessage newMessage)
        {
            try
            {
                EmailConnection conn = new EmailConnection(connectionString);
                EmailCommand comm = new EmailCommand("SendMessage", conn);
                comm.CommandType = System.Data.CommandType.StoredProcedure;
                if (!newMessage.FromEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@From", newMessage.FromEmail));
                if (!newMessage.ToEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@To", newMessage.ToEmail));
                if (!newMessage.Subject.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Subject", newMessage.Subject));
                if (!newMessage.Text.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Text", newMessage.Text));
                comm.ExecuteNonQuery();
            }
            catch (Exception exc)
            {
                Console.WriteLine(exc.Message);
            }
        }

    Step 6: Create a query method. We are not going to be using getNewMessages(), so it does not matter what it returns for the purposes of our example, but you will need to create a method for the query event as well:

        [Query(IsDefault=true)]
        public IEnumerable<NewMessage> getNewMessages()
        {
            return null;
        }

    Step 7: Rebuild the whole solution.

    Creating the LightSwitch Project

    Step 8: Open Visual Studio and create a new LightSwitch Application Project.

    Step 9: Under Data Sources, add a new data source. Choose a WCF RIA Service.

    Step 10: Choose to add a new reference and select the (ProjectName).Web.dll generated from the RIA Service.

    Step 11: Select the entities you would like to import. In this case, we are using the recently created NewMessage entity.

    Step 12: In the Screens section, create a new screen and select the NewMessage entity as the Screen Data.

    Step 13: After you run the project, you will be able to add a new record and save it. This will execute the stored procedure and send the new message. If you create a screen to check the sent messages, you can refresh this screen to see the mail you sent.

    Sample Project

    To help you get started using stored procedures in LightSwitch, download the fully functional sample project. You will also need the RSSBus Email Data Provider to make the connection. You can download a free trial here.


  • College Ratings via the Federal Government

    - by user9147039
    A few weeks back you might remember news about a higher-education rating system proposal from the Obama administration. As I've discussed previously, political and stakeholder pressures to improve outcomes and increase transparency are stronger than ever before. The executive branch proposal is intended to make progress in this area. Quoting from the proposal itself: "The ratings will be based upon such measures as: Access, such as percentage of students receiving Pell grants; Affordability, such as average tuition, scholarships, and loan debt; and Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates."

    This is going to be quite complex, to say the least. Most notably, higher ed is not monolithic. From community and other 2-year colleges, to small private 4-year schools, to professional schools, to large public research institutions, the walks of higher-ed life are, well, many. Designing a ratings system that doesn't wind up with lots of unintended consequences and collateral damage will be difficult. At best you could end up tarnishing the reputation of certain institutions that were actually performing well against the metrics and outcome measures that make sense in their own "context" of education. At worst you could spend a lot of time and resources designing a system that loses credibility with its "customers".

    A lot of institutions I work with already have systems like the one described above in place. They are tracking completion rates, completion timeframes, transfers to other institutions, job placement, and salary information. As I talk to these institutions, there are several constants worth noting:

    • Deciding on which metrics to measure is complicated. While employment and salary data are relatively easy to track, qualitative measures are more difficult. How do you quantify the benefit to someone who studies in a field that may not compensate him or her as well as another, but that provides huge personal fulfillment and reward?

    • The data is available, but the systems to transform the data into actual information that can be used in meaningful ways are not. Too often in higher ed, information is siloed. As such, much of the data that needs to be part of a comprehensive system sits in multiple organizations, oftentimes outside the reach of core IT.

    • Politics and culture are big barriers. One of the areas that my team and I spend a lot of time talking about with higher-ed institutions all over the world is the imperative to optimize for student success. This, like the tracking of students' achievement after graduation, requires a level of organizational capacity that does not currently exist. The primary barrier is the culture of "data islands" in higher ed, and the need for leadership to drive out the divisions between departments, schools, colleges, etc. and institute academy-wide analytics and data stewardship initiatives that will enable student success.

    • Data quality is a very big issue. So many disparate systems exist (some on-premise, some "in the cloud") that keep data about "persons" using different means to identify them. Establishing a single source of truth about an individual and his or her data is difficult without some type of data-quality policy and tools. Good tools actually exist but are seldom leveraged.

    Don't misunderstand - I think it's a great idea to drive additional transparency and accountability into the system of higher education. And not just at home, but globally. Students and parents need access to key data to make informed, responsible choices. The tools exist not only to enable this kind of information to be shared, but also to capture the very metrics stakeholders care most about, in a way that makes sense in the context of a given institution's "place" in the overall higher-ed panoply.


  • Introducing the Metro User Interface on Windows 2012

    - by andywe
    Although I am a big fan of using PowerShell for many of my server operations, that aspect is well covered by those far more knowledgeable than I, and there is vast information around the web on it already. The new Metro interface, though, and getting around both Windows 8 and Windows Server 2012, is relatively new, even for those who ran the previews.

    What is this? A blank desktop! Where did the Start button go?

    Well, it is still there... sort of. It is hidden, and acts like an auto-hidden component that appears only when the mouse is hovered over the lower-left corner of the screen. Those familiar with GNOME or OS X can relate this to the "hot corners" functions. To get to the Start button, hover your mouse in the very left corner of the taskbar. Let it sit there a moment, and a small blue square with colored tiles in it, titled Start, will appear. Click it.

    I clicked it, and now I have all the tiles. What is this?

    Welcome to the Metro interface. This is a much more modern look, and although at first it seems weird and cumbersome, I have actually found that it is a bit more extensible, allowing greater organization and customization than the older Explorer desktop. If you look closely, you'll see each box represents either a program or a program group.

    First, a few basics about using the Start view. First and foremost, a right mouse click will bring up a bar at the bottom, with an icon towards the right. Notice it is titled "All Apps". An even easier way in many places is to hover your mouse in the exact opposite corner, in the upper right. A sidebar will open and expose what used to be a widget bar (remember Vista?), with options for Search, Start, and Settings.

    OK, great, but where is everything? It's all there... Click the All Apps icon. Look better? Notice the scroll bar at the bottom. Move it right; your desktop is sized to your content, so you can have a smaller or larger number of programs exposed. Each icon can be secondary-clicked (right mouse click for most of us), and an options bar at the bottom, rather than the old small context menu, is opened with some very familiar options.

    Notice the top of the Windows Explorer window has some new features. You still have your right-mouse-click functions, but since the shortcuts for these items already exist, just copy them. There are many ways, but here is a long way to show you more of the interface:

    1. Right mouse click a program icon, and select the Open File Location option.

    2. Trusty File Explorer opens, but if you look closely at the top edge of the window, you'll see a nifty enhancement: an orange-colored box titled Shortcut Tools and another lavender box titled Application Tools. Each of these adds options at the top of the File Explorer window to make selection easy. Of course, you can still secondary-click an item in the listing window too.

    3. Click Shortcut Tools, right click your app shortcut and copy it. Then simply paste it onto the desktop outside the File Explorer window.

    Also note some of the newer features: the large icons up top, below the menu, cover many common operations, and the options change as you select each menu item.

    Well, that's it for this installment. I hope this helps you out.


  • WebLogic How-to Videos: Install, Upgrade, & Patch

    - by Ruma Sanyal
    Here is another great YouTube video by our product manager Monica Riccelli. She talks about installers now being standardized in Oracle for greater consistency: no more WebLogic native installers. Also, the JDK is no longer part of the WebLogic install. The various installers she discusses include OUI, ZIP, OEPE, Coherence, and more. Monica then takes us through a step-by-step install process. After the install process is complete, the video takes us through the configuration wizard. The ZIP installer is then discussed, with its strengths (it is the smallest downloadable option, easy to use, and very popular with our customers) and its limitations (it is for development only and not to be used in production) highlighted. Monica then takes us through the configuration wizard, its usage, and when to use WLST scripts. The video then discusses NodeManager and its usage, and how to reconfigure a WebLogic domain on upgrade, through our GUI tools or through the command-line interface. Lastly, it highlights OPatch, a patch application tool used by our customers and standardized across all Oracle products. Really detailed video. Check it out!


  • Editing files without race conditions?

    - by user2569445
    I have a CSV file that needs to be edited by multiple processes at the same time. My question is, how can I do this without introducing race conditions?

    It's easy to write to the end of the file without race conditions by open(2)ing it in "a" (O_APPEND) mode and simply writing to it. Things get more difficult when removing lines from the file. The easiest solution is to read the file into memory, make changes to it, and overwrite the file. If another process writes to it after it has been read into memory, however, that new data will be lost upon overwriting. To further complicate matters, my platform does not support POSIX record locks, checking for file existence is a race condition waiting to happen, rename(2) replaces the destination file if it exists instead of failing, and editing files in place leaves leftover bytes at the end unless the remaining bytes are shifted towards the beginning of the file.

    My idea for removing a line is this (in pseudocode):

        filename = "/home/user/somefile";
        file = open(filename, "r");
        tmp = open(filename+".tmp", "ax") || die("could not create tmp file");  // "a" is O_APPEND, "x" is O_EXCL|O_CREAT
        while(write(tmp, read(file)));  // copy $file to $file+".tmp"
        close(file);
        // edit tmp file
        unlink(filename) || die("could not unlink file");
        file = open(filename, "wx") || die("another process must have written to the file after we copied it.");  // "w" is overwrite, "x" is force file creation
        while(write(file, read(tmp)));  // copy ".tmp" back to the original file
        unlink(filename+".tmp") || die("could not unlink tmp file");

    Or would I be better off with a simple lock file?

    Appender process:

        lock = open(filename+".lock", "wx") || die("could not lock file");
        file = open(filename, "a");
        write(file, "stuff");
        close(file);
        close(lock);
        unlink(filename+".lock");

    Editor process:

        lock = open(filename+".lock", "wx") || die("could not lock file");
        file = open(filename, "rw");
        while(contents += read(file));
        // edit "contents"
        write(file, contents);
        close(file);
        close(lock);
        unlink(filename+".lock");

    Both of these rely on an additional file that will be left over if a process terminates before unlinking it, causing other processes to refuse to write to the original file. In my opinion, these problems are brought on by the fact that the OS allows multiple writable file descriptors to be opened on the same file at the same time, instead of failing if a writable file descriptor is already open. It seems that O_CREAT|O_EXCL is the closest thing to a real solution for preventing filesystem race conditions, aside from POSIX record locks.

    Another possible solution is to separate the file into multiple files and directories, so that more granular control can be gained over components (lines, fields) of the file using O_CREAT|O_EXCL. For example, "file/$id/$field" would contain the value of column $field of line $id. It wouldn't be a CSV file anymore, but it might just work.

    Yes, I know I should be using a database for this, as databases are built to handle these types of problems, but the program is relatively simple and I was hoping to avoid the overhead. So, would any of these patterns work? Is there a better way? Any insight into these kinds of problems would be appreciated.
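    For what it's worth, the same lock-file idea can be expressed on any runtime that exposes an exclusive-create flag. Here is a minimal C# sketch (illustrative only: the file names are made up, and it inherits the same stale-lock weakness described above), since FileMode.CreateNew has O_CREAT|O_EXCL semantics:

        using System;
        using System.IO;
        using System.Threading;

        class LockFileEditor
        {
            // Acquire the lock by creating the lock file exclusively (fails if it
            // already exists), mirroring open(path, O_CREAT|O_EXCL).
            static FileStream AcquireLock(string lockPath)
            {
                while (true)
                {
                    try
                    {
                        return new FileStream(lockPath, FileMode.CreateNew,
                                              FileAccess.Write, FileShare.None);
                    }
                    catch (IOException)
                    {
                        Thread.Sleep(50); // another process holds the lock; retry
                    }
                }
            }

            static void Main()
            {
                string path = "somefile.csv";      // hypothetical data file
                string lockPath = path + ".lock";  // hypothetical lock file

                FileStream lockFs = AcquireLock(lockPath);
                try
                {
                    // Editor process: read, modify, overwrite while holding the lock.
                    string[] lines = File.ReadAllLines(path);
                    // ... remove or edit lines here ...
                    File.WriteAllLines(path, lines);
                }
                finally
                {
                    lockFs.Dispose();
                    File.Delete(lockPath); // release the lock; leaks if we crash first
                }
            }
        }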


  • Identity in .NET 4.5 – Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library because many features are now in the box. Along the way I found some other nice additions:

    ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll(). ClaimsPrincipal has those methods as well, but they work across all contained identities. Nice!

    ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting is necessary anymore.

    SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT.

    A new session security token handler uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web farm scenarios.

    No need for a custom service host factory or the federation behavior anymore: WCF can be switched into "WIF mode" with the useIdentityConfiguration switch (odd name, though).

    Tooling has become better, and the new test STS makes it very easy to get started.

    On the other hand - and this was kind of expected - to bring claims into the core framework, there are also some breaking changes for WIF code. If you want to migrate (and I would recommend it), most changes to your code are mechanical. The following is a brain dump of the changes I encountered:

    The assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services), and System.ServiceModel. All the namespaces have changed as well.

    No IClaimsPrincipal and IClaimsIdentity anymore.

    The configuration section has been split into <system.identityModel /> and <system.identityModel.services />. The WCF configuration story has changed as well.

    Claim.ClaimType is now Claim.Type.

    ClaimCollection is now IEnumerable<Claim>.

    IsSessionMode is now IsReferenceMode.

    Bootstrap token handling is different now.

    ClaimsPrincipalHttpModule is gone. It is not really needed anymore, apart from maybe claims transformation (see here).

    Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()).

    SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>.

    Some lower-level helper classes are gone or internal now (e.g. KeyGenerator).

    The WCF WS-Trust bindings are gone. I think this is a pity; they were *really* useful when doing work with WSTrustChannelFactory.

    Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you will want to make the move.
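    As a quick illustration of the new query methods, here is a minimal sketch against System.Security.Claims (the claim values are made up):

        using System;
        using System.Collections.Generic;
        using System.Security.Claims;

        class ClaimsDemo
        {
            static void Main()
            {
                // Build an identity with a few sample claims (values are made up).
                var identity = new ClaimsIdentity(new List<Claim>
                {
                    new Claim(ClaimTypes.Name, "alice"),
                    new Claim(ClaimTypes.Role, "admin"),
                    new Claim(ClaimTypes.Email, "alice@example.com")
                }, "Custom");

                var principal = new ClaimsPrincipal(identity);

                // The new query methods: no casting or manual loops needed,
                // and they work across all identities in the principal.
                bool isAdmin = principal.HasClaim(ClaimTypes.Role, "admin");
                Claim email = principal.FindFirst(ClaimTypes.Email);

                Console.WriteLine("admin: {0}, email: {1}", isAdmin,
                                  email != null ? email.Value : "(none)");
            }
        }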


  • Simple Navigation In Windows Phone 7

    - by PeterTweed
    Take the Slalom Challenge at www.slalomchallenge.com!

    When moving to the mobile platform, all applications need to be able to provide different views. Navigating around views in Windows Phone 7 is a very easy thing to do. This post will introduce you to the simplest technique for navigation in Windows Phone 7 apps.

    Steps:

    1. Create a new Windows Phone Application project.

    2. In the MainPage.xaml file, copy the following XAML into the ContentGrid Grid:

        <StackPanel Orientation="Vertical" VerticalAlignment="Center">
            <TextBox Name="ValueTextBox" Width="200"></TextBox>
            <Button Width="200" Height="30" Content="Next Page" Click="Button_Click"></Button>
        </StackPanel>

    This gives a text box for the user to enter text and a button to navigate to the next page.

    3. Copy the following event handler code to the MainPage.xaml.cs file:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            NavigationService.Navigate(new Uri(
                string.Format("/SecondPage.xaml?val={0}", ValueTextBox.Text),
                UriKind.Relative));
        }

    The event handler uses the NavigationService.Navigate() function. This is what makes the navigation to another page happen. The function takes a Uri parameter with the name of the page to navigate to and the indication that it is a relative Uri to the current page. Note also that the query string is formatted with the value entered in the ValueTextBox control, in a similar manner to a standard web query string.

    4. Add a new Windows Phone Portrait Page to the project named SecondPage.xaml.

    5. Paste the following XAML in the ContentGrid Grid in SecondPage.xaml:

        <Button Name="GoBackButton" Width="200" Height="30" Content="Go Back" Click="Button_Click"></Button>

    This provides a button to navigate back to the first page.

    6. Copy the following event handler code to the SecondPage.xaml.cs file:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            NavigationService.GoBack();
        }

    This tells the application to go back to the previously displayed page.

    7. Add the following code to the constructor in SecondPage.xaml.cs:

        this.Loaded += new RoutedEventHandler(SecondPage_Loaded);

    8. Add the following Loaded event handler to the SecondPage.xaml.cs file:

        void SecondPage_Loaded(object sender, RoutedEventArgs e)
        {
            if (NavigationContext.QueryString["val"].Length > 0)
                MessageBox.Show(NavigationContext.QueryString["val"], "Data Passed", MessageBoxButton.OK);
            else
                MessageBox.Show("{Empty}!", "Data Passed", MessageBoxButton.OK);
        }

    This code pops up a message box displaying either the text entered on the first page or the message "{Empty}!" if no text was entered.

    9. Run the application, enter some text in the text box, and click the Next Page button to see the application in action.

    Congratulations! You have created a new Windows Phone 7 application with page navigation.


  • Software and/(x)or Hardware Projects for Pre-School Kids

    - by haylem
    I offered to participate at my kid's pre-school for various activities (yes, I'm crazy like that), and one of them is to help them discover extra-curricular (big word for a pre-school, but for lack of a better one... :)) hobbies, which may or may not relate to a professional activity. At first I thought that it wouldn't be really easy to have pre-schoolers relate to programming or the internal workings of a computer system in general (and I'm more used to teaching middle-school to university-level students), but then I thought there must be a way. So I'm trying to figure out ways to introduce very young kids (3yo) to computer systems in a fun and preferably educational way. Of course, I don't expect them to start smashing the stack for fun and profit right away (or at least not voluntarily, though I could use the occasion for some toddler tests...), but I'm confident there must be ways to get them interested in both: using the systems, becoming curious about understanding what they do, and interacting with the systems to modify them. I guess this setting is not really relevant after all; it's pretty much the same as if you were aiming to achieve the same for your own kids at home.

    Ideas

    Considering we're talking 3yo pre-schoolers here, and that at this age some kids are already quite confident using a mouse (some even a keyboard, if not for typing, at least to press some buttons they've come to associate with actions) while others have not yet had any interaction with computers of any kind, it needs to be:

    - rather basic,
    - demonstrated and played with in less than 5 or 10 minutes,
    - doable in groups or alone,
    - scalable and extendable in complexity to accommodate their varying abilities.

    The obvious options are: basic smallish games to play with; interactive systems like LOGO, Kojo, Squeak and clones (possibly even simpler than that); or things like Lego systems. I guess it can be a thing to reflect on both at the software and the hardware levels: it could be done with a desktop or laptop machine, a tablet, a smartphone (or a crap-phone, for that matter, as long as you can modify it), or even get down to building something from scratch (Raspberry Pi and Arduino being popular options at the moment). It can probably be in the form of games, funny visualizations (which are pretty much games) with Prototype, or virtual worlds to explore. I also thought in the moment (and I hope this won't offend anyone) that some approaches to teaching pets could work: reward systems, haptic feedback and such things could quickly point a kid in the right direction to understanding how things work, in a similar fashion - I'm not suggesting to shock the kids!

    Hmm, Is There an Actual Question in There?

    What type of systems do you think might be a good fit, both in terms of hardware and software? Have you seen such systems, or do you have anything in mind to work on? Are you aware of some research in this domain, with tangible results? Any input is welcome. It's not that I don't see options: there are tons, but I have a harder time pinpointing a more concrete and definite type of project/activity, so I figure some of you have valuable ideas or existing ones.

    Note: I am not advocating that every kid should learn to program or be interested in computer systems, or that all of them in a class would even care enough to follow such an introduction with more than a blank stare. I don't buy into the "everybody would benefit from learning to program" thing. It wouldn't hurt, but it's not necessary in any way. But if I can walk out of there with a few of them having smiled using the thing (or heck, cried because others took it away from them), that'd be good enough.

    Related Questions

    I've seen these, and they seem to complement what I'm looking for, but not exactly for the same age groups or with the same goals:

    - Teaching Programming to Kids
    - Recommendations for teaching kids math concepts & skills for programming?


  • Multidimensional multiple-choice knapsack problem: find a feasible solution

    - by Onheiron
    My assignment is to use local search heuristics to solve the multidimensional multiple-choice knapsack problem, but to do so I first need to find a feasible solution to start with. Here is an example problem with what I have tried so far.

    Problem

                     R1  R2  R3
        RESOURCES:    8   8   8
        GROUPS:
        G1: 11.0      3   2   2
            12.0      1   1   3
        G2: 20.0      1   1   3
             5.0      2   3   2
        G3: 10.0      2   2   3
            30.0      1   1   3

    Sorting strategies

    To find a starting feasible solution for my local search, I decided to ignore maximization of gains and just try to fit the resource requirements. I decided to sort the choices (strategies) in each group by comparing their "distance" from the origin of the multidimensional space, i.e. computing SQRT(R1^2 + R2^2 + ... + RN^2). I felt like this was a keen solution, as it somehow privileges choices whose resource usages are close to each other (e.g. R1:2 R2:2 R3:2 < R1:1 R2:2 R3:3) even when the total sum is the same. Doing so and selecting the best choice from each group proved sufficient to find a feasible solution for many [30] different benchmark problems, but of course I knew it was just luck. So I came up with the problem presented above, which sorts like this:

                     R1  R2  R3
        RESOURCES:    8   8   8
        GROUPS:
        G1: 12.0      1   1   3   < select this
            11.0      3   2   2
        G2: 20.0      1   1   3   < select this
             5.0      2   3   2
        G3: 30.0      1   1   3   < select this
            10.0      2   2   3

    And it is not feasible, because the resource consumption is R1:3, R2:3, R3:9. The easy solution is to pick one of the second-best choices in group 1 or 2, so I'll need some kind of iteration (local search[?]) to find the starting feasible solution for my local search. Here are the options I came up with.

    Option 1: iterate choices

    I tried to find a way to iterate all the choices in a specific order, something like

        G1 G2 G3
         1  1  1
         2  1  1
         1  2  1
         1  1  2
         2  2  1
         ...

    believing that feasible solutions won't be that far away from the unfeasible one I start with, and thus the number of iterations will stay quite low. Does this make any sense? If yes, how can I iterate the choices (grouped combinations) of each group while keeping "as near as possible" to the previous iteration?

    Option 2: change the comparison term

    I tried to think of a better variable to sort the choices on. I thought of a measure of how "precious" a resource is based on supply and demand, so that a higher demand for a more precious resource pushes you down the list, but this didn't help at all. Also, I suspect there is no comparison variable that assures me a feasible solution at first strike. Is there such a variable? If not, is there a better sorting criterion anyway?

    Option 3: implement any known sub-optimal fast solving algorithm

    Unfortunately I could not find any such algorithms online. Any suggestion?
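    For concreteness, the sorting strategy above can be sketched like this (C# used purely for illustration; the data structures and names are mine, not part of the assignment). It orders each group's choices by the Euclidean norm of their resource vector, picks the first of each group, and checks feasibility:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Choice
        {
            public double Value;      // gain, ignored while searching for feasibility
            public int[] Resources;   // resource requirements, e.g. {1, 1, 3}

            // Distance of the resource vector from the origin:
            // SQRT(R1^2 + R2^2 + ... + RN^2)
            public double Norm()
            {
                return Math.Sqrt(Resources.Sum(r => (double)r * r));
            }
        }

        class FeasibilitySearch
        {
            static void Main()
            {
                int[] capacity = { 8, 8, 8 };
                var groups = new List<List<Choice>>
                {
                    new List<Choice> { new Choice { Value = 11, Resources = new[] { 3, 2, 2 } },
                                       new Choice { Value = 12, Resources = new[] { 1, 1, 3 } } },
                    new List<Choice> { new Choice { Value = 20, Resources = new[] { 1, 1, 3 } },
                                       new Choice { Value = 5,  Resources = new[] { 2, 3, 2 } } },
                    new List<Choice> { new Choice { Value = 10, Resources = new[] { 2, 2, 3 } },
                                       new Choice { Value = 30, Resources = new[] { 1, 1, 3 } } }
                };

                // Pick the lowest-norm choice of each group.
                var picks = groups.Select(g => g.OrderBy(c => c.Norm()).First()).ToList();

                // Check feasibility: summed consumption must fit every capacity.
                bool feasible = Enumerable.Range(0, capacity.Length)
                    .All(i => picks.Sum(c => c.Resources[i]) <= capacity[i]);

                Console.WriteLine("feasible: " + feasible); // False for this instance (R3 = 9 > 8)
            }
        }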


  • C# Dev Challenge Part 1 of n – Beginner Edition

    - by mbcrump
    I developed this challenge to test one's knowledge of C#. I am planning on creating several challenges with different skill sets, so don't get mad if this challenge doesn't really challenge you... I noticed that most people like short quizzes, so this one only contains 5 questions. All of the challenges are clear and concise about what I am asking you to do. No smoke and mirrors here, meaning that none of the code has syntax errors. The purpose of this exercise is to test several OOP concepts and see how much of the C# language you really know.

    Question #1 - Let's start off easy... Will the following code snippet compile successfully?

    What does this question test? Can this compile without a namespace? Do you have to have an entry point of "static void Main()"?

        class Test
        {
            static int Main()
            {
                System.Console.WriteLine("Developer Challenge");
                return 0;
            }
        }

    Answer (select text in box below):

        Yes, it will compile successfully.

    Question #2 - What is the value of the Console.WriteLine statements?

    What does this question test? Do I understand reference types/value types? If a variable is declared with the @ symbol and it's not a reserved keyword, does the application compile successfully?

        using System;

        internal struct MyStruct
        {
            public int Value;
        }

        internal class MyClass
        {
            public int Value;
        }

        class Test
        {
            static void Main()
            {
                MyStruct @struct1 = new MyStruct();
                MyStruct @struct2 = @struct1;
                @struct2.Value = 100;

                MyClass @ref1 = new MyClass();
                MyClass @ref2 = @ref1;
                @ref2.Value = 100;

                Console.WriteLine("Value Type: {0} {1}", @struct1.Value, @struct2.Value);
                Console.WriteLine("Reference Type: {0} {1}", @ref1.Value, @ref2.Value);
            }
        }

    Answer (select text in box below):

        Value Type: 0 100
        Reference Type: 100 100

    Question #3 - What is the value of the Console.WriteLine statements?

    What does this question test? Can 2 objects reference the same point in memory?

        using System;

        class Test
        {
            static void Main()
            {
                string s1 = "Testing2";
                string t1 = s1;
                Console.WriteLine(s1 == t1);
                Console.WriteLine((object)s1 == (object)t1);
            }
        }

    Answer (select text in box below):

        True
        True

    Question #4 - What is the value of the Console.WriteLine statements?

    What does this question test? How does the "Stack" work - LIFO or FIFO?

        using System;
        using System.Collections;

        class Test
        {
            static void Main()
            {
                Stack a = new Stack(5);
                a.Push("1");
                a.Push("2");
                a.Push("3");
                a.Push("4");
                a.Push("5");
                foreach (var o in a)
                {
                    Console.WriteLine(o);
                }
            }
        }

    Answer (select text in box below):

        5
        4
        3
        2
        1

    Question #5 - What is the value of the Console.WriteLine statements?

    What does this question test? Array and general looping knowledge.

        using System;

        namespace ConsoleApplication5
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int[] J_LIST = new int[5] { 1, 2, 3, 4, 5 };
                    int K = 10;
                    int L = 5;
                    foreach (var J in J_LIST)
                    {
                        K = K - J;
                        L = K + 2 * J;
                        Console.WriteLine("J = {0, 5} K = {1, 5} L = {2, 5}", J, K, L);
                    }
                    Console.ReadLine();
                }
            }
        }

    Answer (select text in box below):

        J = 1 K = 9 L = 11
        J = 2 K = 7 L = 11
        J = 3 K = 4 L = 10
        J = 4 K = 0 L = 8
        J = 5 K = -5 L = 5

    Stay tuned for more challenges!


  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, Multi-Core Mania, on the conundrum of ever-increasing numbers of processor cores without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the (controversial) conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At that point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!)^2 possible "interleavings" of those instructions. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation and communicates with others by passing messages. It simplifies concurrent programs, but still has performance issues if the threads need to operate on the same large piece of data.

    There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why.

    Cheers,
    Tony.
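    To see the non-determinism in miniature, consider a toy sketch (illustrative only, not from any production system): two threads increment a shared counter without synchronization, and the number of lost updates changes from run to run.

        using System;
        using System.Threading;

        class RaceDemo
        {
            static int counter = 0; // shared, unsynchronized state

            static void Increment()
            {
                for (int i = 0; i < 1000000; i++)
                {
                    counter++; // read-modify-write: not atomic, so updates can be lost
                }
            }

            static void Main()
            {
                var t1 = new Thread(Increment);
                var t2 = new Thread(Increment);
                t1.Start(); t2.Start();
                t1.Join(); t2.Join();

                // Expected 2,000,000; typically prints a different number on each run.
                Console.WriteLine(counter);
            }
        }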


  • Is there any kind of established architecture for browser based games?

    - by black_puppydog
    I am beginning the development of a browser-based game in which players take certain actions at any point in time. Big parts of gameplay will happen in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken here. The point is that some events happen in the program (i.e. on the server, out of reach of the players), like pulling new results from some data source, the starting of a new round by a game master, and such. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot - again: I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and thus can be trusted to a certain degree not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide.

    I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototype different ideas. But I am bound to make some stupid mistakes at some point, so I would like to avoid some of that by getting a little "book smart" beforehand. So the question is: Is there any kind of architectural work that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, this seems more like a business application with business rules, workflows and such... Any good entry points for that?

    EDIT: After reading the first answers, I am under the impression of having made a mistake when including the "MMO" part in the title. The game will not be all fancy (i.e. 3D or such) on the client side, and the logic will exist completely on the server. That is, apart from basic form validation for the user, which will also be mirrored on the server side. So the target toolset will be HTML5, JavaScript, probably jQuery(UI). My question is more related to the software architecture/design of a system that enforces certain rules.

    Separation of ruleset and presentation

    One problem I am having is that I want to separate the game rules from the presentation. The first step would be to make a separate module for the game "engine" that only exposes an interface that allows all actions to be taken in a clean way. If an action fails with regard to some pre/post condition, the engine throws an exception, which is then presented to the user like "you cannot sell something you do not own" or "after that you would end up in a situation which is not a valid game state". The problem here is that I would like to be able to not even present an invalid action in the first place, or to grey out the corresponding UI elements. A sketch of this idea follows after the next section.

    Changing and tweaking the ruleset

    Another big thing is the ruleset. It will probably evolve over time and most definitely must be tweaked. What's more, it should be possible (to a certain extent) to build a ruleset that fits a specific game round, i.e. choosing different kinds of behaviours in different aspects of the game. This would do something like "we play it with extension A today, but we throw out extension B." For me, this screams "architectural/design pattern", but I have no idea who might have published on something like this, or even what to google for.
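    To make the "engine exposes an interface" idea concrete, here is a minimal sketch (written in C# just for brevity, with illustrative names; the same shape works in JavaScript on the server). Each action can be queried for validity, so the UI can grey out invalid actions, while Execute enforces the rule again server-side:

        using System;
        using System.Collections.Generic;

        // Hypothetical game state; the fields are placeholders.
        class GameState
        {
            public Dictionary<string, string> OwnerByItem = new Dictionary<string, string>();
        }

        // The engine's only contract: every rule lives behind this interface.
        interface IGameAction
        {
            // Used by the UI to grey out or hide invalid actions.
            bool CanExecute(GameState state);
            // Enforced again on the server; throws on an invalid transition.
            void Execute(GameState state);
        }

        class SellItem : IGameAction
        {
            private readonly string seller, item;
            public SellItem(string seller, string item) { this.seller = seller; this.item = item; }

            public bool CanExecute(GameState state)
            {
                string owner;
                return state.OwnerByItem.TryGetValue(item, out owner) && owner == seller;
            }

            public void Execute(GameState state)
            {
                if (!CanExecute(state))
                    throw new InvalidOperationException("you cannot sell something you do not own");
                state.OwnerByItem.Remove(item); // hand-off/approval logic would go here
            }
        }

        class Demo
        {
            static void Main()
            {
                var state = new GameState();
                state.OwnerByItem["ball"] = "alice";
                var action = new SellItem("alice", "ball");
                Console.WriteLine(action.CanExecute(state)); // True: UI can enable the button
                action.Execute(state);
            }
        }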


  • Best way to load application settings

    - by enzom83
    A simple way to keep the settings of a Java application is a text file with a ".properties" extension, containing the identifier of each setting associated with a specific value (this value may be a number, string, date, etc.). C# uses a similar approach, but the text file must be named "App.config". In both cases, in source code you must initialize a specific class for reading settings: this class has a method that returns the value (as a string) associated with the specified setting identifier.

        // Java example
        Properties config = new Properties();
        config.load(...);
        String valueStr = config.getProperty("listening-port");
        // ...

        // C# example
        NameValueCollection setting = ConfigurationManager.AppSettings;
        string valueStr = setting["listening-port"];
        // ...

    In both cases we should parse the strings loaded from the configuration file and assign the converted values to the related typed objects (parsing errors could occur during this phase). After the parsing step, we must check that the setting values belong to a specific domain of validity: for example, the maximum size of a queue should be a positive value, some values may be related (example: min < max), and so on.

    Suppose that the application should load the settings as soon as it starts; in other words, the first operation performed by the application is to load the settings. Any invalid values for the settings must be replaced automatically with default values; if this happens to a group of related settings, those settings are all set to default values. The easiest way to perform these operations is to create a method that first parses all the settings, then checks the loaded values, and finally sets any default values. However, maintenance is difficult with this approach: as the number of settings increases while developing the application, it becomes increasingly difficult to update the code.

    In order to solve this problem, I had thought of using the Template Method pattern, as follows:

        public abstract class Setting
        {
            protected abstract bool TryParseValues();
            protected abstract bool CheckValues();
            public abstract void SetDefaultValues();

            /// <summary>
            /// Template Method
            /// </summary>
            public bool TrySetValuesOrDefault()
            {
                if (!TryParseValues() || !CheckValues())
                {
                    // parsing error or domain error
                    SetDefaultValues();
                    return false;
                }
                return true;
            }
        }

        public class RangeSetting : Setting
        {
            private string minStr, maxStr;
            private byte min, max;

            public RangeSetting(string minStr, string maxStr)
            {
                this.minStr = minStr;
                this.maxStr = maxStr;
            }

            protected override bool TryParseValues()
            {
                return (byte.TryParse(minStr, out min) && byte.TryParse(maxStr, out max));
            }

            protected override bool CheckValues()
            {
                return (0 < min && min < max);
            }

            public override void SetDefaultValues()
            {
                min = 5;
                max = 10;
            }
        }

    The problem is that this way we need to create a new class for each setting, even for a single value. Are there other solutions to this kind of problem? In summary:

    - Easy maintenance: for example, the addition of one or more parameters.
    - Extensibility: a first version of the application could read a single configuration file, but later versions may offer a multi-user setup (admin sets up a basic configuration, users can set only certain settings, etc.).
    - Object-oriented design.
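    One variation on the same idea would be to capture the parse/check/default contract once in a generic class driven by delegates, so that a new setting is one declaration instead of one subclass. A minimal sketch (illustrative names, not production code; grouped/related settings would still need something extra):

        using System;

        // Matches the shape of int.TryParse, byte.TryParse, etc.
        public delegate bool TryParser<T>(string raw, out T value);

        public class Setting<T>
        {
            private readonly string raw;
            private readonly TryParser<T> tryParse;
            private readonly Func<T, bool> check;
            private readonly T defaultValue;

            public T Value { get; private set; }

            public Setting(string raw, TryParser<T> tryParse, Func<T, bool> check, T defaultValue)
            {
                this.raw = raw;
                this.tryParse = tryParse;
                this.check = check;
                this.defaultValue = defaultValue;
            }

            // Same contract as TrySetValuesOrDefault in the Template Method version.
            public bool TrySetValueOrDefault()
            {
                T parsed;
                if (tryParse(raw, out parsed) && check(parsed))
                {
                    Value = parsed;
                    return true;
                }
                Value = defaultValue; // parsing error or domain error
                return false;
            }
        }

        class Demo
        {
            static void Main()
            {
                // One declaration per setting instead of one class per setting.
                var port = new Setting<int>("8080", int.TryParse, p => p > 0 && p < 65536, 80);
                Console.WriteLine("ok: {0}, port: {1}", port.TrySetValueOrDefault(), port.Value);
            }
        }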


  • IIS7 dynamic_compression_not_success Reason 12

    - by Peter Oehlert
    So, I'm a bit of an IIS7 n00b, but I've used most of the old IIS systems going back to 3. I'm trying to turn on dynamic compression, and it's working, mostly. It doesn't work for my ADO.NET Data Service (Astoria) requests, batched or not. I found the FREB tracing, which was really helpful. What I come up with for unbatched requests is that it returns reason code 12, NO_MATCHING_CONTENT_TYPE. OK, so I don't have the matching MIME type specified; that's easy. Except this is what I have in my web.config (which I think is correct, but maybe not):

        <httpCompression dynamicCompressionDisableCpuUsage="100"
                         dynamicCompressionEnableCpuUsage="100"
                         noCompressionForHttp10="false"
                         noCompressionForProxies="false"
                         noCompressionForRange="false"
                         sendCacheHeaders="true"
                         staticCompressionDisableCpuUsage="100"
                         staticCompressionEnableCpuUsage="100">
            <dynamicTypes>
                <clear/>
                <add mimeType="*/*" enabled="true" />
            </dynamicTypes>
            <staticTypes>
                <clear/>
                <add mimeType="*/*" enabled="true" />
            </staticTypes>
        </httpCompression>
        <urlCompression doDynamicCompression="true"
                        doStaticCompression="true"
                        dynamicCompressionBeforeCache="false" />

    Now I think that this means it should compress any request that includes the Accept-Encoding: gzip header. I'd love to know what others might think here.

    My Fiddler trace:

        GET /SecurityDataService.svc/GetCurrentAccount HTTP/1.1
        Accept-Charset: UTF-8
        Accept-Language: en-us
        dataserviceversion: 1.0;Silverlight
        Accept: application/atom+xml,application/xml
        maxdataserviceversion: 1.0;Silverlight
        Referer: http://sdev03/apptestpage.aspx
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; WOW64; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 3.5.30729; InfoPath.2; .NET CLR 3.0.30729; OfficeLiveConnector.1.4; OfficeLivePatch.1.3)
        Host: sdev03
        Connection: Keep-Alive
        Cookie: .ASPXAUTH=<snip>

        HTTP/1.1 200 OK
        Cache-Control: no-cache
        Content-Type: application/atom+xml;charset=utf-8
        Server: Microsoft-IIS/7.0
        DataServiceVersion: 1.0;
        X-AspNet-Version: 2.0.50727
        X-Powered-By: ASP.NET
        Date: Mon, 22 Mar 2010 22:29:06 GMT
        Content-Length: 2726

        <?xml version="1.0" encoding="utf-8" standalone="yes"?>
        *** <snip> removed ***


  • Set up basic Windows Authentication to connect to SQL Server 2008 from a small, trusted network

    - by Margaret
    I'm guessing that this is documented somewhere on Microsoft's site, but thus far I haven't found it. I'm trying to set up a Windows Server 2008 box to have SQL Server 2008 with Windows Authentication (Mixed Mode, actually, but anyway) for work. We have a number of client machines that will need access to the databases, and I would like to keep configuration as simple as feasible. Here's what I've done so far:

    - Install SQL Server 2008, selecting Mixed Mode.
    - Create a new 'Standard' (rather than Administrator) Windows login entitled "UserLogin" (with intent to use it as the access account).
    - Create a SQL Server login for Server\UserLogin and assign it 'Windows Authentication'.
    - Log in as UserLogin, check that I'm able to connect to SQL Server using Windows Authentication, then log out again.

    Start on the first client (Windows XP SP2, SQL Server 2005):

    - Run C:\WINDOWS\system32\rundll32.exe keymgr.dll, KRShowKeyMgr
    - Click "Add", enter the server name in the box, Server\UserLogin in the Username field, and UserLogin's password in the Password field. Click "OK" then "Close".
    - Attempt to access SQL Server 2005 using Windows Authentication. Succeed. Confetti!

    Start on the second client (Windows 7, SQL Server 2008):

    - Run C:\WINDOWS\system32\rundll32.exe keymgr.dll, KRShowKeyMgr
    - Click "Add", enter the server name in the box, Server\UserLogin in the Username field, and UserLogin's password in the Password field. Click "OK" then "Close".
    - Attempt to access SQL Server 2008 using Windows Authentication. Receive an error: "Login failed. The login is from an untrusted domain and cannot be used with Windows authentication."
    - Assume that this translates to "You can't have two connections from the same account" (yes, I know that doesn't make sense, but I'm a bit like that).
    - Go back to the server, create a second Windows account, give it SQL Server rights.
    - Go back to the second client, create a new passkey for the second login, try logging in again. Continue to receive the same error.

    Is this all overly complex and there's an easy way to do what I'm trying to accomplish? Or am I missing some ultra-obvious step that would make everything behave as desired? Most of the stuff that comes up when I try to Google seems to be along the lines of "My ASP.NET application isn't working!", which obviously isn't all that much use.


  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I've got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server, and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to the data within a SQL database. Sometimes users inadvertently delete files from within our application, and we need to restore them. We have all the information needed to restore the files, including the file name, the folder the file was stored in, and the exact date the file was deleted. It is easy for me to restore the file from within the DPM console, since we have a recovery point created every day: I simply go to the day before the delete, browse to the proper folder, and restore the file.

    The problem is that using the DPM console, the cumbersome wizard requires about 20 mouse clicks to restore a single file, and it takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs hundreds of files restored: it takes all day of redundant mouse clicks. Therefore, I want to use a PowerShell script (and I'm a novice at PowerShell) to automate this process. I want to be able to create a script to which I pass a file name, a folder, a recovery point date (and a protection group/server name if needed) and simply have the file restored back to its original location, with some sort of success/failure notification.

    I thought it was a simple, basic task for a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx that I have tried to follow, but it doesn't accomplish what I really want to do (it's too simplistic), and there are errors in the sample code. Therefore, I would like to get some help writing a script to restore these files. An example of the known values to restore the data:

        DPM Server: BACKUP01
        Protection Group: Document Repository
        Data Protected Server: FILER01
        File Path: R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf
        Date Deleted: 8/2/2010 (last recovery point = 8/1/2010)

    Bonus points: if you can help me not only create this script, but also show me how to automate it by providing a text file with the above information that the PowerShell script loops through, or even better, a version that is able to query our SQL server for the needed data, then I would be more than willing to pay for this development.


  • Business Owners - What Remote Desktop Solution Do You Use To Service Your Clients PCs?

    - by Sootah
    Howdy, fellow computer geeks. I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service the machines in the office, we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely will we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways we can service our clients remotely whenever possible.

    What we're in need of is a solid remote desktop application that will be incredibly easy for our customers to connect to, as well as robust enough that we don't need the client babysitting the computer during the entire repair. Ideally, I would like to use a web-based solution so that we don't have to walk the customers through installing, connecting, and configuring it over the phone. That would be unacceptable given the level of service they are used to. Effectively, we'd want them to be able to just go to a URL, enter a PIN or something, and then be connected and ready to rumble. (Obviously, the option to just email them a link that would do all this for them is what we'd be aiming for.)

    Along with the ease-of-use factor, we would need the product not to require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot, so auto-reconnect is an absolute must.

    The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately end up saving far more than that per month in fuel costs alone, I'd like to explore any other options out there that I may not have come across.

    So: are there any equally good products out there? If so, what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost?

    Thanks in advance for your help!
    -Sootah

    Read the article

  • Which package should I choose if I want to install virtualenv for Python?

    - by hugemeow
    pip search returns so many matches that I am confused about which package I should choose to install. Should I install only virtualenv, or had I better also install virtualenv-commands and the like? I really don't know exactly what virtualenv-commands is...

    mirror0@lab:~$ pip search virtualenv
    virtualenvwrapper - Enhancements to virtualenv
    virtualenv - Virtual Python Environment builder
    veh - virtualenv for hg
    pyutilib.virtualenv - PyUtilib utility for building custom virtualenv bootstrap scripts.
    envbuilder - A package for automatic generation of virtualenvs
    virtstrap-core - A bootstrapping mechanism for virtualenv+pip and shell scripts
    tox - virtualenv-based automation of test activities
    virtualenvwrapper-win - Port of Doug Hellmann's virtualenvwrapper to Windows batch scripts
    everyapp.bootstrap - Enhanced virtualenv bootstrap script creation.
    orb - pip/virtualenv shell script wrapper
    monupco-virtualenv-python - monupco.com registration agent for stand-alone Python virtualenv applications
    virtualenvwrapper-powershell - Enhancements to virtualenv (for Windows). A clone of Doug Hellmann's virtualenvwrapper
    RVirtualEnv - relocatable python virtual environment
    virtualenv-clone - script to clone virtualenvs.
    virtualenvcontext - switch virtualenvs with a python context manager
    lessrb - Wrapper for ruby less so that it's in a virtualenv.
    carton - make self-extracting virtualenvs
    virtualenv5 - Virtual Python 3 Environment builder
    clever-alexis - Clever redhead girl that builds and packs Python project with Virtualenv into rpm, deb, etc.
    kforgeinstall - Virtualenv bootstrap script for KForge
    pypyenv - Install PyPy in virtualenv
    virtualenv-distribute - Virtual Python Environment builder
    virtualenvwrapper.project - virtualenvwrapper plugin to manage a project work directory
    virtualenv-commands - Additional commands for virtualenv.
    rjm.recipe.venv - zc.buildout recipe to turn the entire buildout tree into a virtualenv
    virtualenvwrapper.bitbucket - virtualenvwrapper plugin to manage a project work directory based on a BitBucket repository
    tg_bootstrap - Bootstrap a TurboGears app in a VirtualEnv
    django-env - Automaticly manages virtualenv for django project
    virtual-node - Install node.js into your virtualenv
    django-environment - A plugin for virtualenvwrapper that makes setting up and creating new Django environments easier.
    vip - vip is a simple library that makes your python aware of existing virtualenv underneath.
    virtualenvwrapper.django - virtualenvwrapper plugin to create a Django project work directory
    terrarium - Package and ship relocatable python virtualenvs
    venv_dependencies - Easy to install any dependencies in a virtualenviroment(without making symlinks by hand and etc...)
    virtualenv-sh - Convenient shell interface to virtualenv
    virtualenvwrapper.github - Plugin for virtualenvwrapper to automatically create projects based on github repositories.
    virtualenvwrapper.configvar - Plugin for virtualenvwrapper to automatically export config vars found in your project level .env file.
    virtualenvwrapper-emacs-desktop - virtualenvwrapper plugin to control emacs desktop mode
    bootstrapper - Bootstrap Python projects with virtualenv and pip.
    virtualenv3 - Obsolete fork of virtualenv
    isotoma.depends.zope2_13_8 - Running zope in a virtualenv
    virtual-less - Install lessc into your virtualenv
    virtualenvwrapper.tmpenv - Temporary virtualenvs are automatically deleted when deactivated
    isotoma.plone.heroku - Tooling for running Plone on heroku in a virtualenv
    gae-virtualenv - Using virtualenv with zipimport on Google App Engine
    pinvenv - VirtualEnv plugins for pin
    isotoma.depends.plone4_1 - Running plone in a virtualenv
    virtualenv-tools - A set of tools for virtualenv
    virtualenvwrapper.npm - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any npm installed globaly when the venv is activated
    d51.django.virtualenv.test_runner - Simple package for running isolated Django tests from within virtualenv
    difio-virtualenv-python - Difio registration agent for stand-alone Python virtualenv applications
    VirtualEnvManager - A package to manage various virtual environments.
    virtualenvwrapper.gem - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any gems installed when the venv is activated
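
    For the question itself: the plain virtualenv package is the environment builder; everything else in that list is an optional wrapper or plugin (virtualenvwrapper being the most popular convenience layer, and virtualenv-commands just a third-party add-on). A minimal sanity check, assuming a Unix shell with pip available:

    pip install virtualenv          # the only package you actually need
    virtualenv myenv                # create an isolated environment in ./myenv
    . myenv/bin/activate            # enter it; the prompt gains a (myenv) prefix
    pip install some-package        # hypothetical package name; installs into myenv only
    deactivate                      # leave the environment

    Nothing else in that search output is required for virtualenv to work.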

    Read the article

  • What Remote Desktop Solution Do You Use To Service Your Clients' PCs? [closed]

    - by Sootah
    Possible Duplicate: What’s the best Remote Desktop Application?

    I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service machines in the office, we generally have one of our techs pick the computer up while they are out and about and bring it back with them; only rarely will we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways we can service our clients remotely whenever possible. What we need is a solid remote desktop application that is incredibly easy for our customers to connect to, and robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like a web-based solution so that we don't have to walk customers through installing, connecting, and configuring it over the phone - that would be unacceptable given the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then be connected and ready to rumble. (Obviously the option to just email them a link that does all this for them is what we'd be aiming for.) Along with ease of use, we need the product to require no further intervention from the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot - so auto-reconnect is an absolute must. The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately save far more than that per month in fuel costs alone, I'd like to explore any other options out there that I may not have come across. Are there any equally good products out there? If so, what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost?

    Read the article

  • Why Does Wireless Gear Degrade Over Time?

    - by bahamat
    I saw this originally posted on Slashdot, but their comment format is not conducive to actually getting a correct answer. Having directly experienced this phenomenon myself, I'm now asking here, where I think I can actually get an educated answer. Here's the original question:

    Lately I have replaced several home wireless routers because their signal strength had degraded. These devices, when new (2+ years ago), would cover an entire house. Over the years, the strength seems to decrease to the point where it might only cover one or two rooms. Of the three that I have replaced for friends, I have not found a common brand, age, etc. It just seems that over time, the signal strength decreases. I know that routers are cheap and easy to replace, but I'm curious what actually causes this. I would have assumed that the components would either work or not work; we would either have a full signal or no signal. I am not an electrical engineer and I can't find the answer online, so I'm reaching out to you. Can someone explain how a transmitter can slowly go bad?

    Common (incorrect, but repeated) answers from Slashdot include:

    - "Back then your neighbors didn't have wifi; now they do, and they're drowning you out." I don't think this is likely, because replacing the access point with a new one using the same frequencies solves the problem.
    - "Older devices had low transmit power. Crank that baby." As mentioned by a FreeBSD wireless developer, this violates regulations and can physically damage the equipment. It was also mentioned that higher power in one direction is not necessarily reciprocated: it shows more bars, but not necessarily a better connection.
    - "Manufacturers make cheap crap designed to wear out." This one may actually be legitimate, although it is overly broad. What specifically causes damage over time? Heat? Excessive power?

    So can anyone provide an informed answer on this? Is there any way to fix these older access points?

    Read the article

  • What is a good method to solve cabal install problems?

    - by sp3ctum
    I've used the cabal package manager to install Haskell libraries and new projects that I've cloned from various repositories. More often than not, I run into problems. Most projects make installation look super easy, but in my case that's not always true - sometimes they are very hard to get running. Some are so hard, in fact, that I've lost interest in a project solely because I couldn't install it. So instead of complaining, I'd like to ask what I should do to improve this situation. I'd like to use my most recent problem as an example. I'm interested in trying out the Gitit project, a promising-looking personal wiki that runs on various version control systems. Here's what I've done: clone from GitHub, then run cabal install in the project directory like I'm told on the project install page:

    mika@eka:~/git/gitit$ ls
    BLUETRIP-LICENSE CHANGES HCAR-gitit.tex LICENSE Network README.markdown
    RELANN-0.6.1 Setup.lhs TANGOICONS YUI-LICENSE data expireGititCache.hs
    gitit.cabal gitit.hs plugins
    mika@eka:~/git/gitit$ cabal install
    Resolving dependencies...
    cabal: cannot configure happstack-server-7.0.7. It requires base64-bytestring ==1.0.*
    For the dependency on base64-bytestring ==1.0.* there are these packages:
    base64-bytestring-1.0.0.0. However none of them are available.
    base64-bytestring-1.0.0.0 was excluded because gitit-0.10 requires base64-bytestring ==0.1.*
    mika@eka:~/git/gitit$

    So now I'm thinking: well, I'll install happstack-server on its own; maybe that will work:

    mika@eka:~/git/gitit$ cabal install happstack-server
    Resolving dependencies...
    Warning: happstack-server.cabal: Ignoring unknown section type: test-suite
    Configuring happstack-server-7.0.7...
    cabal: At least the following dependencies are missing:
    blaze-html ==0.5.*, hslogger >=1.0.2, monad-control ==0.3.*, network >=2.2.3,
    sendfile >=0.7.1 && <0.8, system-filepath >=0.3.1, text >=0.10 && <0.12,
    threads >=0.5, transformers-base ==0.4.*
    cabal: Error: some packages failed to install:
    happstack-server-7.0.7 failed during the configure step. The exception was:
    ExitFailure 1

    So it looks like some dependencies are missing. But isn't installing these dependencies the whole point of using cabal in the first place? What should I do - file bug reports (against which project?), install the dependencies manually, or something else? Bonus points for explaining what causes these kinds of problems.
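
    As a hedged sketch of what usually helps here (assuming a cabal-install of roughly that era): the first failure is a genuine version conflict - gitit-0.10 pins base64-bytestring ==0.1.* while happstack-server-7.0.7 wants ==1.0.* - so installing the dependencies by hand cannot satisfy both. The usual first moves are:

    cabal update                        # refresh the Hackage index so the solver sees current versions
    cabal install gitit                 # install the released gitit, letting the solver pick compatible deps
    # or, to build the git checkout itself:
    cabal install --only-dependencies   # resolve and build just this package's dependencies
    cabal install

    If --only-dependencies isn't recognized, upgrading cabal-install itself (cabal install cabal-install) is typically step zero. As to the cause: cabal at that time resolved versions per install command with no sandboxing, so a repository's development HEAD often pins version ranges that no longer coexist with what is on Hackage - exactly the conflict the first error message reports.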

    Read the article

  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
    I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a content management system. I have never dug into CMS before, so I'm open to programs that are easy to use; I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point. I have never done an installation with PHP before, and I think I'm getting familiar with how it works. The current situation is this: Microsoft's Web Platform Installer 2.0 installed FastCGI for me. I need to upgrade to PHP 5.3.1 for a CMS system, so I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work, but I am currently not having luck.

    THE SETUP

    *.php is a registered extension in IIS 6 for all websites (on Windows Server 2003). The application that it calls is C:\Windows\system32\inetsrv\fcgiext.dll, like it should be. The fcgiext.ini config has the proper lines:

    [Types]
    php=PHP

    [PHP]
    ext=C:\program files\PHP\php-cgi.exe

    The php.ini file also has the correct configs: all extensions are disabled, and I changed the correct things for FastCGI. Everything is registered correctly with the PATH variable. Everything is exactly how it should be. BUT when I load the info.php page from another computer, I get the following error:

    FastCGI Error
    The FastCGI Handler was unable to process the request.
    Error Details:
    * Section [PHP] not found in config file.
    * Error Number: 1413 (0x80070585).
    * Error Description: Invalid index.
    HTTP Error 500 - Server Error.
    Internet Information Services (IIS)

    A quick Google search suggests that I have it all set up correctly as far as the INIs go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
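
    Two things worth checking, offered as hedged guesses rather than a definitive fix. First, as I recall, Microsoft's FastCGI extension documents the handler section using an ExePath key rather than ext, and the [Types] mapping can be scoped to a single site, so a known-good fcgiext.ini is shaped roughly like this (the site ID 1 is a placeholder):

    [Types]
    php=PHP
    ;php:1=PHP   ; per-site variant, if the extension is mapped for only one site

    [PHP]
    ExePath=C:\Program Files\PHP\php-cgi.exe

    Second, on 64-bit Windows Server 2003 there are two copies of inetsrv (under C:\Windows\system32 and C:\Windows\SysWOW64), each with its own fcgiext.dll and fcgiext.ini; a "Section [PHP] not found" error is consistent with editing one copy while IIS loads the other, so confirm that the ini you edited sits next to the fcgiext.dll actually registered in the extension mapping.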

    Read the article

  • Splitting an internet connection between multiple separate subnetworks

    - by pythonian4000
    Problem: I have an internet connection that I want to split between four separate networks. My requirements are:

    - I need to be able to monitor the amount of bandwidth and data being used by each network, and notify or control as necessary.
    - The four networks should only be able to connect to the internet, not to each other.
    - My parents need to be able to operate it, so it needs a simple, preferably Windows-based GUI.

    Progress so far:

    Server: I have a mini-ITX server with six Gigabit ethernet ports - one for the ethernet internet connection, one for each of the four networks, and one for remote access to the server for administration.

    Bandwidth control: I spent a long time researching solutions here. The majority of the control systems/software I found could control bandwidth usage via QoS, but could not monitor or control the amount of data being used. Eventually I found SoftPerfect Bandwidth Manager, which has everything I need in terms of monitoring and control - per-interface quota management, usage statistics, a web interface for checking usage, and email notifications when quotas are exceeded. It is also Windows-based and has a simple GUI.

    Internet sharing: This is where I am having issues. I am currently using Windows XP Pro SP2 for the server (yes, I know this is far from ideal, but it's the only spare Windows OS I currently have). I can't use the built-in Internet Connection Sharing for several reasons: the upstream internet router has an IP of 192.168.0.1, which ICS clashes with, and I cannot change the router settings; and ICS can only share an internet connection with a single interface, but I have four. I have tried bridging the four network cards, but then the Bandwidth Manager cannot see the four individual interfaces - it only sees the bridge. I have tried setting up Dual DHCP DNS Server (and am having issues getting DHCP offers received by clients), but that would still require gateway software of some sort, which I have been unable to find. My current attempt is to use OpenVPN, with a server for the internet NIC and a separate client for each of the four networks. My thought is that I could bridge the OpenVPN TAP devices to each NIC, so that the Bandwidth Manager would control traffic from the bridge instead of the interface. I have not made much progress here, though - I've never used OpenVPN before.

    Questions:

    - Is there a Windows software package that does everything I need? (Unlikely, I know.)
    - Is there a Windows software package that will share internet between multiple NICs without bridging?
    - Are either of my above attempts feasible?
    - Would it help to have a newer/server version of Windows?
    - Is there a non-Windows alternative that is easy to use? (See the sketch below.)
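
    On that last question, a minimal sketch of the standard Linux approach to this exact topology - four LAN NICs NATed out one WAN NIC, with the LANs unable to reach each other. Interface names and addressing are assumptions:

    #!/bin/sh
    # eth0 = WAN (toward the 192.168.0.1 router), eth1..eth4 = the four networks
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    iptables -P FORWARD DROP                          # default deny: blocks LAN-to-LAN traffic
    for lan in eth1 eth2 eth3 eth4; do
        iptables -A FORWARD -i $lan -o eth0 -j ACCEPT # each LAN may reach the internet only
    done
    iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT  # return traffic

    Per-interface usage then falls out of the FORWARD rule byte counters (iptables -L FORWARD -v -x) or a tool such as vnstat, though that gives monitoring rather than the GUI-driven quota enforcement SoftPerfect provides.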

    Read the article

  • Persuading openldap to work with SSL on Ubuntu with cn=config

    - by Roger
    I simply cannot get this (a TLS connection to OpenLDAP) to work and would appreciate some assistance. I have a working OpenLDAP server on Ubuntu 10.04 LTS. It is configured to use cn=config, and most of the info I can find for TLS uses the older slapd.conf file :-( I've been largely following the instructions at https://help.ubuntu.com/10.04/serverguide/C/openldap-server.html plus stuff I've read here and elsewhere - which of course could be part of the problem, as I don't totally understand all of this yet! I have created an ssl.ldif file as follows:

    dn: cn=config
    add: olcTLSCipherSuite
    olcTLSCipherSuite: TLSV1+RSA:!NULL
    add: olcTLSCRLCheck
    olcTLSCRLCheck: none
    add: olcTLSVerifyClient
    olcTLSVerifyClient: never
    add: olcTLSCACertificateFile
    olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem
    add: olcTLSCertificateFile
    olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem
    add: olcTLSCertificateKeyFile
    olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem

    and I import it using the following command line:

    ldapmodify -x -D cn=admin,dc=mydomain,dc=com -W -f ssl.ldif

    I have edited /etc/default/slapd so that it has the following services line:

    SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"

    Every time I make a change, I restart slapd with /etc/init.d/slapd restart. The following command line to test the non-TLS connection works fine:

    ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \
      -b dc=mydomain,dc=com -H "ldap://mydomain.com" "cn=roger*"

    But when I switch to ldaps using this command line:

    ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \
      -b dc=mydomain,dc=com -H "ldaps://mydomain.com" "cn=roger*"

    this is what I get:

    ldap_url_parse_ext(ldaps://mydomain.com)
    ldap_create
    ldap_url_parse_ext(ldaps://mydomain.com:636/??base)
    ldap_sasl_bind
    ldap_send_initial_request
    ldap_new_connection 1 1 0
    ldap_int_open_connection
    ldap_connect_to_host: TCP mydomain.com:636
    ldap_new_socket: 3
    ldap_prepare_socket: 3
    ldap_connect_to_host: Trying 127.0.0.1:636
    ldap_pvt_connect: fd: 3 tm: -1 async: 0
    TLS: can't connect: A TLS packet with unexpected length was received..
    ldap_err2string
    ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

    Now if I check netstat -al I can see:

    tcp 0 0 *:www   *:* LISTEN
    tcp 0 0 *:ssh   *:* LISTEN
    tcp 0 0 *:https *:* LISTEN
    tcp 0 0 *:ldaps *:* LISTEN
    tcp 0 0 *:ldap  *:* LISTEN

    I'm not sure if this is significant as well... I suspect it is:

    openssl s_client -connect mydomain.com:636 -showcerts
    CONNECTED(00000003)
    916:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188:

    I think I've made all my certificates correctly, and here are the results of some checks. If I do this:

    certtool -e --infile /etc/ssl/certs/ldap_cacert.pem

    I get "Chain verification output: Verified."

    certtool -e --infile /etc/ssl/certs/mydomain.com_slapd_cert.pem

    gives "certtool: the last certificate is not self signed", but it otherwise seems OK? Where have I gone wrong? Surely getting OpenLDAP to run securely on Ubuntu should be easy and not require a degree in rocket science! Any ideas?
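
    One detail that jumps out, offered as a hedged suggestion rather than a confirmed diagnosis: an LDIF fed to ldapmodify normally needs a changetype: modify line and a "-" separator between each add operation; without them, the modify either fails to parse or does not apply all the attributes. If the olcTLS* attributes never actually landed in cn=config, slapd would still listen on 636 but fail every handshake, which matches the errors above. The usual form looks like this (same attribute names and paths as the file above):

    dn: cn=config
    changetype: modify
    add: olcTLSCACertificateFile
    olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem
    -
    add: olcTLSCertificateFile
    olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem
    -
    add: olcTLSCertificateKeyFile
    olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem

    What is actually set can then be confirmed with something like:

    ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config cn=config olcTLS*

    (the -Y EXTERNAL over ldapi:// form assumes the usual Ubuntu cn=config ACLs, under which the domain's cn=admin DN has no write access to cn=config).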

    Read the article

< Previous Page | 490 491 492 493 494 495 496 497 498 499 500 501  | Next Page >