Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • Getting the Internal Names of SharePoint List Fields

    - by Gino Abraham
    Over the last 2 weeks I was developing a tool to migrate a Lotus Notes database to SharePoint. The mapping between the Lotus Notes schema and the SharePoint list schema was done manually in an XML file for our tool. To map the columns we wanted the internal name of each field. There are quite a few ways to achieve this; I have explained a few below. If you want the internal names for one or two columns, you can get them by navigating to the list settings and clicking on the column name. Once you are in the column's details, check the query string of the page: the last item in the query string is the field's internal name. Replacing all "%5f" with '_' gives you the internal name. In my case there were more than 80 columns, so I used PowerShell to get the list of columns with details. Open Windows PowerShell and paste the following script after modifying the URL and list name:
      [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
      $site = new-object Microsoft.SharePoint.SPSite("http://yousitecolurl")
      $web = $site.OpenWeb()
      $list = $web.Lists["yourlist name"]
      $list.Fields | Format-Table Title, InternalName, TypeAsString
    I also found a tool on CodePlex which can generate a wrapper class for a list. The wrapper class will give you the GUID and internal name for all fields in the list. You can download the tool from http://imtech.codeplex.com/ Just enter the URL in the text box and hit Open. All the site content will be listed on the left-hand side; expand the list, right-click and select Generate Wrapper Class.

    Read the article

  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very distorted JSPs and servlets. Because we can't switch to a web framework easily, we have to keep JSPs and servlets, and now we are unsure how to include pages in one another and how to set up the jsp:useBean directives effectively. As a first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which should use the bean. The problem here is that each JSP consists of different sub- and sub-sub-JSPs that are included in one another. Here is a shortened extract (because reality is more complex): head, header, top, navigation, actionspanel, main, header, actionspanel, foot, footer. Moreover, each JSP (including the header and footer) uses dynamic data. For example, the title and the actionspanel can change on each page reload, or have links and labels that depend on the processing by the preceding servlet. I know that JSP include directives should only be used for static content and should be avoided for dynamic content, but here we have very large pages that consist of many parts. Now the core questions: Should I use one big bean for each page, so that each bean also holds data for the header and footer beside its core data, and each subsequently included JSP uses the same bean directive? For example: DirectoryJSP <- DirectoryBean, CompareJSP <- CompareBean. Or should I use one bean for each JSP, so that each bean only holds the data for one JSP and its own purpose? For example: DirectoryJSP <- DirectoryBean, HeaderJSP <- HeaderBean, FooterJSP <- FooterBean; CompareJSP <- CompareBean, HeaderJSP <- HeaderBean, FooterJSP <- FooterBean. In the second case: should the subordinate beans be members of the corresponding parent bean, so that only the parent bean is attached as an attribute to the request? Or should each bean be attached to the request?

    Read the article

  • Keyboard lights sometimes stay on after PC is powered off. Why?

    - by Vilx-
    Sometimes when I power off my PC some lights on the keyboard stay on. It's pretty random which ones, though I suspect it has something to do with which lights were on just before the shutdown (if I make sure they are all off while the PC is shutting down, then they stay off). Not that this would change anything, but it is annoying. Why does this happen and how can I avoid it? Added: I know that I can cut power to my computer completely and the lights will go off. That's not the point. Actually, I even want my keyboard to have power, because I use it to turn on my computer (the power button on the case is difficult to reach). I just want the lights to go off, or at least to understand why this happens. Oh, and it's a PS/2 keyboard.

    Read the article

  • A little tidbit on Team Build 2010 and error MSB3147

    - by Enrique Lima
    The problem? Performing a build on a ClickOnce solution would not be successful because setup.bin could not be located. Ok, now what? Researched from corner to corner: install, re-install, update. Found some interesting posts on fixing the issue, but most of them were focused on Team Foundation Server/Team Build 2008, and some others on 2005. The other interesting tidbit was the frequent suggestion to modify the registry to help Team Build find the bootstrapper. Background info: this was a migration I posted about a few days ago, from a 32-bit TFS implementation to a full 64-bit TFS implementation. Now, the project has binaries and dependencies on x86 (this piece of information became essential to moving from a failed build to a successful build). So, what's the fix? The trick in this case was to go back into the build type and check its properties/configuration. Upon further investigation, I found the following: edit the Build Definition, select Process, expand 3. Advanced, look for MSBuild Platform and switch it from Auto to X86. Ran the build, and success!

    Read the article

  • How to use private DNS to map a private IP to a "non-registered" domain name

    - by PapelPincel
    I would like to use a private DNS (Route 53 in our case) in order to map hostnames to EC2 instance private IP addresses. The hosted zone we are using for testing is not declared in any registrar (company-test.com.). There are different servers (Nagios, Puppet, ActiveMQ ...), all hosted in EC2, which means their IPs can change over time (restart, new instance launch...). It would be great if I could use DNS instead of the clients' /etc/hosts for mapping private IPs to internal domain names... The ActiveMQ server URL is activemq.company-test.com and it maps to (an A record with) the private IP address of the AMQ server. This URL is only reachable by other EC2 instances owned by the same AWS account. My question is how to configure the EC2 instances so they can reach the ActiveMQ server WITHOUT having to buy the domain company-test.com.
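    One possible approach (not from the original question): if company-test.com can be set up as a Route 53 private hosted zone associated with the VPC (a feature AWS added after this question was asked), each instance could upsert its own A record at boot, and no public registration is ever needed. Below is a minimal Python sketch using boto3; the hosted zone ID, record name and IP are placeholder assumptions, and the instance would need IAM permission for route53:ChangeResourceRecordSets.

      # Hypothetical sketch: upsert an A record in a private Route 53 hosted zone
      # so other instances in the same VPC can resolve activemq.company-test.com.
      # HOSTED_ZONE_ID and PRIVATE_IP are placeholders, not real values.
      import boto3

      route53 = boto3.client("route53")

      HOSTED_ZONE_ID = "Z0000000EXAMPLE"    # the private hosted zone for company-test.com.
      RECORD_NAME = "activemq.company-test.com."
      PRIVATE_IP = "10.0.0.12"              # e.g. read from instance metadata at boot

      route53.change_resource_record_sets(
          HostedZoneId=HOSTED_ZONE_ID,
          ChangeBatch={
              "Comment": "point the internal name at the current private IP",
              "Changes": [{
                  "Action": "UPSERT",
                  "ResourceRecordSet": {
                      "Name": RECORD_NAME,
                      "Type": "A",
                      "TTL": 60,
                      "ResourceRecords": [{"Value": PRIVATE_IP}],
                  },
              }],
          },
      )

    Instances inside the associated VPC then resolve activemq.company-test.com through the VPC resolver, while the name stays invisible to the public internet.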

    Read the article

  • I can't "unmaximize" my window

    - by Beska
    I've got a Windows app (I don't think it matters which one, but in case you're wondering, it's SQL Server Profiler) that I can't put back into "windowed" mode. I can maximize or minimize it, either by right-clicking on the task bar and selecting maximize... or, if the window is already maximized, I can click the minimize button to minimize it... The problem is when I click the middle button... the one that toggles between maximized and "windowed" mode: switching to windowed mode just makes it disappear. The program is still running fine, and I can bring it back up (maximized) by selecting it in the task bar. It doesn't seem to be hanging out on any of the edges of the screen... as far as I can tell, it's just not there. And, of course, the app is "smart" enough to remember its status, so restarting the app doesn't help. Has anyone seen this? Know how to fix it?

    Read the article

  • Debugging the NetBeans Platform

    - by Geertjan
    Once you've set up the NetBeans Platform sources as your NetBeans Platform, you're able to debug the NetBeans Platform itself. That's an occasional question (certainly not a frequent question) on the mailing list and in NetBeans Platform courses: "Is it possible to debug the NetBeans Platform?" Well, here's how: Firstly, set up the NetBeans Platform sources as your NetBeans Platform. Now, open the NetBeans module where you'd like to place a breakpoint in NetBeans IDE. That in itself is the hardest part of this task, i.e., you know you want to debug the NetBeans Platform, but have no idea where to place your breakpoint. One way to figure that out, from 7.1 onwards, is to take a visual snapshot of the NetBeans Platform and then analyze that snapshot in NetBeans IDE. To do this, right-click a module that you've set as using the NetBeans Platform sources as your NetBeans Platform and then choose Debug. The application, i.e., the NetBeans Platform, including your custom module, starts up and you'll see NetBeans IDE in debug mode together with your NetBeans Platform application. Notice there's a new toolbar button (new in NetBeans IDE 7.1) that resembles an orange camera. Click that button and the IDE creates a visual snapshot of the running application, which in this case is the NetBeans Platform. When you click components in the visual snapshot, the Navigator and Properties windows display information about the related GUI component. By clicking the components, you can end up identifying the component you'd like to debug and even the module where it is found. Open that module. Set a breakpoint on the line of interest. Right-click the module again and choose Debug. A debug session starts, and when the breakpoint is hit, the debugger in the IDE will open and there you can step through the NetBeans Platform sources.

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept on reading tweets about Lean Coffee that had happened earlier that morning. It intrigued me and I figured in for a penny in for a pound, and set my alarm for 6:45am. Following the award night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast, set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place as, although the bar was empty, there was a table with post-its and pens! This MUST be the place! The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its that are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions from getting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start! Make Melly Happy Following Lean Coffee was real coffee, and much needed that was! The first keynote of the day was “Let’s help Melly (Changing Work into Life)” by Jurgen Appelo. Draw lines to track happiness This was a very interesting presentation, and set the day nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. So he started by showing a photo of an employee ‘Melly’ who looked happy enough. He then stated that she looked happy but actually hated her job. In fact 50% of Americans hate their jobs. He went on to say that, the world over, 50% of people hate their jobs. Jurgen talked about many ways to reduce the feedback cycle, not only of the project, but also of the people management. He covered ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day) and kudo boxes (to compliment a colleague for good work). All of these (and more) ideas stimulate conversation amongst the team and lead to early detection of issues and investigation of solutions. I’ve massively simplified Jurgen’s keynote and have certainly not done it justice, so I will post a link to the video once it’s available. Following more coffee, the next talk was “How releasing faster changes testing” by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but just adding new people to a team at such a critical time can be more of a drain on resources than they add. The alternative is to have the whole team share responsibility for faster delivery, so the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason that the entire team cannot do exploratory testing on the product.
    This links nicely with the Developer Exploratory testing presented by Sigge on Day 1, and it is certainly something that my team are really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release, what delays a release? All good solid questions that can be answered. Alex suggested that perhaps the product doesn’t need to be fully tested. Doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the ‘right stuff’! Determine a set of tests that are ‘face saving’ or ‘smoke’ tests. These tests cover the core functionality of the product and aim to prevent major embarrassment if these areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added: do a bit of work -> release, do some more work -> release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced. Red Pill, Blue Pill The second keynote of the day was “Adaptation and improvisation – but your weakness is not your technique” by Markus Gartner and proved to be another very good presentation. It started off quoting lines from the Matrix which relate to adapting, improvising, realisation and mastery. It had a lot of nerds in the room smiling! Markus went on to explain how through deliberate practice (and a lot of it!) you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me. It involved pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here! Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that “Testers Must Code”. Whilst it is true that to be a tester you don’t need to code, it is becoming more common that there is a crossover happening where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code, so this helped to make that clear. “Extending Continuous Integration and TDD with Continuous Testing” by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team, so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often or reducing the test time, but this then increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code and run them in the background so you can continue working. This is a really nice idea, but to do this, your tests must be fast, independent and reliable.
    The latter two should be the case anyway, and the first is ideal, but hard! Jason made several suggestions for making tests fast. Firstly, keep the scope of each test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight, or at any rate outside of the CT system. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting and I’ve already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn’t work, but we will look at whether we can make this work because this has the potential to be a mini-game-changer for us. Using the wrong data Gojko’s Hierarchy of Quality The final keynote of the day was “Reinventing software quality” by Gojko Adzic. He opened the talk with the statement “We’ve got quality wrong because we are using the wrong data”! Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it’s important. Why spend time fixing issues that the customer just wouldn’t care about and releasing months later because of this? Surely it’s better to release now and get customer feedback? This was another reference to the idea that it’s better to build the right thing wrong than the wrong thing right. Get feedback early to make sure you’re making the right thing. Gojko then showed something which was very analogous to Maslow’s hierarchy of needs: Successful – does it contribute to the business? Useful – does it do what the user wants? Usable – does it do what it’s supposed to without breaking? Performant/Secure – is it secure / is the performance acceptable? Deployable / Functionally ok – can it be deployed without breaking? He then explained that User Stories should focus on change. In other words they should focus on the user’s needs, not the user’s process. Describe what the change will be and how that change will happen, then measure it! Networking and Beer Following the day’s closing keynote, there were drinks and nibbles for the ‘Networking’ evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software! Without saying much, and asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he ‘bought’ me a beer! My Takeaway Triple from Day 2: (1) release small and release often to minimize issues creeping in and get faster feedback from ‘the real world’; (2) focus on issues that the customers care about, not what we think is important; (3) it’s okay to disagree with someone, even if they are well-respected agile testing gurus; that’s how discussion and learning happens!

    Read the article

  • Need a free/open-source network monitoring tool for an office LAN

    - by Amit Ranjan
    I know there must be a lot of similar questions on SU. Let me explain my setup first. I have 4-5 PCs, laptops and a few Android phones in my office. To get them on a network, I have a UTStarCom WA3002G1 ADSL2+ router with a landline broadband connection which has nothing to do with any PC except for the configuration settings. The broadband channel is always on; we just need to switch on the router and the internet is ready for us. No internet connection sharing is done via any PC. I have a limited 20GB monthly plan, which is consumed in 10-20 days, depending upon the download requirements. So in the above case, I need some suggestions from you: How do I monitor my internet bandwidth, along with the connected systems, in real time? Is any free, open-source tool available? Are there tweaks/changes I can make on the PCs to save bandwidth, as my ISP does not have any unlimited plan? The PCs and laptops run Windows XP and/or Windows 7; tools for either platform are welcome.

    Read the article

  • "Don't do programming after a few years of starting your career". Is this fair advice?

    - by Muhammad Yasir
    I am a somewhat experienced developer with approximately 5 years of experience in PHP, somewhat less in Java and C#, and I am trying to learn some Python nowadays. Since the start of my career as a programmer I have been told every now and then by fellow programmers that programming is only suitable for the first few years of a career (most of them take it as 5 years) and that one must change direction after that. The reasons they present include the headaches and pressures associated with programming. They also say that programmers are less social and don't usually like to give time to their families, etc., and especially "Oh come on, you can not do programming your entire life!" I am somewhat confused here and need to ask others about it. If I leave programming, then what do I do?! I guess teaching may be a good option in this case, but it would probably require first earning a PhD degree. It may also be noteworthy that in my country (Pakistan) the life of a programmer is not very good, in that they must normally give 2-3 extra hours in the office to accomplish urgent programming tasks. I have a sense that the situation is somewhat similar in other countries and regions as well. Do you think it is fair advice to change careers from programming to something else after spending 5 years in this field? UPDATE Oh wow... I never knew people can have 40+ years of experience in this field. I am both excited and amazed seeing that people have been doing it since 1971... That means 15 years before my birth! It is nice to be able to talk to such experienced people; we don't get such a chance here in Pakistan.

    Read the article

  • Configuring WebLogic Server 10.3.6 from 32-bit mode to 64-bit mode

    - by Ekta Malik
    This post pertains to the configuration of WebLogic Server from 32-bit mode to 64-bit mode on Solaris OS. It applies in case you have WLS 10.3.6 running in 32-bit mode while the JDK being used is installed for 64-bit mode [on Solaris OS, a 64-bit JDK installation comprises installing the 32-bit JDK followed by a patch for the 64-bit JDK]. Verification of the mode being used: one can verify the mode of WebLogic Server in the following ways. Either check the commonEnv.sh script located at $MIDDLEWARE_HOME/wlserver_10.3/common/bin, where $MIDDLEWARE_HOME refers to the install directory of Middleware, and look for the parameters SUN_ARCH_DATA_MODEL and JAVA_USE_64BIT in the file; for 32-bit mode, the parameters would appear as shown below: SUN_ARCH_DATA_MODEL="32" JAVA_USE_64BIT=false. Or check the server console logs to see which JDK is being used during start-up, or check which JDK is used by the running WebLogic Server process. Configuration steps: Take a backup of the commonEnv.sh script located at $MIDDLEWARE_HOME/wlserver_10.3/common/bin. Modify the commonEnv.sh script so that the values are 64 and true respectively for 64-bit mode: SUN_ARCH_DATA_MODEL="64" JAVA_USE_64BIT=true. Restart the WebLogic server. One can confirm that the JDK being used is 64-bit by looking at the WebLogic console logs during server start-up or by looking at the running process.

    Read the article

  • How to evaluate a user against optimal performance?

    - by Alex K
    I have trouble coming up with a system for assigning a rating to a player's performance. Well, technically there is a trivial rating system, but I don't like it because it would mean assigning negative scores, which I think most players will be discouraged by. The problem is that I only know the ideal number of actions to get the desired result. The worst case is an infinite number of actions, so there is no obvious scale. The trivial way I referred to above is to take score = (#optimal-moves - #player-moves), with the ideal score being zero. However, psychologically people like big numbers. No one wants to win by getting a mark of 0. I wonder if there is a system that someone else has come up with before to solve this problem? Essentially I wish to score the players based on how close they've come to the ideal solution. Different challenges will have different optimal numbers of actions, so the scoring system needs to take that into account, e.g. Challenge 1 - max 10 points, Challenge 2 - max 20 points. I don't mind giving the players negative scores if they've performed exceptionally badly; I just don't want all scores to be <= 0.
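    One common shape for such a score (an illustration, not something from the original question) is to award the full per-challenge maximum at the optimal move count and decay toward zero, never below it, as the player takes more moves. A minimal Python sketch; the max_points default and the 1/x decay curve are assumptions:

      # Hypothetical scoring sketch: full marks at the optimal move count, smoothly
      # decaying toward zero (never negative) as the player uses more moves.
      def score(optimal_moves: int, player_moves: int, max_points: int = 100) -> int:
          if player_moves <= optimal_moves:
              return max_points                  # optimal (or better) play
          # ratio < 1 once the player exceeds the optimum; the score shrinks toward 0
          ratio = optimal_moves / player_moves
          return round(max_points * ratio)

      print(score(10, 10))    # 100 -> perfect
      print(score(10, 15))    # 67
      print(score(10, 100))   # 10 -> poor play, but still a positive number

    Per-challenge maximums (10 points, 20 points, ...) drop in as the max_points argument, and a steeper curve such as the ratio squared punishes wasted moves harder if needed.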

    Read the article

  • Mac Disk-Utility input/output error

    - by Michelle
    A couple of other people have posted about this, but my specific problem has not been addressed. For months I have been backing up DVDs and home movies with no problem, then all of a sudden I get an "input/output" error. Yes, I have cleaned the disks. Actually I have tried 8 different ones - they are not all bad, so it's obviously my computer. I have done a scan and cleaned up the HD a bit just in case, but nothing is helping. I don't want to download other programs since this one works but seems to be having a problem. Any ideas?

    Read the article

  • rkhunter: what is the right way to handle warnings?

    - by zuba
    I googled a bit and checked out the first two links it found: http://www.skullbox.net/rkhunter.php http://www.techerator.com/2011/07/how-to-detect-rootkits-in-linux-with-rkhunter/ They don't mention what I should do in the case of warnings such as:
      Warning: The command '/bin/which' has been replaced by a script: /bin/which: POSIX shell script text executable
      Warning: The command '/usr/sbin/adduser' has been replaced by a script: /usr/sbin/adduser: a /usr/bin/perl script text executable
      Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne-Again shell script text executable
      Warning: The file properties have changed: File: /usr/bin/lynx Current hash: 95e81c36428c9d955e8915a7b551b1ffed2c3f28 Stored hash : a46af7e4154a96d926a0f32790181eabf02c60a4
    Q1: Are there more extended HowTos which explain how to deal with the different kinds of warnings? And the second question: were my actions sufficient to resolve these warnings? a) To find the package which contains the suspicious file, e.g. it is debianutils for the file /bin/which:
      ~ > dpkg -S /bin/which
      debianutils: /bin/which
    b) To check the debianutils package checksums:
      ~ > debsums debianutils
      /bin/run-parts OK
      /bin/tempfile OK
      /bin/which OK
      /sbin/installkernel OK
      /usr/bin/savelog OK
      /usr/sbin/add-shell OK
      /usr/sbin/remove-shell OK
      /usr/share/man/man1/which.1.gz OK
      /usr/share/man/man1/tempfile.1.gz OK
      /usr/share/man/man8/savelog.8.gz OK
      /usr/share/man/man8/add-shell.8.gz OK
      /usr/share/man/man8/remove-shell.8.gz OK
      /usr/share/man/man8/run-parts.8.gz OK
      /usr/share/man/man8/installkernel.8.gz OK
      /usr/share/man/fr/man1/which.1.gz OK
      /usr/share/man/fr/man1/tempfile.1.gz OK
      /usr/share/man/fr/man8/remove-shell.8.gz OK
      /usr/share/man/fr/man8/run-parts.8.gz OK
      /usr/share/man/fr/man8/savelog.8.gz OK
      /usr/share/man/fr/man8/add-shell.8.gz OK
      /usr/share/man/fr/man8/installkernel.8.gz OK
      /usr/share/doc/debianutils/copyright OK
      /usr/share/doc/debianutils/changelog.gz OK
      /usr/share/doc/debianutils/README.shells.gz OK
      /usr/share/debianutils/shells OK
    c) To relax about /bin/which, as I see "/bin/which OK". d) To whitelist the file /bin/which in /etc/rkhunter.conf as SCRIPTWHITELIST="/bin/which". e) For warnings like the one for the file /usr/bin/lynx, to update the checksum with rkhunter --propupd /usr/bin/lynx.cur. Q2: Do I resolve such warnings the right way?

    Read the article

  • ScreenManagement better practices?! TextBox not focusing

    - by xykudyax
    I saw a question here about using DataTemplates with WPF for screen management. I was curious and gave it a try, and I think the idea is amazing and very clean. However, I'm new to WPF and I have read many times that almost everything should be done in XAML and very little should be in code-behind. My question revolves around the DataTemplate idea: WHERE should the code that calls the transitions be? Where should I define which commands are available in which screens? For example: [ScreenA] Commands: Pressing B - goes to state B; Pressing ESC - exits. [ScreenB] Commands: Pressing A - goes to state A; Pressing SPACE - exits. Where do I define the key event handlers, and where do I call the next screen? I'm doing this as a hobby for learning and "if you are learning, better learn it right" :) Thank you for your time. Yes, the Q/A I was talking about is: What's a good way to handle game screen management in WPF? What I've done so far was to create a Screen class (derived from UserControl) and create some virtual methods: one for initializing stuff (like focusing a given component by default); another for input handling, which I handle using a switch case and by listening to the PreviewKeyDown event from the parent container (MainWindow) - I'm not able to do it another way! Help?!; and finally one that removes the key event handler (when the screen is terminated): Parent.PreviewKeyDown -= OnKeyDown; Am I doing okay? I face a problem: when I add a new screen (UserControl) containing a TextBox, I'm not able to give it autofocus :/ The caret is there but is not blinking, and I have to hit "TAB" before being able to input anything at all :/

    Read the article

  • Transport rules on a live@edu instance to filter SSNs

    - by wbfreema
    Has anyone implemented transport rules on a live@edu (or whatever Microsoft is calling it these days) instance to reject the delivery of messages that include SSNs? I see how to set up transport rules in Exchange 2007, and even how to do it in the Windows Live Admin page, though the rules there don't seem to allow for regexes; can anyone confirm this? If this is the case, has anyone ever connected PowerShell to a live@edu instance to implement the code found at the bottom of this page: http://technet.microsoft.com/en-us/library/aa997187.aspx What I really need is a concise how-to.
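    For reference, the pattern such a rule would need to match is simple. A small Python sketch (purely an illustration of the regex, not Exchange or PowerShell syntax) that flags the common 123-45-6789 form:

      # Illustrative only: the kind of regex a transport rule would need to catch
      # SSNs written in the usual ddd-dd-dddd form. Word boundaries avoid matching
      # longer digit runs such as phone numbers or order IDs.
      import re

      SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

      def contains_ssn(message_body: str) -> bool:
          return SSN_PATTERN.search(message_body) is not None

      print(contains_ssn("My SSN is 123-45-6789"))    # True
      print(contains_ssn("Call me at 555-867-5309"))  # False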

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    Hi, I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all and b) why this seems to happen every time I check and not only in case of an error (like bad blocks)? Here's the output: linux-box# fsck.ext3 -c /dev/sdx1 e2fsck 1.40.2 (12-Jul-2007) Checking for bad blocks (read-only test): done Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Pass 3: Checking directory connectivity Pass 4: Checking reference counts Pass 5: Checking group summary information Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED ***** Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks Thanks, Chris

    Read the article

  • Implementing game rules in a tactical battle board game

    - by Setzer22
    I'm trying to create a game similar to what one would find in typical D&D board game combat. For more examples you could think of games like Advance Wars, Fire Emblem or Disgaea. I should say that I'm using design by component so far, but I can't find a nice way to fit components into the part I want to ask about. I'm struggling right now with the "game rules" logic. That is, the code that displays the menu, allows the player to select units and command them, then tells the unit game objects what to do given the player input. The best way I could think of to handle this was a big state machine, so that everything that can be done in a "turn" is handled by this state machine, and its update code does different things depending on the state. This approach, though, leads to a large amount of code (anything not model-related) going into one big class. Of course I can subdivide this big class into more classes, but it doesn't feel modular and upgradable enough. I'd like to know of better systems to handle this in order to be able to upgrade the game with new rules without having a monstrous if/else chain (or switch/case, for that matter). So, any ideas? I'd also ask that if you recommend a specific design pattern, you provide some kind of example or further explanation rather than sticking to "Yeah you should use MVC and it'll work".
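    Not an answer from the original post, but one common way to avoid the giant state machine class is the State pattern: each turn phase becomes its own object that owns its input handling, so a new rule becomes a new state class instead of another branch in an if/else chain. A minimal, self-contained Python sketch (all names are made up for illustration):

      # Hypothetical sketch: turn phases as state objects instead of one big switch.
      class State:
          """One phase of a turn; each phase owns its own input handling."""
          def handle(self, battle, event):
              raise NotImplementedError

      class SelectUnit(State):
          def handle(self, battle, event):
              unit = battle.units.get(event.get("pos"))
              if unit:
                  battle.selected = unit
                  battle.state = MoveUnit()

      class MoveUnit(State):
          def handle(self, battle, event):
              if event.get("cancel"):
                  battle.state = SelectUnit()
                  return
              dest = event.get("pos")
              if dest is not None:
                  # Move: re-key the selected unit to its destination square.
                  old_pos = next(p for p, u in battle.units.items() if u is battle.selected)
                  del battle.units[old_pos]
                  battle.units[dest] = battle.selected
                  battle.selected = None
                  battle.state = SelectUnit()

      class Battle:
          def __init__(self, units):
              self.units = dict(units)   # {(x, y): unit}
              self.selected = None
              self.state = SelectUnit()

          def handle(self, event):
              self.state.handle(self, event)

      if __name__ == "__main__":
          battle = Battle({(0, 0): "knight"})
          battle.handle({"pos": (0, 0)})   # select the knight
          battle.handle({"pos": (1, 2)})   # move it
          print(battle.units)              # {(1, 2): 'knight'}

    Adding, say, an attack phase then means adding an AttackUnit state (and possibly command objects for undo/replay) rather than growing a single update method.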

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is, how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices with the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normals, however I would like to render a cube. Also, this still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered? (So there is a one-to-one mapping between vertex, normal, color, etc.) Note: I'm using OpenGL ES.
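    Not from the question, but the usual answer is to keep indexed drawing and duplicate each corner once per face it belongs to, so every copy can carry its own normal and texture coordinates; a cube then needs 24 vertices (4 per face x 6 faces) rather than 8, which is still far fewer than 36. A small Python sketch of building such interleaved data (the names and layout are illustrative, not any particular engine's format):

      # Sketch: per-face vertex duplication so each copy gets its own face normal.
      CORNERS = {  # the 8 unique cube corner positions
          "lbf": (-1, -1, +1), "rbf": (+1, -1, +1), "rtf": (+1, +1, +1), "ltf": (-1, +1, +1),
          "lbk": (-1, -1, -1), "rbk": (+1, -1, -1), "rtk": (+1, +1, -1), "ltk": (-1, +1, -1),
      }

      FACES = [  # (corner names in counter-clockwise order, outward face normal)
          (("lbf", "rbf", "rtf", "ltf"), (0, 0, +1)),   # front
          (("rbk", "lbk", "ltk", "rtk"), (0, 0, -1)),   # back
          (("rbf", "rbk", "rtk", "rtf"), (+1, 0, 0)),   # right
          (("lbk", "lbf", "ltf", "ltk"), (-1, 0, 0)),   # left
          (("ltf", "rtf", "rtk", "ltk"), (0, +1, 0)),   # top
          (("lbk", "rbk", "rbf", "lbf"), (0, -1, 0)),   # bottom
      ]

      vertices = []   # interleaved: x, y, z, nx, ny, nz (texcoords would follow the same idea)
      indices = []
      for corners, normal in FACES:
          base = len(vertices) // 6          # vertices emitted so far
          for name in corners:
              vertices.extend(CORNERS[name])
              vertices.extend(normal)
          indices.extend([base, base + 1, base + 2, base, base + 2, base + 3])

      print(len(vertices) // 6, "vertices,", len(indices), "indices")   # 24 vertices, 36 indices

    The memory cost is modest (24 instead of 8 vertices) and the index buffer still removes the remaining duplication within each face.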

    Read the article

  • How to reset display settings in XFCE / Ubuntu 12.04 and also fglrx drivers

    - by Agent24
    I recently upgraded to Ubuntu 12.04, and since I hate Unity I installed the Xubuntu package and am using XFCE instead. Since I have a Radeon HD5770 I also installed the fglrx drivers. This all went fine (aside from the fact that the post-release update fglrx drivers have an error on installation, and Ubuntu thinks they're not installed when they actually are). I configured my display settings (dual monitors, a 17" CRT on VGA and a 17" LCD on DVI) in the amdcccle program and everything was perfect. THEN, 2 days ago, I accidentally clicked on the "Display" settings in the XFCE "Settings" manager. After that, everything got screwed. Now, I normally run the CRT at 1152x864 and the LCD at 1280x1024, with the CRT as my primary monitor (with panel) and the LCD without panels etc., just to display other windows when I want to drag them over there. The problem is now that if I set my CRT to 1152x864, it stays at 1280x1024 virtually and half the stuff falls off the screen. It also puts the LCD at 1280x1024 BUT then overlays the CRT's display on top with different wallpaper in an L shape down the right-hand and bottom edges. In short, nothing makes sense and everything is FUBAR. I tried uninstalling fglrx through Synaptic, and renaming xorg.conf and also the XFCE XML file that has monitor settings, but it still won't make sense. Unity, on the other hand, can currently set everything normally, so the problem appears to be only with XFCE. In any case, I can't even get the fglrx drivers back: when I re-installed them, I can't run amdcccle anymore as it says the driver isn't installed!! Can someone help me reset my XFCE settings so the monitors aren't screwed with some incorrect virtual desktop size, and also so I can get the fglrx drivers back and working? I really don't want to have to format and reinstall and go through all the hassle, but it looks like I may have to :(

    Read the article

  • Should I ditch a creative pet project in favor of one that would demonstrate skills more applicable to an employer?

    - by Hart Simha
    I am currently working on a project on GitHub that I think would be a good demonstration of my initiative, creativity and enthusiasm. It is an educational game I am developing in pygame that enables the user to learn to improve their development productivity by using vim, specifically with Python, though learning to code faster with vim should be transferable to any language. I think this is something that might have a mass appeal and benefit to a lot of people in a measurable way. -However- I am graduating from college in a month (my degree is computer science with a minor in English), with no experience that is relevant to helping me get any kind of job in the field, and a GPA that doesn't tout my merits. I could pursue a career in game development, but it's not necessarily what I'm most interested in, and I see myself applying to startups around the country. To the places I am looking at applying, showing that I have experience with pygame is going to be largely irrelevant, except as a demonstration of my ability to code, period. A lot of skills that ARE more marketable, such as data modeling, GIS, mobile development, JavaScript, the .NET framework, and various web development technologies, are not going to be showcased by this project (on the upside, employers do like to see familiarity with git and Python). I'm wondering if I should sink all my free time in the next couple of months into this project, since I'm motivated and interested in it, and if the value of being able to demonstrate ambition and 'good ideas' (for lack of a better term, and in my own opinion) will compensate for the absence of demonstrating more sought-after skills. I am probably at a point where I should either commit fully to this project now, or put it on the backburner in favor of something else, and I am leaning towards continuing with what I am already working on, because I think it's a great idea, and something achievable to me with enough dedication over the next couple of months. But the most important thing to me is being able to get a job out of college, which I am exceedingly concerned about, as the professional landscape which I am navigating for the first time is a lot more intimidating than I could have anticipated, with almost every job (even short-term contract positions) requiring years of experience which I lack. Oh, and in case anyone is interested, my repository is here: www.github.com/hmsimha/vimagine

    Read the article

  • Have you ever used kon-boot?

    - by Ctrl Alt D-1337
    Has anyone here ever used kon-boot? I guess it may work, judging by the few blog posts about it, but I feel kinda concerned and am interested in hearing experiences from anyone who has used it multiple times with no side effects. I am slightly worried about any direct memory altering it tries to do. I am also worried about whether it does its job fine while hiding the fact that it puts in a low-level trojan, or whether the author has planned to do anything like that in a future release, as it looks like closed source from the site. Also, I don't intend to gain illegal access, but I find these sorts of things very useful for my box of live discs I take everywhere, just in case. OT: Other question that may be of interest to readers here

    Read the article

  • Qualcomm Gobi WWAN modem no longer appears in Network Manager

    - by Andy E
    Not sure what happened here. I installed Cinnamon on my Ubuntu 12.10 environment yesterday, rebooted when finished and everything was working fine. I even used my WWAN modem after my fixed line broadband went down. However, after starting my machine this morning and seeing that my fixed line is still having problems (intermittently), I clicked the network applet and my WWAN device wasn't listed. It's not in the main network manager window either. It is still present on the system, however:
      $ lsusb
      Bus 001 Device 003: ID 05ca:18b0 Ricoh Co., Ltd Sony Vaio Integrated Webcam
      Bus 001 Device 004: ID 05c6:9221 Qualcomm, Inc. Gobi Wireless Modem (QDL mode)
      Bus 002 Device 005: ID 04e8:6865 Samsung Electronics Co., Ltd
      Bus 002 Device 002: ID 0409:005a NEC Corp. HighSpeed Hub
      Bus 003 Device 002: ID 147e:1000 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor
      Bus 008 Device 002: ID 044e:3017 Alps Electric Co., Ltd BCM2046 Bluetooth Device
    Debug output from modem-manager refers to a device that is blacklisted:
      modem-manager[10186]: [1355478137.024491] [mm-manager.c:866] device_added(): (tty/ttyUSB0): port's parent device is blacklisted
      modem-manager[10186]: [1355478137.024607] [mm-manager.c:875] device_added(): (tty/ttyS0): port's parent platform driver is not whitelisted
      modem-manager[10186]: [1355478137.024700] [mm-manager.c:875] device_added(): (tty/ttyS1): port's parent platform driver is not whitelisted
      ...
    I couldn't see anything relevant in the debug output for network-manager, but I've created a paste for it just in case. In /lib/udev/rules.d/77-mm-qdl-device-blacklist.rules, I found the following lines that match the device IDs from the lsusb output:
      # Generic Gobi QDL device
      ATTRS{idVendor}=="05c6", ATTRS{idProduct}=="9221", ENV{ID_MM_DEVICE_IGNORE}="1"
    I tried commenting out the second line and restarting network-manager/modem-manager, but it didn't help. I've no idea where to go from here; does anyone else have any ideas?

    Read the article

  • How atomic is a SELECT INTO?

    - by leo.pasta
    Last week I ran into an interesting situation that prompted me to challenge a long-standing assumption. I always thought that a SELECT INTO was an atomic statement, i.e. it would either complete successfully or the table would not be created. So I was very surprised when, after a "select into" query was chosen as a deadlock victim, the next execution (as the app would handle the deadlock and retry) failed with: Msg 2714, Level 16, State 6, Line 1 There is already an object named '#test' in the database. The only hypothesis we could come up with was that the "create table" part of the statement was committed independently from the actual "insert". We can confirm that by capturing the "Transaction Log" event in Profiler (filtering by SPID0). The result is that when we run: SELECT * INTO #results FROM master.sys.objects we get the following output in Profiler: It is easy to see the two independent transactions. Although this behaviour was a surprise to me, it is very easy to work around if you feel the need (as we did in this case). You can either change it into an independent "CREATE TABLE / INSERT SELECT", or you can enclose the SELECT INTO in an explicit transaction:
      SET XACT_ABORT ON
      BEGIN TRANSACTION
      SELECT * INTO #results FROM master.sys.objects
      COMMIT

    Read the article

  • Roadmap for Thinktecture IdentityServer

    - by Your DisplayName here!
    I got asked today if I could publish a roadmap for thinktecture IdentityServer (idrsv in short). Well – I got a lot of feedback after B1, and one of the biggest points was the data access layer. So I made two changes: I moved the configuration database access code to EF 4.1 code first. That makes it much easier to change the underlying database, so it is now just a matter of changing the connection string to use real SQL Server instead of SQL Compact - important when you plan to scale out. I also included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks btw!) – so I made them optional. If you want to use them, go to web.config and uncomment the new providers. Then there are some other small changes: The relying party registration entries now have added fields for extra data that you want to couple with the RP. One use case could be to give the UI a hint about how the login experience should look per RP. This allows a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string. WS-Federation single sign-out now conforms to the spec. I made certificate-based endpoint identities for SSL endpoints optional. These caused some problems with configuration and versioning of existing clients. I hope I can release the RC in the next few days. If there are no major issues, there will be an RTM very soon!

    Read the article
