Search Results

Search found 1246 results on 50 pages for 'sam cogan'.


  • How should I monitor memory usage/performance in SunOS/Solaris?

    - by exhuma
    Last week we decided to add some SunOS (uname -a = SunOS bbs-sam-belair 5.10 Generic_127128-11 i86pc i386 i86pc) machines into our running munin instance. First off, the machines are pre-configured appliances, so I want to avoid touching the system too much without supervision of the service provider. But adding them to munin was fairly easy by writing a small socket service (if anyone is interested, I put it up on github: https://github.com/munin-monitoring/contrib/tree/master/tools/pypmmn). Yesterday, I implemented/adapted the required plugins for our machines. And here the questions start.

    First, I have not found a way to determine detailed memory usage values. I get the total memory by running prtconf | grep Memory, and the free memory using vmstat. Fiddling together a munin plugin gives me a graph that is pretty much uninformative. Compare this to the default plugin for Linux nodes, which has a lot more detail; most importantly, it shows me how much memory is actually used by applications. So, first question: is it possible to get detailed memory information on SunOS with the default system tools (i.e. not using top)?

    Onto the next puzzle: looking at the graphs, I noticed activity in the "Paging in/out" graphs, even though the memory graph still shows unused memory. Upon further investigation, I found out that df reports that /tmp is mounted on swap. Drilling around on the web, I understood that df will display swap, but in fact it's mounted as a tmpfs. Now I don't know if this explains the swap activity. The default munin plugin for Solaris uses kstat -p -c misc -m cpu_stat to get these values. I find it already strange that this is using the cpu_stat module. So maybe I simply misinterpret the "paging" graphs? Second question: do the paging graphs indicate that parts of the memory are paged to disk? Or is the activity caused by file operations in /tmp?
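
    Since the pypmmn service mentioned above is already Python, one way to prototype a more detailed memory plugin is to parse kstat output directly. The sketch below is an illustration added here, not something from the original post: the unix:0:system_pages statistic names (physmem, freemem, pp_kernel) are assumptions that can vary between Solaris releases, so verify them with kstat -p unix:0:system_pages on the actual appliance first.

        import resource
        import subprocess

        def read_system_pages():
            """Parse `kstat -p unix:0:system_pages` into a {stat: value} dict."""
            out = subprocess.check_output(["kstat", "-p", "unix:0:system_pages"])
            stats = {}
            for line in out.decode().splitlines():
                # Lines look like: unix:0:system_pages:freemem<TAB>12345
                parts = line.split()
                if len(parts) == 2 and parts[1].isdigit():
                    stats[parts[0].rsplit(":", 1)[-1]] = int(parts[1])
            return stats

        def memory_report():
            pages = read_system_pages()
            page_size = resource.getpagesize()
            total = pages.get("physmem", 0) * page_size
            free = pages.get("freemem", 0) * page_size
            kernel = pages.get("pp_kernel", 0) * page_size  # assumed stat name, verify on the box
            return {
                "total": total,
                "free": free,
                "kernel": kernel,
                "other_used": max(total - free - kernel, 0),
            }

        if __name__ == "__main__":
            # munin plugins print "fieldname.value <number>" lines
            for name, value in sorted(memory_report().items()):
                print("%s.value %d" % (name, value))

    Emitting the values in munin's name.value format means the numbers can be graphed next to the existing total/free figures, which at least separates kernel memory from the rest.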

    Read the article

  • Running a service with a user from a different domain not working

    - by EWood
    I've been stuck on this for a while, not sure what permission I'm missing. I've got domain A and domain B, A trusts B, but B does not trust A. I'm trying to run a service in domain A with a user account from domain B and I keep getting Access is Denied. I'm using the FQDN after the username and the password is correct. The user account from domain B is a local administrator on the domain A server, and the account has the "log on locally" and "log on as a service" permissions. Must. Get. This. Working.

    Update: I found something interesting in the logs I must have missed. This ought to get me pointed in the right direction. Event ID: 40961 - LsaSrv : The Security System could not establish a secured connection with the server ldap/{server fqdn/fqdn@fqdn}. No authentication protocol was available. I've found a few fixes for 40961 but nothing has worked so far. I've verified reverse lookup zones. nslookup resolves the correct DC properly. Still workin' at it.

    Update: In response to Evan; I ran " runas /env /user:ftp_user@fqdn "notepad" ", then entered the user's password and notepad came up. It seems to work successfully.

    This issue is now resolved. The problem is visible in the screenshot: Windows tries to use the UPN for the user account if you dig your user out of AD with the Browse button. This fails every time, even with the right user and password. Simply using the SAM format (Domain\User) works. So simple, yet so annoying. Can't believe I missed this. Thanks to everyone who helped.

    Read the article

  • Old scheduled task still being started, but can't find it.

    - by JvO
    System: Windows XP Home Summary: Some scheduled task is still being started by Windows, but I can't find it, nor determine where its configuration has been stored. This is turning into a mystery for me... I set up a Windows XP Home machine to run a task at 7:00 AM, using the Task manager. This was a clean install, no users defined, so you got straight to the desktop after starting the machine. The filesystem uses NTFS. Later on, I needed to introduce users, so I created one (named Sam) with administrator privileges. After this I noticed that the scheduled task failed, most likely due to privilege errors (i.e. can't write to a network drive). So I want to delete the old task, and add it again with the correct user credentials. However.... I can't find the old task!! I know it is still being executed at 7:00 AM, but there's no mention anywhere on the system of this task. I've looked in c:\windows\tasks for .job files, but there's only the "MP Scheduled Scan.job" from Security Essentials. I've searched the whole disk for mention of the batch file that is being run, but can't find it. So why is this old task still running, and more importantly, why can't I find it? Would it have something to do with introducing users on XP?

    Read the article

  • grub2 error : out of disk

    - by Nostradamnit
    Hi everyone, I recently installed ArchBang on a machine with Ubuntu and XP. I ran update-grub from Ubuntu and it found the new install and created an entry. However, when I try to boot it, I get:

        error: out of disk
        error: you need to load kernel first

    I've tried several things, including adding a new entry in 40_custom, but nothing changes. Here are the entries I have.

    Default entries found by update-grub:

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "ArchBang Linux (on /dev/sda4)" {
            insmod part_msdos
            insmod ext2
            set root='(hd0,msdos4)'
            search --no-floppy --fs-uuid --set 75f96b44-3a8f-4727-9959-d669b9244f2a
            linux /boot/vmlinuz26 root=/dev/sda4 rootfstype=ext4 ro xorg=vesa quiet nomodeset swapon
            initrd /boot/kernel26.img
        }
        menuentry "ArchBang Linux Fallback (on /dev/sda4)" {
            insmod part_msdos
            insmod ext2
            set root='(hd0,msdos4)'
            search --no-floppy --fs-uuid --set 75f96b44-3a8f-4727-9959-d669b9244f2a
            linux /boot/vmlinuz26 root=/dev/sda4 rootfstype=ext4 ro xorg=vesa quiet nomodeset swapon
            initrd /boot/kernel26-fallback.img
        }
        ### END /etc/grub.d/30_os-prober ###

    Custom entry in 40_custom, based on various ideas found on the internets:

        menuentry "ArchBang Linux (on /dev/sda4)" {
            insmod part_msdos
            insmod ext2
            set root='(hd0,msdos4)'
            search --no-floppy --fs-uuid --set 75f96b44-3a8f-4727-9959-d669b9244f2a
            linux /boot/vmlinuz26 root=/dev/disk/by-uuid/75f96b44-3a8f-4727-9959-d669b9244f2a rootfstype=ext4 ro xorg=vesa quiet nomodeset swapon
            initrd /boot/kernel26.img
        }

    I think the problem has something to do with sda4 not being mounted at boot-time... Thanks in advance for your help, Sam

    Read the article

  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS2008 to TFS2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS2008 to TFS2010 using the RTM version of TFS? One of our clients is taking the plunge with TFS2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member that starts work when most of the Sydney workers are heading home, as I can do the upgrade without impacting them. The down side is that if you have any blockers then you can be pretty sure that everyone that can deal with your problem is asleep.

    I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one that did the install, so I thought it prudent to make sure that it was OK.

    Verifying Team Foundation Server 2010
    We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors. Figure: Even the SQL Setup looks good. I don't know how Adam did it!

    Backing up the Team Foundation Server 2008 Databases
    As we are moving from one server to another (the recommended method) we will be taking a backup of our TFS2008 databases and restoring them to the SQL Server for the new TFS2010 server. Do not just detach and reattach; this will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you back up the databases. This will stop any inadvertent check-ins or changes to TFS 2008. Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database.

    It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade, as there is always one developer who has not finished and starts screaming. This time it was John Liu that needed 10 more minutes to make his changes and check in, so I always give it 30 minutes and see if anyone screams.

        John Liu [SSW] said: are you doing something to TFS :-O
        MrHinsh [SSW UK][VS ALM MVP] said: I have stopped TFS 2008 as per my emails
        John Liu [SSW] said: haven't finish check in @_@ can we have it for 10mins? :)
        MrHinsh [SSW UK][VS ALM MVP] said: TFS 2008 has been started
        John Liu [SSW] said: I love you!
        - IM conversation at TFS Upgrade +25 minutes

    After John confirmed that he had everything done I turned IIS off again and made a cup of tea. There were no more screams, so the upgrade could continue. Figure: Backup all of the databases for TFS and include the Reporting Services, just in case. Figure: Check that all the backups have been taken.

    Once you have your backups, you need to copy them to your new TFS2010 server and restore them. This is a good way to proceed, as if we have any problems, or just plain run out of time, then you just turn the TFS 2008 server back on and all you have lost is one upgrade day, and not 10 developer days. As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to:

        TFS2008 file count:
        Type   Count
        1      1845
        2      15770
        Areas & Iterations: 139

    You can use this to verify that the upgrade was successful. It should, however, be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process.
    Restore Team Foundation Server 2008 Databases
    Restoring the databases is much more time consuming than just attaching them, as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am doing latest unless I encounter any problems. Figure: Restore each of the databases to either the latest or a specific point in time. Figure: Restore all of the required databases.

    Now that all of your databases are restored, you need to upgrade them to Team Foundation Server 2010.

    Upgrade Team Foundation Server 2008 Databases
    This is probably the easiest part of the process. You need to call a fire-and-forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all of the 6 main TFS 2008 databases are merged into the TfsVersionControl database, upgraded, and then the database is renamed to TFS_[CollectionName]. The rename is only the database and not the physical files, so it is worth going back and renaming the physical file as well. This keeps everything neat and tidy. If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. This GUID is used by the clients and they can get a little confused if there are two servers with the same one.

    To kick off the upgrade you need to open a command prompt, change the path to "C:\Program Files\Microsoft Team Foundation Server 2010\Tools" and run the "import" command in "tfsconfig".

        TfsConfig import /sqlinstance:<Previous TFS Data Tier>
                         /collectionName:<Collection Name>
                         /confirmed

        Imports a TFS 2005 or 2008 data tier as a new project collection.
        Important: This command should only be executed after adequate backups have
        been performed. After you import, you will need to configure portal and
        reporting settings via the administration console.

        EXAMPLES
        --------
        TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed
        TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed

        OPTIONS
        -------
        sqlinstance     The SQL instance of the TFS 2005 or 2008 data tier. The TFS
                        databases at that location will be modified directly and will
                        no longer be usable as previous version databases. Ensure you
                        have back-ups.
        collectionName  The name of the new Team Project Collection.
        confirmed       Confirm that you have backed-up databases before importing.

    This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade, as the total database size was under 700MB. This was unlike the upgrade of SSW's production database, with over 17GB of data, which took a few hours. At the end of the process you should get no errors and no warnings:

        The Upgrade operation on the ApplicationTier feature has completed.
        There were 0 errors and 0 warnings.

    As this is a new server and not a pure upgrade there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration, or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to the same as the new collection.
    This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files. Then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this. Figure: Stop the collection so TFS does not take a wobbly when we detach the database.

    When you try to start the new collection again you will get a conflict with project names and will be required to remove the Test Upgrade collection. This is fine; it just needs to be detached. Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new collection again.

    You will now be able to start the new upgraded collection and you are ready for testing. Do you remember the stats we took off the TFS 2008 server?

        TFS2008 file count:
        Type   Count
        1      1845
        2      15770
        Areas & Iterations: 139

    Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control.

        TFS2010 file count:
        Type   Count
        1      19288
        Areas & Iterations: 139

    Lovely, the number of iterations is the same, and the number of files is bigger. Just what we were looking for.

    Testing the upgraded Team Foundation Server 2010 Project Collection
    Can we connect to the new collection and project? Figure: We can connect to the new collection and project. Figure: Make sure you can connect to the upgraded projects and that you can see all of the files. Figure: Team Web Access is there and working.

    Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case, as I am running on the local box, you need to use http://localhost:8080/tfs which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname], and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection, and in both VS2008 and VS2005 you will need to install the forward compatibility updates:

        Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
        Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

    To make sure that you have everything up to date, make sure that you run SSW Diagnostics and get all green ticks.

    Upgrade Done!
    At this point you can send out a notice to everyone that the upgrade is complete and give them the connection details. You need to remember that at this stage we have the 2008 project upgraded to run under TFS 2010, but it is still running under the same process template that it was running before. You can only "enable" 2010 features in a process template; you can't upgrade it. So what to do? Well, you need to create a new project and migrate the things you want to keep across. Source code is easy, you can move or branch, but Work Items are more difficult as you can't move them between projects. This instance is complicated further as the old project uses the Conchango/EMC Scrum for Team System template and I will need to write a script/application to get the work items across with their attachments intact. That is my next task! Technorati Tags: TFS 2010,TFS 2008,VS ALM

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July in Microsoft's offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will.

    Find a course and register
    Download this syllabus
    Download the Scrum Guide

    What is the Professional Scrum Developer course all about?
    The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010.

    Who should attend this course?
    This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team.

    What should you know by the end of the course?
    Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:
        Form effective teams
        Explore and understand legacy "Brownfield" architecture
        Define quality attributes, acceptance criteria, and "done"
        Create automated builds
        How to handle software hotfixes
        Verify that bugs are identified and eliminated
        Plan releases and sprints
        Estimate product backlog items
        Create and manage a sprint backlog
        Hold an effective sprint review
        Improve your process by using retrospectives
        Use emergent architecture to avoid technical debt
        Use Test Driven Development as a design tool
        Set up and leverage continuous integration
        Use Test Impact Analysis to decrease testing times
        Manage SQL Server development in an Agile way
        Use .NET and T-SQL refactoring effectively
        Build, deploy, and test SQL Server databases
        Create and manage test plans and cases
        Create, run, record, and play back manual tests
        Set up a branching strategy and branch code
        Write more maintainable code
        Identify and eliminate people and process dysfunctions
        Inspect and improve your team's software development process

    What does the week look like?
This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it. The Sprints Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule: Component Description Minutes Instruction Presentation and demonstration of new and relevant tools & practices 60 Sprint planning meeting Product owner presents backlog; each team commits to delivering functionality 10 Sprint planning meeting Each team determines how to build the functionality 10 The Sprint The team self-organizes and self-manages to complete their tasks 120 Sprint Review meeting Each team will present their increment of functionality to the other teams = 30 Sprint Retrospective A group retrospective meeting will be held to inspect and adapt 10 Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support. Module 1: INTRODUCTION This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day by day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Trainer and student introductions Professional Scrum Developer program Agenda Logistics Team formation Retrospective Module 2: SCRUMDAMENTALS This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Scrum overview Scrum roles Scrum timeboxes (ceremonies) Scrum artifacts Simulation Retrospective It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course. MODULE 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010 This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development. Mapping Scrum to Visual Studio 2010 User Story work items Task work items Bug work items Demonstration Simulation Retrospective Module 4: THE CASE STUDY In this module the team is introduced to their problem domain for the week. 
A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what that will take during the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Introduction to the case study Download the source code, build, and explore the application Define the quality attributes for the project Define “done” How to file effective bugs in Visual Studio 2010 Retrospective Module 5: HOTFIX This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. How to use Architecture Explorer to visualize and explore Create a unit test to validate the existence of a bug Find and fix the bug Validate and close the bug Retrospective Module 6: PLANNING This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Release vs. Sprint planning Release planning and the Product Backlog Product Backlog prioritization Acceptance criteria and tests Sprint planning and the Sprint Backlog Creating and linking Sprint tasks Retrospective At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done. Module 7: EMERGENT ARCHITECTURE This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Architecture and Scrum Emergent architecture Principles, patterns, and practices Visual Studio 2010 modeling tools UML and layer diagrams SPRINT 1 Retrospective Module 8: TEST DRIVEN DEVELOPMENT This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should setup Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Continuous integration Team Foundation Build Test Driven Development (TDD) Refactoring Test Impact Analysis SPRINT 2 Retrospective Module 9: AGILE DATABASE DEVELOPMENT This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle. 
        Agile database development
        Visual Studio database projects
        Importing schema and scripts
        Building and deploying
        Generating data
        Unit testing
        SPRINT 3
        Retrospective

    Module 10: SHIP IT
    Teams need to know that just because they like the functionality doesn't mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline.
        Acceptance criteria
        Testing in Visual Studio 2010
        Microsoft Test Manager
        Writing and running manual tests
        Branching
        SPRINT 4
        Retrospective

    Module 11: OVERCOMING DYSFUNCTION
    This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class.
        Scrum-butts and flaccid Scrum
        Best practices working as a team
        Team challenges
        ScrumMaster challenges
        Product Owner challenges
        Stakeholder challenges
        Course Retrospective

    What will be expected of you and your team?
    This is a unique course in that it's technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to:
        Pay attention to all lectures and demonstrations
        Participate in team and group discussions
        Work collaboratively with other team members
        Obey the timebox for each activity
        Commit to work and do your best to deliver

    All teams should have these skills:
        Understanding of Scrum
        Familiarity with Visual Studio 2010
        C#, .NET 4.0 & ASP.NET 4.0 experience*
        SQL Server 2008 development experience
        Software testing experience
    * Check with the instructor ahead of time for the exact technologies

    Self-organising teams
    Another unique attribute of this course is that it's a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum!

    Who should NOT take this course?
    Because of the nature of this course, as explained above, certain types of people should probably not attend this course:
        Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course
        Students who are unwilling to work within a timebox
        Students who are unwilling to work collaboratively on a team
        Students who don't have any skill in any of the software development disciplines
        Students who are unable to commit fully to their team – not only will this diminish the student's learning experience, but it will also impact their team's learning experience

    Find a course and register
    Download this syllabus
    Download the Scrum Guide

    Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • Silverlight Cream for February 21, 2011 -- #1049

    - by Dave Campbell
    In this Issue: Rob Eisenberg(-2-), Gill Cleeren, Colin Eberhardt, Alex van Beek, Ishai Hachlili, Ollie Riches, Kevin Dockx, WindowsPhoneGeek(-2-), Jesse Liberty(-2-), and John Papa. Above the Fold: Silverlight: "Silverlight 4: Creating useful base classes for your views and viewmodels with PRISM 4" Alex van Beek WP7: "Google Sky on Windows Phone 7" Colin Eberhardt Shoutouts: My friends at SilverlightShow have their top 5 for last week posted: SilverlightShow for Feb 14 - 20, 2011 From SilverlightCream.com: Rob Eisenberg MVVMs Us with Caliburn.Micro! Rob Eisenberg chats with Carl and Richard on .NET Rocks episode 638 about Caliburn.Micro which takes Convention-over-Configuration further, utilizing naming conventions to handle a large number of data binding, validation and other action-based characteristics in your app. Two Caliburn Releases in One Day! Rob Eisenberg also announced that release candidates for both Caliburn 2.0 and Caliburn.Micro 1.0 are now available. Check out the docs and get the bits. Getting ready for Microsoft Silverlight Exam 70-506 (Part 6) Gill Cleeren has Part 6 of his series on getting ready for the Silverlight Exam up at SilverlightShow.... this time out, Gill is discussing app startup, localization, and using resource dictionaries, just to name a few things. Google Sky on Windows Phone 7 Colin Eberhardt has a very cool WP7 app described where he's using Google Sky as the tile source for Bing Maps, and then has a list of 110 Messier Objects.. interesting astronomical objects that you can look at... all with source! Silverlight 4: Creating useful base classes for your views and viewmodels with PRISM 4 Alex van Beek has some Prism4/Unity MVVM goodness up with this discussion of a login module using View and ViewModel base classes. Windows Phone 7 and WCF REST – Authentication Solutions Ishai Hachlili sent me this link to his post about WCF REST web service and authentication for WP7, and he offers up 2 solutions... from the looks of this, I'm also putting his blog on my watch list WP7Contrib: Isolated Storage Cache Provider Ollie Riches has a complete explanation and code example of using the IsolatedStorageCacheProvider in their WP7Contrib library. Using a ChannelFactory in Silverlight, part two: binary cows & new-born calves Kevin Dockx follows-up his post on Channel Factories with this part 2, expanding the knowledge-base into usin parameters and custom binding with binary encoding, both from reader suggestions. All about UriMapping in WP7 WindowsPhoneGeek has a post up about URI mappings in WP7 ... what it is, how to enable it in code behind or XAML, then using it either with a hyperlink button or via the NavigationService class... all with code. Passing WP7 Memory Consumption requirements with the Coding4Fun MemoryCounter tool WindowsPhoneGeek's latest is a tutorial on the use of the Memory Counter control from the Coding4Fun toolkit and WP7 Memory consumption. Getting Started With Linq Jesse Liberty gets into LINQ in his Episode 33 of his WP7 'From Scratch' series... looks like a good LINQ starting point, and he's going to be doing a series on it. Linq with Objects In his second post on LINQ, Jesse Liberty is looking at creating a Linq query against a collection of objects... always good stuff, Jesse! Silverlight TV Silverlight TV 62: The Silverlight 5 Triad Unplugged John Papa is joined by Sam George, Larry Olson, and Vijay Devetha (the Silverlight Triad) on this Silverlight TV episode 62 to discuss how the team works together, and hey... they're hiring! 
Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • #altnetseattle &ndash; REST Services

    - by GeekAgilistMercenary
    Below are the notes I made in the REST Architecture Session I helped kick off with Andrew. RSS, ATOM, and such needed for better discovery.  i.e. there still is a need for some type of discovery. Difficult is modeling behaviors in a RESTful way.  ??  Invoking some type of state against an object.  For instance in the case of a POST vs. a GET.  The GET is easy, comes back as is, but what about a POST, which often changes some state or something. Challenge is doing multiple workflows with stateful workflows.  How does batch work.  Maybe model the batch as a resource. Frameworks aren’t particularly part of REST, REST is REST.  But point argued that REST is modeled, or part of modeling a state machine of some sort… ? Nothing is 100% reliable w/ REST – comparisons drawn with TCP/IP.  Sufficient probability is made however for the communications, but the idea of a possible failure has to be built into the usage model of REST. Ruby on Rails / RESTfully, and others used.  What were their issues, what do they do.  ATOM feeds, object serialized, using LINQ to XML w/ this.  No state machine libraries. Idempotent areas around REST and single change POST changes are inherent in the architecture. REST – one of the constrained languages is for the interaction w/ the system.  Limiting what can be done on the resources.  - disagreement, there is no agreed upon REST verbs. Sam Ruby – RESTful services.  Expanded the verbs within REST/HTTP pushes you off the web.  Of the existing verbs POST leaves the most up for debate. Robert Reem used Factory to deal with the POST to handle the new state.  The POST identifying what it just did by the return. Different states are put into POST, so that new prospective verbs, without creating verbs for REST/HTTP can be used to advantage without breaking universal clients. Biggest issue with REST services is their lack of state, yet it is also one of their biggest strengths.  What happens is that the client takes up the often onerous task of handling all state, state machines, and other extraneous resource management.  All the GETs, POSTs, DELETEs, INSERTs get all pushed into abstraction.  My 2 cents is that this in a way ends up pushing a huge proprietary burden onto the REST services often removing the point of REST to be simple and to the point. WADL does provide discovery and some state control (sort of?) Statement made, "WADL" isn't needed.  The JSON, XML, or other client side returned data handles this. I then applied the law of 2 feet rule for myself and headed to finish up these notes, post to the Wiki, and figure out what I was going to do next.  For the original Wiki entry check it out here. I will be adding more to this post with a subsequent post.  Please do feel free to post your thoughts and ideas about this, as I am sure everyone in the session will have more for elaboration.
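
    To make the "model the batch as a resource" idea above concrete, here is a small hedged sketch (not from the session; the endpoint names and payload shape are made up) using Python and the requests library: the client POSTs a batch of operations, the server answers with the location of a new batch resource, and the client then reads that resource with plain GETs. The POST creates state once, while every follow-up read stays idempotent.

        import requests

        BASE = "https://api.example.com"  # hypothetical service

        # POST a batch of operations as a single new resource.
        operations = [
            {"op": "update", "path": "/orders/17", "value": {"status": "shipped"}},
            {"op": "delete", "path": "/orders/18"},
        ]
        resp = requests.post(BASE + "/batches", json={"operations": operations})
        resp.raise_for_status()

        # The server identifies what it just created via the Location header.
        batch_url = resp.headers["Location"]

        # Reading the batch's state is a plain GET, safe to repeat.
        status = requests.get(batch_url).json()
        print(status.get("state"), status.get("completed"), "/", len(operations))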

    Read the article

  • Welcome To The Nashorn Blog

    - by jlaskey
    Welcome to all.  Time to break the ice and instantiate The Nashorn Blog.  I hope to contribute routinely, but we are very busy, at this point, preparing for the next development milestone and, of course, getting ready for open source. So, if there are long gaps between postings please forgive. We're just coming back from JavaOne and are stoked by the positive response to all the Nashorn sessions. It was great for the team to have the front and centre slide from Georges Saab early in the keynote. It seems we have support coming from all directions. Most of the session videos are posted. Check out the links. Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM. Unfortunately, Marcus - the code generation juggernaut,  got saddled with the first session of the first day. Still, he had a decent turnout. The talk focused on issues relating to optimizations we did to get good performance from the JVM. Much yet to be done but looking good. Nashorn: JavaScript on the JVM. This was the main talk about Nashorn. I delivered the little bit of this and a little bit of that session with an overview, a follow up on the open source announcement, a run through a few of the Nashorn features and some demos. The room was SRO, about 250±. High points: Sam Pullara, from Twitter, came forward to describe how painless it was to get Mustache.js up and running (20x over Rhino), and,  John Ceccarelli, from NetBeans came forward to describe how Nashorn has become an integral part of Netbeans. A healthy Q & A at the end was very encouraging. Meet the Nashorn JavaScript Team. Michel, Attila, Marcus and myself hosted a Q & A. There was only a handful of people in the room (we assume it was because of a conflicting session ;-) .) Most of the questions centred around Node.jar, which leads me to believe, Nashorn + Node.jar is what has the most interest. Akhil, Mr. Node.jar, sitting in the audience, fielded the Node.jar questions. Nashorn, Node, and Java Persistence. Doug Clarke, Akhil and myself, discussed the title topics, followed by a lengthy Q & A (security had to hustle us out.) 80 or so in the room. Lots of questions about Node.jar. It was great to see Doug's use of Nashorn + JPA. Nashorn in action, with such elegance and grace. Putting the Metaobject Protocol to Work: Nashorn’s Java Bindings. Attila discussed how he applied Dynalink to Nashorn. Good turn out for this session as well. I have a feeling that once people discover and embrace this hidden gem, great things will happen for all languages running on the JVM. Finally, there were quite a few JavaOne sessions that focused on non-Java languages and their impact on the JVM. I've always believed that one's tool belt should carry a variety of programming languages, not just for domain/task applicability, but also to enhance your thinking and approaches to problem solving. For the most part, future blog entries will focus on 'how to' in Nashorn, but if you have any suggestions for topics you want discussed, please drop a line.  Cheers. 

    Read the article

  • Python sorting list of dictionaries by multiple keys

    - by simi
    I have a list of dicts:

        b = [{u'TOT_PTS_Misc': u'Utley, Alex', u'Total_Points': 96.0},
             {u'TOT_PTS_Misc': u'Russo, Brandon', u'Total_Points': 96.0},
             {u'TOT_PTS_Misc': u'Chappell, Justin', u'Total_Points': 96.0},
             {u'TOT_PTS_Misc': u'Foster, Toney', u'Total_Points': 80.0},
             {u'TOT_PTS_Misc': u'Lawson, Roman', u'Total_Points': 80.0},
             {u'TOT_PTS_Misc': u'Lempke, Sam', u'Total_Points': 80.0},
             {u'TOT_PTS_Misc': u'Gnezda, Alex', u'Total_Points': 78.0},
             {u'TOT_PTS_Misc': u'Kirks, Damien', u'Total_Points': 78.0},
             {u'TOT_PTS_Misc': u'Worden, Tom', u'Total_Points': 78.0},
             {u'TOT_PTS_Misc': u'Korecz, Mike', u'Total_Points': 78.0},
             {u'TOT_PTS_Misc': u'Swartz, Brian', u'Total_Points': 66.0},
             {u'TOT_PTS_Misc': u'Burgess, Randy', u'Total_Points': 66.0},
             {u'TOT_PTS_Misc': u'Smugala, Ryan', u'Total_Points': 66.0},
             {u'TOT_PTS_Misc': u'Harmon, Gary', u'Total_Points': 66.0},
             {u'TOT_PTS_Misc': u'Blasinsky, Scott', u'Total_Points': 60.0},
             {u'TOT_PTS_Misc': u'Carter III, Laymon', u'Total_Points': 60.0},
             {u'TOT_PTS_Misc': u'Coleman, Johnathan', u'Total_Points': 60.0},
             {u'TOT_PTS_Misc': u'Venditti, Nick', u'Total_Points': 60.0},
             {u'TOT_PTS_Misc': u'Blackwell, Devon', u'Total_Points': 60.0},
             {u'TOT_PTS_Misc': u'Kovach, Alex', u'Total_Points': 60.0},
             {u'TOT_PTS_Misc': u'Bolden, Antonio', u'Total_Points': 60.0},
             {u'TOT_PTS_Misc': u'Smith, Ryan', u'Total_Points': 60.0}]

    and I need to use a multi-key sort reversed by Total_Points, then not reversed by TOT_PTS_Misc. This can be done at the command prompt like so:

        a = sorted(b, key=lambda d: (-d['Total_Points'], d['TOT_PTS_Misc']))

    But I have to run this through a function, where I pass in the list and the sort keys. For example, def multikeysort(dict_list, sortkeys). How can the lambda line be used to sort the list for an arbitrary number of keys that are passed in to the multikeysort function, taking into consideration that the sortkeys may have any number of keys, and that those that need reversed sorts will be identified with a '-' before them?
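
    One common way to handle this (a sketch added here, not part of the original question) is to skip the tuple-building lambda entirely and rely on Python's stable sort, applying the keys from least to most significant. Negating values only works for numbers, whereas this approach also handles string keys like TOT_PTS_Misc; the '-' prefix convention is the one described above.

        def multikeysort(dict_list, sortkeys):
            """Sort a list of dicts by several keys; a leading '-' means descending."""
            result = list(dict_list)  # work on a copy
            # Apply the least significant key first; stable sorting keeps its order
            # for ties when the more significant keys are applied afterwards.
            for key in reversed(sortkeys):
                reverse = key.startswith('-')
                field = key.lstrip('-')
                result.sort(key=lambda d: d[field], reverse=reverse)
            return result

        a = multikeysort(b, ['-Total_Points', 'TOT_PTS_Misc'])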

    Read the article

  • YUI DataTable - Howto have just one paginator?

    - by Rollo Tomazzi
    Hello, I'm using the YUI DataTable in a Grails 1.1 project using the Grails UI plugin 1.0.2 (YUI being 2.6.1). By default, the DataTable displays 2 paginators: one above and another one below the table. Looking up the YUI API documentation, I could see that I can pass an array of YUI containers as a config parameter, but what are the names of these containers? I've tried looking at the HTML of the page using Firebug. The IDs of the divs containing the paginators are yui-dt0-paginator0 (above) and yui-dt0-paginator1 (below). If I use them to configure the containers for the navigator, then the navigator is just not displayed at all. Here's the relevant extract of the GSP page containing the DataTable element:

        <div class="body">
          <h1>This is the List of Control Accounts</h1>
          <g:if test="${flash.message}">
            <div class="message">${flash.message}</div>
          </g:if>
          <div class="yui-skin-sam">
            <gui:dataTable controller="controlAccount" action="enhancedListDataTableJSON"
              columnDefs="[
                [key:'id', label:'ID'],
                [key:'col1', label:'Col 1', sortable: true, resizeable: true],
                [key:'col2', label:'Col 2', sortable: true, resizeable: true]
              ]"
              sortedBy="col1"
              rowsPerPage="20"
              paginatorConfig="[
                template:'{PreviousPageLink} {PageLinks} {NextPageLink} {CurrentPageReport}',
                pageReportTemplate:'{totalRecords} total accounts',
                alwaysVisible:true,
                containers:'yui-dt0-paginator1'
              ]"
              rowExpansion="true"
            />
          </div>
        </div>

    Any help? Thanks! Rollo

    Read the article

  • Does my API design violate RESTful principles?

    - by peta
    Hello everybody, I'm currently designing (or trying to design) a RESTful API for a social network, but I'm not sure if my current approach still accords with RESTful principles. I'd be glad if some brighter heads could give me some tips. Suppose the following URI represents the name field of a user account:

        people/{UserID}/profile/fields/name

    But there are almost a hundred possible fields, so I want the client to create its own field views or use predefined ones. Let's suppose that the following URI represents a predefined field view that includes the fields "name", "age", "gender":

        utils/views/field-views/myFieldView

    And because field views are a kind of higher logic, I don't want to mix support for field views into the "people/{UserID}/profile/fields" resource. Instead I want to do the following:

        utils/views/field-views/myFieldView/{UserID}

    Though Leonard Richardson & Sam Ruby state in their book "RESTful Web Services" that a RESTful design is somehow like an "extreme object oriented" approach, I think that my approach is object oriented and therefore accords with RESTful principles. Or am I wrong? If not: are such "object oriented" approaches generally encouraged when used with care and in order to avoid query-based REST-RPC hybrids? Thanks for your feedback in advance, peta

    Read the article

  • How do you jump to a particular row in a DataGridView by typing (a la Windows Explorer details view)

    - by russcollier
    I have a .NET Winforms app in C# with a DataGridView that's read-only and populated with some number of rows. I'd like to have functionality similar to Windows Explorer's (and many other applications) details view for example. I'd like the DataGridView to behave such that when it has focus if you start typing, the current row selection will jump to the row where the (string) value of cell 0 (i.e. the first column in the row) starts with the characters you typed in. For example, if I have a DataGridView with 1 column and the following rows: Bob Jane Jason John Leroy Sam If the DataGridView has focus and I hit the 'b' key on my keyboard, the selected row is now "Bob". If I quickly type in the keys 'ja', the selected row is Jane. If I quickly type in the letters 'jas', the selected row is Jane. If I hit the 'z' key, nothing is selected (since nothing starts with Z). Likewise if Jane is currently selected and I keep typing the letter 'j', the selection will cycle through to Jason, then John, then back to Jane, each time I hit the 'j' key. I've been doing some Googling (and "stackoverflowing" :-)) for awhile and cannot find any examples of this type of functionality. I have a rough idea in my head to do this via some sort of short lived timer thread, collecting the keystrokes on KeyPress events for the DataGridView, and selecting rows based on those collected keystrokes matching a Cells[0].Value.StartsWith() type of condition. But it seems like there has to be an easier way that I'm just not seeing. Any ideas would be much appreciated. Thanks!

    Read the article

  • How do you concat multiple rows into one column in SQL Server?

    - by Jason
    I've searched high and low for the answer to this, but I can't figure it out. I'm relatively new to SQL Server and don't quite have the syntax down yet. I have this data structure (simplified):

        Table "Users":            Table "Tags":
        UserID  UserName          TagID  UserID  PhotoID
        1       Bob               1      1       1
        2       Bill              2      2       1
        3       Jane              3      3       1
        4       Sam               4      2       2

        Table "Photos":           Table "Albums":
        PhotoID  UserID  AlbumID  AlbumID  UserID
        1        1       1        1        1
        2        1       1        2        3
        3        1       1        3        2
        4        3       2
        5        3       2

    I'm looking for a way to get all the photo info (easy) plus all the tags for that photo concatenated like CONCAT(username, ', ') AS Tags, of course with the last comma removed. I'm having a bear of a time trying to do this. I've tried the method in this article but I get an error when I try to run the query saying that I can't use DECLARE statements... do you guys have any idea how this can be done? I'm using VS08 and whatever DB is installed in it (I normally use MySQL so I don't know what flavor of DB this really is... it's an .mdf file?)

    Read the article

  • mysql prevent displaying a row ONE which has reference in another row TWO but no reference in row THREE

    - by Jayapal Chandran
    I have a table like the following:

        id | name     | pid
        1  | sam      | NULL
        2  | sams ref | 1
        3  | pam      | NULL

    The first row gets inserted first and will have pid as NULL. Then I insert a row which is related to the first row, and then I insert a row which is new and which may be referred to by another row in future. Now I want only the third row to be displayed, and not the first and second rows, as the second row contains a reference to the first row. So if any row has a reference to another row, then both of those rows should not be displayed. Only rows which do not have any reference should be displayed. Besides, is this a good practice? Please advise on this.

    Edited: When I updated it on the server, the query always gives an empty result. Here is what I have, and this one. When pid is NULL then that row should appear, but when another entry in the same table has that row's id (or any other row's id) as its pid, then both of those rows should not appear. So if any pid has been referred to, then both rows should not appear. Here only one row will refer to another row and not more than that. On my localhost I have MySQL version 5.0.1 or something like that, but when I installed XAMPP on another system it had 5.5, and on the live server it was 5.3. So in versions around 5.0 the query returns rows, but in higher versions it returns empty rows. So now, in this case, how do I make the query?

    Read the article

  • Not getting the right expected output for my Mysql Query?

    - by user1878107
    I've got 4 tables as shown below:

        doctors
        id  name
        1   Mathew
        2   Praveen
        3   Rosie
        4   Arjun
        5   Denis

        doctors_appointments
        id  doctors_id  patient_name  contact     date                 status
        1   5           Nidhin        9876543210  2012-12-10 15:39:41  Registered
        2   5           Sunny         9876543210  2012-12-18 15:39:48  Registered
        3   5           Mani          9876543210  2012-12-12 15:39:57  Registered
        4   2           John          9876543210  2012-12-24 15:40:09  Registered
        5   4           Raj           9876543210  2012-12-05 15:41:57  Registered
        6   3           Samuel        9876543210  2012-12-14 15:41:33  Registered
        7   2           Louis         9876543210  2012-12-24 15:40:23  Registered
        8   1           Federick      9876543210  2012-12-28 15:41:05  Registered
        9   2           Sam           9876543210  2012-12-12 15:40:38  Registered
        10  4           Sita          9876543210  2012-12-12 15:41:00  Registered

        doctors_dutyplan
        id  doctor_id  weeks      time            no_of_patients
        1   1          3,6,7      9:00am-1:00pm   10
        2   2          3,4,5      1:00pm-4:00pm   7
        3   3          3,6,7      10:00am-2:00pm  10
        4   4          3,4,5,6    8:30am-12:30pm  12
        5   5          3,4,5,6,7  9:00am-4:00pm   30

        emp_leave
        id  empid  leavedate
        1   2      2012-12-05 14:42:36
        2   2      2012-12-03 14:42:59
        3   3      2012-12-03 14:43:06
        4   3      2012-12-06 14:43:14
        5   5      2012-12-04 14:43:24

    My task is to find all the days in a month on which the doctor is available, excluding the leave dates. The query I wrote is given below:

        SELECT DATE_ADD( '2012-12-01', INTERVAL ROW DAY ) AS Date, ROW +1 AS DayOfMonth
        FROM (
            SELECT @row := @row +1 AS ROW
            FROM (
                SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 3 UNION ALL SELECT 4
                UNION ALL SELECT 5 UNION ALL SELECT 6
            ) t1, (
                SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 3 UNION ALL SELECT 4
                UNION ALL SELECT 5 UNION ALL SELECT 6
            ) t2, (
                SELECT @row := -1
            ) t3
            LIMIT 31
        ) b
        WHERE DATE_ADD( '2012-12-01', INTERVAL ROW DAY ) BETWEEN '2012-12-01' AND '2012-12-31'
          AND DAYOFWEEK( DATE_ADD( '2012-12-01', INTERVAL ROW DAY ) ) = 2
          AND DATE_ADD( '2012-12-01', INTERVAL ROW DAY ) NOT IN (
              SELECT DATE_FORMAT( l.leavedate, '%Y-%m-%d' ) AS date
              FROM doctors_dutyplan d
              LEFT JOIN emp_leave AS l ON d.doctor_id = l.empid
              WHERE doctor_id = 2
          )

    This works fine for all doctors who took any leave on a particular day in the month (here in the example it is December 2012), and the result is shown below:

        Date        DayOfMonth
        2012-12-10  10
        2012-12-17  17
        2012-12-24  24
        2012-12-31  31

    But on the other hand, for the doctors who didn't take any leave, my query returns an empty result, for example for the doctor Mathew whose id is 1. Can anyone please suggest a solution for this problem? Thanks in advance.

    Read the article

  • How can I create a horizontal table in a single foreach loop in MVC?

    - by GenericTypeTea
    Is there any way, in ASP.NET MVC, to condense the following code to a single foreach loop?

        <table class="table">
          <tr>
            <td>Name</td>
            <% foreach (var item in Model) { %>
              <td><%= item.Name %></td>
            <% } %>
          </tr>
          <tr>
            <td>Item</td>
            <% foreach (var item in Model) { %>
              <td><%= item.Company %></td>
            <% } %>
          </tr>
        </table>

    Where the model is a simple object:

        public class SomeObject
        {
            public virtual string Name { get; set; }
            public virtual string Company { get; set; }
        }

    This would output a table as follows:

        Name    | Bob     | Sam     | Bill | Steve |
        Company | Builder | Fireman | MS   | Apple |

    I know I could probably use an extension method to write out each row, but is it possible to build all rows using a single iteration over the model? This is a follow-on from this question, as I'm unhappy with my accepted answer and cannot believe I've provided the best solution.

    Read the article

  • grailsui plugin for accordion not working

    - by inquirer
    Hi! I have been trying to figure out what's wrong with grailsui accordion. I have checked out the sample source files (http://guidemo.googlecode.com/svn/trunk/) from different ui tags, like menu, datatable and etc. they are working fine but the accordion is not working properly. All i can see is all of the divs shown. the panels are not moving since they are already shown. I've also followed the screencast for it. I've used the yui-skin-sam class name and I've downloaded the grails plugins.grails-ui=1.2-SNAPSHOT. I am not sure what's wrong.I used the rich-ui plugin and it is working great. But there are still features that are not in rich-ui compared to the possible configurations for the grails-ui tags. Anyway. if you have any suggestions to what js library I may use for a grails application or if you have a working grails-ui accordion sample, I would really appreciate it. Thanks in advance. Hope you can help me since I am still new to Grails.

    Read the article

  • SQL to CodeIgniter Array Missing Data Issue

    - by SamD
    $query = $this->db->query("
        SELECT t1.numberofbets, t1.profit, t2.seven_profit, t3.28profit, user.user_id,
               username, password, email, balance, user.date_added, activation_code, activated
        FROM user
        LEFT JOIN (SELECT user_id, SUM(amount_won) AS profit, count(tip_id) AS numberofbets
                   FROM tip GROUP BY user_id) as t1 ON user.user_id = t1.user_id
        LEFT JOIN (SELECT user_id, SUM(amount_won) AS seven_profit
                   FROM tip WHERE date_settled > '$seven_daystime' GROUP BY user_id) as t2 ON user.user_id = t2.user_id
        LEFT JOIN (SELECT user_id, SUM(amount_won) AS 28profit
                   FROM tip WHERE date_settled > '$twoeight_daystime' GROUP BY user_id) as t3 ON user.user_id = t3.user_id
        where activated = 1
        GROUP BY user.user_id
        ORDER BY user.date_added DESC");

    return $query->result_array();

    The query works fine when I run it in phpMyAdmin and returns complete results (in the image attached). However, when printing the array in CodeIgniter, it has no value for one field, seven_profit, even though it is there when the SQL query is run in phpMyAdmin; just the discrepancy in this one field, from SQL to PHP array... I just can't see why, when printing the array, that one field, which should have a value of 26, contains nothing. Any ideas? I changed the field name from starting with a number in an attempt to fix it, but no difference. I know this is complex and looks horrible; any help, or just people coming across something similar, would be great to know about, thanks. Sam

    Read the article

  • Can I use Microsoft ajax tab controls SKIN only?

    - by Anil Namde
    I like YUI's Tab; to use its look and feel for our own tab-like implementation, we just have to include the YUI tabs CSS, use its markup and CSS classes, and we're done. Example below: <link rel="stylesheet" type="text/css" href="http://yui.yahooapis.com/2.8.0r4/build/tabview/assets/skins/sam/tabview.css" /> <ul class="yui-nav"> <li><a href="#tab1"><em>Tab One Label</em></a></li> <li class="selected"><a href="#tab2"><em>Tab Two Label</em></a></li> <li><a href="#tab3"><em>Tab Three Label</em></a></li> </ul> Now I have a site where Microsoft's tab controls are used, and there are some which are old, simple tabs. I would like to change these old tabs to the Microsoft tab structure just by using CSS if possible, as we have done above for YUI. Is it possible to do that with only a CSS file? How can it be done for Microsoft's tabs?

    Read the article

  • Releasing Autoreleasepool crashes on iOS 4.0 (and only on 4.0)

    - by samsam
    Hi there. I'm wondering what could cause this. I have several methods in my code that I call using performSelectorInBackground. Within each of these methods I have an NSAutoreleasePool that is alloc'd/initialized at the beginning and released at the end of the method. This works perfectly on iOS 3.1.3 / 3.2 / 4.2 / 4.2.1, but it fatally crashes on iOS 4.0 with an EXC_BAD_ACCESS exception that happens after calling [myPool release]. After I noticed this strange behaviour I was thinking about rewriting portions of my code to make my app "less parallel" in case the client OS is 4.0. After I did that, the next point where the app crashed was within the ReachabilityCallback method from Apple's Reachability "framework". Well, now I'm not quite sure what to do. The things I do within my threaded methods are pretty simple XML parsing (no Cocoa calls or stuff that would affect the UI). After each method finishes, it posts a notification which the coordinating thread listens to, and once all the parallelized methods have finished, the coordinating thread calls view controllers etc... I have absolutely no clue what could cause this weird behaviour, especially because Apple's code fails as well. Any help is greatly appreciated! Thanks, sam

    Read the article

  • LINQ query to find if items in a list are contained in another list

    - by cjohns
    I have the following code: List<string> test1 = new List<string> { "@bob.com", "@tom.com" }; List<string> test2 = new List<string> { "[email protected]", "[email protected]" }; I need to remove anyone in test2 that has @bob.com or @tom.com. What I have tried is this: bool bContained1 = test1.Contains(test2); bool bContained2 = test2.Contains(test1); bContained1 is false but bContained2 is true. I would prefer not to loop through each list but instead use a LINQ query to retrieve the data. bContained1 is the same condition as the LINQ query that I have created below: List<string> test3 = test1.Where(w => !test2.Contains(w)).ToList(); The query above works on an exact match but not on partial matches. I have looked at other queries but I can't find a close comparison to this with LINQ. Any ideas or anywhere you can point me to would be a great help.
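
    A sketch of the partial-match version, assuming the entries in test1 are always domain suffixes and that test2 holds full addresses at those domains (the addresses in the post above appear obfuscated by the indexer):

        // Keep only the addresses whose domain suffix does not appear in test1.
        List<string> test3 = test2
            .Where(address => !test1.Any(domain =>
                address.EndsWith(domain, StringComparison.OrdinalIgnoreCase)))
            .ToList();

    Swapping EndsWith for Contains would also match a suffix that appears in the middle of an address, which is usually not what you want for domain filtering.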

    Read the article

  • Can redirection of screen output to a file change the result of C++ code?

    - by Biga
    I am having this very weird behaviour with some C++ code: it gives me different results when running with and without redirecting the screen output to a file (reproducible in Cygwin and Linux). I mean, if I take the same executable and run it like ./run, or run it like ./run >out.log, I get different results! I use std::cout to output to the screen, all lines ending with endl; I use ifstream for the input file; I use ofstream for output, all lines ending with endl. I am using g++ 4. Any idea what is going on? UPDATE: I have hard-coded the input data, so ifstream is not used, and the problem persists. UPDATE 2: This is getting interesting. I have probed three variables that are computed initially, and this is what I get with and without redirecting the output: redirected to file: 0 -0.02 0 direct to screen: 0 -0.02 1.04083e-17 So there's a round-off difference in the code's variables with and without redirecting the output! Now, why would redirecting interfere with an internal computation of the code? UPDATE 3: If I redirect to /dev/null, I get the same behaviour as outputting directly to the screen, instead of the behaviour seen when redirecting to a file.

    Read the article

  • initWithContentsOfURL seems to have issues with "long" URLs

    - by samsam
    Hi there. I'm facing a rather strange issue when trying to load data from an XML web service. The web service allows me to pass separated identifiers within the URL request. It is therefore possible for the URL to become rather long (240 characters). If I open said URL in Firefox, the response arrives as planned; if I execute the following code, xmlData remains empty. NSString *baseUrl = [[NSString alloc] initWithString:[[[[kSearchDateTimeRequestTV stringByReplacingOccurrencesOfString:@"{LANG}" withString:appLanguageCode] stringByReplacingOccurrencesOfString:@"{IDENTIFIERS}" withString:myIdentifiers] stringByReplacingOccurrencesOfString:@"{STARTTICKS}" withString:[NSString stringWithFormat:@"%@", [[startTime getTicks] descriptionWithLocale:nil]]] stringByReplacingOccurrencesOfString:@"{ENDTICKS}" withString:[NSString stringWithFormat:@"%@", [[endTime getTicks] descriptionWithLocale:nil]]]]; NSLog(baseUrl); //looks good; if opened in a browser, the return value is ok urlRequest = [NSURL URLWithString:baseUrl]; NSString *xmlData = [NSString stringWithContentsOfURL:urlRequest encoding:NSUTF8StringEncoding error:&err]; //err is nil, therefore I guess everything must be ok... :( NSLog(xmlData); //nothing... Is there any sort of URL-length restriction? Has the same problem happened to anyone of you as well? What's a good workaround? Thanks for your help, sam
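
    One thing worth ruling out before suspecting a length limit (a hedged guess, not a confirmed diagnosis): [NSURL URLWithString:] returns nil if the string contains characters that still need percent-escaping, and a nil URL quietly yields an empty result. A minimal sketch using the iOS 3/4-era APIs and the same variable names as above:

        // Sketch: escape the string first and check that the NSURL was actually built.
        NSString *escaped = [baseUrl stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
        NSURL *url = [NSURL URLWithString:escaped];
        if (url == nil) {
            NSLog(@"NSURL could not be created from: %@", escaped);
        }
        NSError *err = nil;
        NSString *xmlData = [NSString stringWithContentsOfURL:url
                                                     encoding:NSUTF8StringEncoding
                                                        error:&err];
        NSLog(@"length: %lu, error: %@", (unsigned long)[xmlData length], err);

    If baseUrl is already partially percent-escaped, escaping it again will double-escape the % signs, so in that case escape only the identifier portion before substituting it into the template.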

    Read the article
