Search Results

Search found 8815 results on 353 pages for 'task scheduler'.


  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information is available in TFS, and more. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
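    For illustration, a minimal sketch of such a historical query following the grammar above (the chosen fields and date are hypothetical, and the exact reference names depend on your process template):

        select [System.Id], [System.Title], [System.State]
        from WorkItems
        where [System.WorkItemType] = 'Requirement'
        asof '01/01/2010'

    Saved as a *.wiql file and imported, this returns the Requirements as they stood on that date rather than as they are today.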
    Figure: Saving Queries locally can be useful

    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate a Work Item with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. A better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that enables the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets; the Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states.

    The second is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful in the circumstance where you want to track a test and the test results of a Unit Test designed to prove the existence of a Bug and then guard against its re-occurrence.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it needs to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member.

      - Business Analysts manage Requirements and Change Requests
      - Programmers manage Tasks and check in against Change Requests and Bugs
      - Testers manage Bugs and Test Cases
      - Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • PostSharp deployment to build machine - use Setup installation, not NuGet package.

    - by Michael Freidgeim
    PostSharp has well documented different methods of installation. I've chosen installing NuGet packages, because according to "Deploying PostSharp into a Source Repository", NuGet is the easiest way to add PostSharp to a project without installing the product on every machine. However, it didn't work well for me. I've added the PostSharp NuGet package to one project in the solution. When I wanted to use PostSharp in another project, the Visual Studio tab showed that PostSharp is not enabled for this project. I've added the NuGet package to the new project, which installed a new version of the package in a new Packages subfolder. When I wanted to reference PostSharp from a third project, I ended up with another version of PostSharp installed. Additionally, multiple versions of Diagnostics were created. It definitely causes confusion and errors.

    We experienced more problems on the build server. According to "Using PostSharp on a Build Server": "If you chose to deploy PostSharp in the source repository, it does not need to be installed specifically on the build server." It didn't work on our build server. I kept getting the errors "The "AddIns" parameter is not supported by the "PostSharp21" task." and "The "DisableSystemBindingPolicies" parameter is not supported by the "PostSharp21" task."

    From my experience, the only way to have the latest version of PostSharp working on the build server is to install it using Setup, as described in "Deploying PostSharp with the Setup Program".

    Gael acknowledged the issues with possible version conflicts; see http://support.sharpcrafters.com/discussions/problems/388-the-postsharp21-task-failed-unexpectedly

    Read the article

  • Top Fusion Apps User Experience Guidelines & Patterns That Every Apps Developer Should Know About

    - by ultan o'broin
    We've announced the availability of the Oracle Fusion Applications user experience design patterns. Developers can get going on these using the Design Filter Tool (or DeFT) to select the best pattern for their context of use. As you drill into the patterns you will discover more guidelines from the Applications User Experience team, and some from the Rich Client User Interface team too, that are also leveraged in Fusion Apps. All are based on the Oracle Application Development Framework components. To accelerate your Fusion apps development and tailoring, here's some inside insight into the really important patterns and guidelines that every apps developer needs to know about. They start at a broad Fusion Apps information architecture level and then become more granular at the page and task level.

    Information Architecture: These guidelines explain how the UI of an Oracle Fusion application is constructed. This enables you to understand where the changes that you want to make fit into the overall application's information architecture. Begin with the UI Shell and Navigation guidelines, and then move on to page-level design using the Work Areas and Dashboards guidelines.

      - UI Shell Guideline
      - Navigation Guideline
      - Introduction to Work Areas Guideline
      - Dashboards Guideline

    Page Content: These patterns and guidelines cover the most common interactions used to complete tasks productively, beginning with the core interactions common across all pages, and then moving on to task-specific ones.

    Core Across All Pages
      - Icons Guideline
      - Page Actions Guideline
      - Save Model Guideline
      - Messages Pattern Set
      - Embedded Help Pattern Set

    Task Dependent
      - Add Existing Object Pattern Set
      - Browse Pattern Set
      - Create Pattern Set
      - Detail on Demand Pattern Set
      - Editing Objects Pattern Set
      - Guided Processes Pattern Set
      - Hierarchies Pattern Set
      - Information Entry Forms Pattern Set
      - Record Navigation Pattern Set
      - Transactional Search and Results Pattern Group

    Now, armed with all this great insider information, get developing some great-looking, highly usable apps! Let me know in the comments how things go!

    Read the article

  • Clients with multiple proxy and multithreading callbacks

    - by enzom83
    I created a sessionful web service using WCF, in particular using the NetTcpBinding binding. In addition to methods to initiate and terminate a session, other methods allow the client to send one or more tasks to be performed (the results are returned via callback, so the service is duplex), and they also allow you to know the status of the service. Assuming you activate the same service on multiple endpoints, and assuming that the client knows these endpoints (for example, it could maintain a List of endpoints), the client should connect with one or more replicas of the same service. The client periodically updates the status of the service, so when it needs to perform a new task (the task is submitted by the user via the UI), it selects the service currently least loaded and sends the task to it. Periodically, the client also initiates a maintenance procedure in order to disconnect from one or more overloaded services and connect to new services. I created a client proxy using the svcutil tool. I want each proxy to be usable simultaneously by different threads; for example, in addition to the thread that submits the tasks using a proxy, there are also the following two threads which act periodically: a thread that periodically sends a request to the service in order to obtain the updated state; a thread that periodically selects a proxy to close and instantiates a new proxy to replace the closed one. To achieve these objectives, is it sufficient to create an array of proxies and manage their opening and closing in separate threads? I think I read that proxy method calls are thread safe, so I would not need to take a lock before requesting updates from the service. However, when the maintenance procedure (which runs on its own thread) decides to close a proxy, should I take a lock? Finally, each proxy is also associated with an object that implements the callback interface for the service: are the callbacks (invoked on the client) executed on different threads on the client? I would like to wrap the management of the proxies in one or more classes so that it can then easily be managed within a WPF application.
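    For what it's worth, a minimal sketch of the kind of per-proxy wrapper being described (the proxy type SchedulerClient, its Submit operation and the TaskData type are hypothetical stand-ins for the svcutil-generated artifacts, not from the actual project):

        // Guards one proxy so that the submitting thread and the maintenance
        // thread never race on open/close state.
        public class ProxySlot
        {
            private readonly object _gate = new object();
            private readonly SchedulerClient _proxy;   // hypothetical generated duplex proxy

            public ProxySlot(string endpoint, object callback)
            {
                _proxy = new SchedulerClient(
                    new System.ServiceModel.InstanceContext(callback), "netTcp", endpoint);
            }

            public void SubmitTask(TaskData task)      // TaskData is hypothetical
            {
                lock (_gate)
                {
                    if (_proxy.State == System.ServiceModel.CommunicationState.Opened)
                        _proxy.Submit(task);           // hypothetical service operation
                }
            }

            // Called by the maintenance thread to retire this slot.
            public void Close()
            {
                lock (_gate)
                {
                    try { _proxy.Close(); }
                    catch { _proxy.Abort(); }          // Close throws on a faulted channel
                }
            }
        }

    Note the lock: concurrent calls on one WCF proxy are allowed, but close-and-replace is a read-modify-write on shared state and does need synchronisation. Duplex callbacks arrive on worker threads unless the callback class keeps UseSynchronizationContext = true and the channel was opened on the UI thread, so a WPF app should marshal UI updates via the Dispatcher either way.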

    Read the article

  • Mobile App Notifications in the Enterprise Space: UX Considerations

    - by ultan o'broin
    Here is a really super website of UX patterns for Android: Android Patterns. I was particularly interested in the event-driven notification patterns (aka status bar notifications to developers). Android - unlike iOS (i.e., the iPhone) - offers a superior centralized notifications system for users.

    (Figure copyright Android Patterns)

    Research in the enterprise applications space shows that users on the go prefer this approach, as:

      - Users can manage their notification alerts centrally, across all media, apps and device activity, and decide the order in which to deal with them, and when.
      - Notifications, unlike messages in a dialog or information messages in the UI, do not block a task flow (and we need to keep task completion to under three minutes). See the Anti-Patterns slideshare presentation on this blocking point too. These notifications must never interrupt a task flow by launching an activity from the background. Instead, the user can launch an activity from the notification.

    What users do need is the ability to filter this centralized approach, and to personalize the experience: which notifications are added, what the reminder is, the ability to turn them off, and so on. A related point concerning notifications: when they are used to provide users with a record of actions, you can lighten up on the lengthy confirmation messages that pop up (toasts in the Android world) when transactions or actions are sent for processing or into a workflow. Pretty much all the confirmation needs to say is that the action was successful, along with key data such as dollar amount, customer name, or whatever. I am a user of Android (Nexus S), BlackBerry (Curve), and iOS devices (iPhone 3GS and 4). In my opinion, the best notifications user experience for the enterprise user is offered by Android. BlackBerry is good, but not as polished and way clunkier than Android's. What you get on the iPhone, out of the box, is useless in the enterprise. Technorati Tags: Android, iPhone, BlackBerry, messages, usability, user assistance, user experience, Oracle, patterns, notifications, alerts

    Read the article

  • How can you achieve and maintain flow while pair programming?

    - by bizso09
    Flow is a concept introduced by Mihaly Csikszentmihalyi; in short, it means to get into the "zone". You feel immersed in your task, focused; the task can be difficult but challenging at the same time. When people achieve flow their productivity shoots up. Programming requires a great deal of mental focus because we often need to juggle several things in our minds at once. Many like to work in a quiet environment where they can direct their full attention to the task. If they are interrupted, it may take several minutes or even hours to get back into flow. I understand there's a practice in agile development and extreme programming called pair programming. The whole software development team sits in one room so that communication is seamless, and you write code together with your pair, because this way you get instant code reviews and fewer bugs get through. I've always had problems achieving flow while doing pair programming because of the constant interruptions. I'm thinking deeply about an issue, then all of a sudden someone asks me a question from another pair. My train of thought is lost. How can you achieve and maintain flow while pair programming?

    Read the article

  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, its primary contribution being the componentization of your applications and, implicitly, the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well, one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings".

    What's an adaptive binding? Well, consider a simple use case. I have several screens within my application which display tabular data and are all essentially identical; the only difference is that they happen to be based on different data collections (View Objects, Bean collections, whatever) and have a different set of columns. Apart from that, however, they happen to be identical: same toolbar, same key functions and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on, you say, great idea; however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file, and:

      - I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the taskflow
      - If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle

    Ah, joy! I reply, no need to panic, you can just use adaptive bindings.

    Defining an Adaptive Binding

    It's easiest to explain with a simple before and after use case. Here's a basic pageDef definition for our familiar Departments table.

        <executables>
          <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="DepartmentsView1Iterator"/>
        </executables>
        <bindings>
          <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">
            <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">
              <AttrNames>
                <Item Value="DepartmentId"/>
                <Item Value="DepartmentName"/>
                <Item Value="ManagerId"/>
                <Item Value="LocationId"/>
              </AttrNames>
            </nodeDefinition>
          </tree>
        </bindings>

    Here's the adaptive version:

        <executables>
          <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="TableSourceIterator"/>
        </executables>
        <bindings>
          <tree IterBinding="TableSourceIterator" id="GenericView">
            <nodeDefinition Name="GenericViewNode"/>
          </tree>
        </bindings>

    You'll notice three changes here.

      - Most importantly, you'll see that the hard-coded View Object name that formerly populated the iterator Binds attribute is gone and has been replaced by an expression (${pageFlowScope.voName}). This, of course, is key; you can see that we can pass a parameter to the task flow, telling it exactly what VO to instantiate to populate this table!
      - I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable
      - The tree binding itself has been simplified and the node definition is now empty. What this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat!
    (Kudos to Eugene Fedorenko at this point, who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld this year.)

    Using the adaptive binding in the UI

    Now that we have a parametrized binding, we have to make changes in the UI as well, first of all to reflect the new ID that we've assigned to the binding (of course), but also to change the column list from being a fixed, known list to being a generic, metadata-driven set:

        <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"
                  fetchSize="#{bindings.GenericView.rangeSize}"
                  emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"
                  var="row" rowBandingInterval="0"
                  selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"
                  selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"
                  rowSelection="single" id="t1">
          <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">
            <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"
                       sortProperty="#{def.name}" id="c1">
              <af:outputText value="#{row[def.name]}" id="ot1"/>
            </af:column>
          </af:forEach>
        </af:table>

    Of course you are not constrained to a simple read-only table here. It's a normal tree binding and iterator that you are using behind the scenes, so you can do all the usual things, but you can see the value of using ADF BC as the back-end model, as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...).

    One Final Twist

    To finish on a high note I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator; see if you can notice the subtle change?

        <iterator Binds="${pageFlowScope.voName}" DataControl="${pageFlowScope.dataControlName}" RangeSize="25"
                  id="TableSourceIterator"/>

    Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. So if you have some really common patterns within your app, you can wrap them up and reuse them across multiple developments without having to dictate data control names or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs. If you keep any code in the task flow generic in that way, you can deal with data from multiple types of data controls, not just one flavour. Enjoy!

    Read the article

  • How to learn to deliver quality software designs when working on a tight deadline?

    - by chester89
    I have read many books about how to design great software, but I kind of struggle to come up with good design decisions when it comes to business apps, especially when the timeframe is tough. In the company I currently work for, the following situation happens all the time: my team lead tells me that there's a task to do, I call some guy or girl from the business who tells me exactly what it is they want, and then I start coding. The task always fits into some existing application (we do only web apps or web services); usually its purpose is to pull data from one datasource and put it into another one, with some business logic attached in the process. I start coding, and then, after spending some time on a problem, my code doesn't work as expected - either because of a technical mistake or my lack of knowledge of the domain. The business is ringing me 2-3 times a day to hurry me up. I ask my team lead to help; he comes up, sees my code and goes 'What's this?'. Then he throws away about half of my code, including all the design decisions I made, writes 2-3 methods that do the job (each of them usually 200-300 lines long or more, by the way), and the task is complete; the code works as it should have. The guy is smarter than me, obviously, and I'm aware of that. My goal is to be a better software developer, and that means writing better code, not finishing the job quicker with some crappy code. And the thing is, when I have enough time to tackle a problem, I can come up with a design that is good (in my opinion, of course), but I fall short when I'm on a tight deadline. What should I do? I am fully aware that it's a rather vague explanation, but please bear with me.

    Read the article

  • Architecture for dashboard showing aggregated stats [on hold]

    - by soulnafein
    I'd like to know what common architectural patterns exist for the following problem. Web application A has information on sales, users, responsiveness scores, etc. Some of this information is computationally intensive to produce and/or has complex business logic behind it (e.g. the responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and reports on data from web application A. For writing I'm planning to use a RESTful API, e.g. create a new entity, update an entity, etc. In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in Redis. Some data should update more often, e.g. every 10 minutes. I can think of 3 ways of doing this:

      1. A scheduled task in app B that connects to an API of app A that provides some aggregated data. Then app B stores it in Redis and uses that to render pages. Cons: it makes complex calculations within a web request and requires lots of work (e.g. an API server and client, storing, etc.); pros: the business logic still lives in app A.
      2. A scheduled task in app A that aggregates data in a non-web process and stores it directly in Redis to be accessed by app B.
      3. A scheduled task in app A that aggregates data in a non-web process and uses an API in app B to save it.

    I'd like to know if there is a well-known architectural solution to this type of problem, and if not, what other pros/cons there are for the solutions I've suggested.
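    As a point of reference, here is a minimal sketch of option 2 in C# (the key naming scheme, the aggregation call and the use of the StackExchange.Redis client are illustrative assumptions, not part of the original question):

        using System;
        using StackExchange.Redis;

        class MonthlyAggregator
        {
            static void Main()
            {
                // Connect once and reuse; the multiplexer is thread safe.
                var redis = ConnectionMultiplexer.Connect("localhost");
                IDatabase db = redis.GetDatabase();

                // Hypothetical: compute this month's aggregate inside app A,
                // where the business logic already lives.
                string json = ComputeResponsivenessAggregate(DateTime.UtcNow);

                // One key per month, e.g. "stats:responsiveness:2013-06",
                // refreshed by the scheduler every 10 minutes.
                string key = "stats:responsiveness:" + DateTime.UtcNow.ToString("yyyy-MM");
                db.StringSet(key, json, expiry: TimeSpan.FromDays(400));
            }

            static string ComputeResponsivenessAggregate(DateTime month)
            {
                // Placeholder for app A's real calculation.
                return "{\"score\": 0.97}";
            }
        }

    App B then only reads these keys, so the heavy calculation never happens inside one of its web requests.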

    Read the article

  • EC2 instance suddenly refusing SSH connections and won't respond to ping

    - by Chris
    My instance was running fine and this morning I was able to access a Ruby on Rails app hosted on it. An hour later I suddenly wasn't able to access my site, my SSH connection attempts were refused and the server wasn't even responding to ping. I didn't change anything on my system during that hour and reboots aren't fixing it. I've never had any problems connecting or pinging the system before. Can someone please help? This is on my production system! OS: CentOS 5 AMI ID: ami-10b55379 Type: m1.small [] ~% ssh -v *****@meeteor.com OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to meeteor.com [184.73.235.191] port 22. debug1: connect to address 184.73.235.191 port 22: Connection refused ssh: connect to host meeteor.com port 22: Connection refused [] ~% ping meeteor.com PING meeteor.com (184.73.235.191): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 Request timeout for icmp_seq 2 ^C --- meeteor.com ping statistics --- 4 packets transmitted, 0 packets received, 100.0% packet loss [] ~% ========= System Log ========= Restarting system. Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007 BIOS-provided physical RAM map: Xen: 0000000000000000 - 000000006a400000 (usable) 980MB HIGHMEM available. 727MB LOWMEM available. NX (Execute Disable) protection: active IRQ lockup detection disabled Built 1 zonelists Kernel command line: root=/dev/sda1 ro 4 Enabling fast FPU save and restore... done. Enabling unmasked SIMD FPU exception support... done. Initializing CPU#0 PID hash table entries: 4096 (order: 12, 65536 bytes) Xen reported: 2599.998 MHz processor. Dentry cache hash table entries: 131072 (order: 7, 524288 bytes) Inode-cache hash table entries: 65536 (order: 6, 262144 bytes) Software IO TLB disabled vmalloc area: ee000000-f53fe000, maxmem 2d7fe000 Memory: 1718700k/1748992k available (1958k kernel code, 20948k reserved, 620k data, 144k init, 1003528k highmem) Checking if this processor honours the WP bit even in supervisor mode... Ok. Calibrating delay using timer specific routine.. 5202.30 BogoMIPS (lpj=26011526) Mount-cache hash table entries: 512 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 1024K (64 bytes/line) Checking 'hlt' instruction... OK. Brought up 1 CPUs migration_cost=0 Grant table initialized NET: Registered protocol family 16 Brought up 1 CPUs xen_mem: Initialising balloon driver. highmem bounce pool size: 64 pages VFS: Disk quotas dquot_6.5.1 Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) Initializing Cryptographic API io scheduler noop registered io scheduler anticipatory registered (default) io scheduler deadline registered io scheduler cfq registered i8042.c: No controller found. RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize Xen virtual console successfully installed as tty1 Event-channel device installed. netfront: Initialising virtual ethernet driver. 
mice: PS/2 mouse device common for all mice md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27 md: bitmap version 4.39 NET: Registered protocol family 2 Registering block device major 8 IP route cache hash table entries: 65536 (order: 6, 262144 bytes) TCP established hash table entries: 262144 (order: 9, 2097152 bytes) TCP bind hash table entries: 65536 (order: 7, 524288 bytes) TCP: Hash tables configured (established 262144 bind 65536) TCP reno registered TCP bic registered NET: Registered protocol family 1 NET: Registered protocol family 17 NET: Registered protocol family 15 Using IPI No-Shortcut mode md: Autodetecting RAID arrays. md: autorun ... md: ... autorun DONE. kjournald starting. Commit interval 5 seconds EXT3-fs: mounted filesystem with ordered data mode. VFS: Mounted root (ext3 filesystem) readonly. Freeing unused kernel memory: 144k freed *************************************************************** *************************************************************** ** WARNING: Currently emulating unsupported memory accesses ** ** in /lib/tls glibc libraries. The emulation is ** ** slow. To ensure full performance you should ** ** install a 'xen-friendly' (nosegneg) version of ** ** the library, or disable tls support by executing ** ** the following as root: ** ** mv /lib/tls /lib/tls.disabled ** ** Offending process: init (pid=1) ** *************************************************************** *************************************************************** Pausing... 5Pausing... 4Pausing... 3Pausing... 2Pausing... 1Continuing... INIT: version 2.86 booting Welcome to CentOS release 5.4 (Final) Press 'I' to enter interactive startup. Setting clock : Fri Oct 1 14:35:26 EDT 2010 [ OK ] Starting udev: [ OK ] Setting hostname localhost.localdomain: [ OK ] No devices found Setting up Logical Volume Management: [ OK ] Checking filesystems Checking all file systems. [/sbin/fsck.ext3 (1) -- /] fsck.ext3 -a /dev/sda1 /dev/sda1: clean, 275424/1310720 files, 1161123/2621440 blocks [ OK ] Remounting root filesystem in read-write mode: [ OK ] Mounting local filesystems: [ OK ] Enabling local filesystem quotas: [ OK ] Enabling /etc/fstab swaps: [ OK ] INIT: Entering runlevel: 4 Entering non-interactive startup Starting background readahead: [ OK ] Applying ip6tables firewall rules: modprobe: FATAL: Module ip6_tables not found. ip6tables-restore v1.3.5: ip6tables-restore: unable to initializetable 'filter' Error occurred at line: 3 Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information. [FAILED] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_ns [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: Determining IP information for eth0... done. [ OK ] Starting auditd: [FAILED] Starting irqbalance: [ OK ] Starting portmap: [ OK ] FATAL: Module lockd not found. Starting NFS statd: [ OK ] Starting RPC idmapd: FATAL: Module sunrpc not found. FATAL: Error running install command for sunrpc Error: RPC MTAB does not exist. Starting system message bus: [ OK ] Starting Bluetooth services:[ OK ] [ OK ] Can't open RFCOMM control socket: Address family not supported by protocol Mounting other filesystems: [ OK ] Starting PC/SC smart card daemon (pcscd): [ OK ] Starting hidd: Can't open HIDP control socket: Address family not supported by protocol [FAILED] Starting autofs: Starting automount: automount: test mount forbidden or incorrect kernel protocol version, kernel protocol version 5.00 or above required. 
[FAILED] [FAILED] Starting sshd: [ OK ] Starting cups: [ OK ] Starting sendmail: [ OK ] Starting sm-client: [ OK ] Starting console mouse services: no console device found[FAILED] Starting crond: [ OK ] Starting xfs: [ OK ] Starting anacron: [ OK ] Starting atd: [ OK ] % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 390 100 390 0 0 58130 0 --:--:-- --:--:-- --:--:-- 58130 100 390 100 390 0 0 56984 0 --:--:-- --:--:-- --:--:-- 0 Starting yum-updatesd: [ OK ] Starting Avahi daemon... [ OK ] Starting HAL daemon: [ OK ] Starting OSSEC: [ OK ] Starting smartd: [ OK ] c CentOS release 5.4 (Final) Kernel 2.6.16-xenU on an i686 domU-12-31-39-00-C4-97 login: INIT: Id "2" respawning too fast: disabled for 5 minutes INIT: Id "3" respawning too fast: disabled for 5 minutes INIT: Id "4" respawning too fast: disabled for 5 minutes INIT: Id "5" respawning too fast: disabled for 5 minutes INIT: Id "6" respawning too fast: disabled for 5 minutes

    Read the article

  • Return Json causes save file dialog in asp.net mvc

    - by Eran
    Hi, I'm integrating jQuery FullCalendar into my application. Here is the code I'm using in index.aspx:

        <script type="text/javascript">
            $(document).ready(function() {
                $('#calendar').fullCalendar({
                    events: "/Scheduler/CalendarData"
                });
            });
        </script>
        <div id="calendar">
        </div>

    Here is the code for Scheduler/CalendarData:

        public ActionResult CalendarData()
        {
            IList<CalendarDTO> tasksList = new List<CalendarDTO>();
            tasksList.Add(new CalendarDTO
            {
                id = 1,
                title = "Google search",
                start = ToUnixTimespan(DateTime.Now),
                end = ToUnixTimespan(DateTime.Now.AddHours(4)),
                url = "www.google.com"
            });
            tasksList.Add(new CalendarDTO
            {
                id = 1,
                title = "Bing search",
                start = ToUnixTimespan(DateTime.Now.AddDays(1)),
                end = ToUnixTimespan(DateTime.Now.AddDays(1).AddHours(4)),
                url = "www.bing.com"
            });
            return Json(tasksList, JsonRequestBehavior.AllowGet);
        }

        private long ToUnixTimespan(DateTime date)
        {
            TimeSpan tspan = date.ToUniversalTime().Subtract(
                new DateTime(1970, 1, 1, 0, 0, 0));
            return (long)Math.Truncate(tspan.TotalSeconds);
        }

        public ActionResult Index()
        {
            return View("Index");
        }

    I also have the following code inside the head tag in site.master:

        <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server" />
        <link href="<%= Url.Content("~/Content/jquery-ui-1.7.2.custom.css") %>" rel="stylesheet" type="text/css" />
        <link href="~Perspectiva/Content/Site.css" rel="stylesheet" type="text/css" />
        <link href="~Perspectiva/Content/fullcalendar.css" rel="stylesheet" type="text/css" />
        <script src="~Perspectiva/Scripts/jquery-1.4.2.js" type="text/javascript"></script>
        <script src="~Perspectiva/Scripts/fullcalendar.js" type="text/javascript"></script>
        <script src="/Scripts/MicrosoftAjax.debug.js" type="text/javascript"></script>
        <script src="/Scripts/MicrosoftMvcAjax.debug.js" type="text/javascript"></script>

    Everything I did was pretty much copied from http://szahariev.blogspot.com/2009/08/jquery-fullcalendar-and-aspnet-mvc.html

    When navigating to /scheduler/calendardata I get a prompt to save the JSON data, whose contents are exactly what I created in the CalendarData function. What do I need to do in order to render the page correctly? Thanks in advance, Eran

    Read the article

  • jquery Fullcalendar : dynamic events function asp.net mvc

    - by Eran
    Hi, I'm integrating FullCalendar into my app. Consider a manager interface where he can select an employee and then view that employee's calendar. Now basically I'm using the following jQuery code in my view:

        <script type="text/javascript">
            $(document).ready(function() {
                $("#calendar").fullCalendar({
                    defaultView: 'agendaWeek',
                    isRTL: true,
                    axisFormat: 'HH:mm',
                    editable: true,
                    events: "/Scheduler/CalendarData"
                });
            });
        </script>

    Now I would like the controller function assigned to events to retrieve the data for the specific user selected by the manager:

        events: "/Scheduler/CalendarData/<current_user_name>"

    Is there any way to retrieve the selected employee's user name from the view (or rather pass it to the view from the controller) and then pass it on to the bound events function? I hope I was clear enough... Thanks in advance, Eran

    Read the article

  • WCF Duplex net.tcp issues on win7

    - by Tom
    We have a WCF service with multiple clients, used to schedule operations amongst the clients. It worked great on XP. After moving to Win7, I can only connect a client to a server on the same machine. At this point, I'm thinking it's something to do with IPv6, but I'm stumped as to how to proceed. A client trying to connect to a remote server gets the following exception:

        System.ServiceModel.EndpointNotFoundException: Could not connect to net.tcp://10.7.11.14:18297/zetec/Service/SchedulerService/Scheduler. The connection attempt lasted for a time span of 00:00:21.0042014. TCP error code 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 10.7.11.14:18297. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 10.7.11.14:18297

    The service is configured like so:

        <system.serviceModel>
          <services>
            <service name="SchedulerService" behaviorConfiguration="SchedulerServiceBehavior">
              <host>
                <baseAddresses>
                  <add baseAddress="net.tcp://localhost/zetec/Service/SchedulerService"/>
                </baseAddresses>
              </host>
              <endpoint address="net.tcp://localhost:18297/zetec/Service/SchedulerService/Scheduler"
                        binding="netTcpBinding" bindingConfiguration="ConfigBindingNetTcp"
                        contract="IScheduler" />
              <endpoint address="net.tcp://localhost:18297/zetec/Service/SchedulerService/Scheduler"
                        binding="netTcpBinding" bindingConfiguration="ConfigBindingNetTcp"
                        contract="IProcessingNodeControl" />
            </service>
          </services>
          <bindings>
            <netTcpBinding>
              <binding name="ConfigBindingNetTcp" portSharingEnabled="True">
                <security mode="None"/>
              </binding>
            </netTcpBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior name="SchedulerServiceBehavior">
                <serviceDebug includeExceptionDetailInFaults="true" />
                <serviceThrottling maxConcurrentSessions="100"/>
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    I've checked my firewall about a dozen times, but I guess there could be something I'm missing. I've tried disabling the Windows firewall. I tried changing localhost to my IPv4 address to try to keep away from IPv6, and I've tried removing any anti-IPv6 code.

    Read the article

  • Cannot integrate Gallio MBUnit with Team City

    - by Bernard Larouche
    I have been trying to get my MBUnit test suite to work on TeamCity for many days now without any success. My solution builds no problem. The problem is with my tests. After googling for Gallio integration with TeamCity I tried many ways to make this thing work, and I think I am close but need help. I have included the Gallio bin directory in my repository and also on my TC server. Here is my build runner set up in TeamCity:

        Build runner : MSBuild
        Build file path : Myproject.msbuild
        Targets : RebuildSolution RunTests

    Here is the Myproject.msbuild file I created and included in the source control trunk directory:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <!-- This is needed by MSBuild to locate the Gallio task -->
          <UsingTask AssemblyFile="C:\Gallio\bin\Gallio.MSBuildTasks.dll" TaskName="Gallio" />
          <!-- Specify the tests assemblies -->
          <ItemGroup>
            <TestAssemblies Include="C:\_CBL\CBL\CoderForTraders\Source\trunk\UnitTest\DomainModel.Tests\bin\Debug\CBL.CoderForTraders.DomainModel.Tests.dll" />
          </ItemGroup>
          <Target Name="RunTests">
            <Gallio IgnoreFailures="false" Assemblies="@(TestAssemblies)"
                    RunnerExtensions="TeamCityExtension,Gallio.TeamCityIntegration">
              <!-- This tells MSBuild to store the output value of the task's ExitCode property
                   into the project's ExitCode property -->
              <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
            </Gallio>
            <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
          </Target>
          <Target Name="RebuildSolution">
            <Message Text="Starting to Build"/>
            <MSBuild Projects="CoderForTraders.sln" Properties="Configuration=Debug" Targets="Rebuild" />
          </Target>
        </Project>

    Here are the errors displayed by TeamCity:

        error MSB4064: The "Assemblies" parameter is not supported by the "Gallio" task. Verify the parameter exists on the task, and it is a settable public instance property
        error MSB4063: The "Gallio" task could not be initialized with its input parameters.

    Thanks for your help

    Read the article

  • How do I copy all referenced jars of an eclipse project using ant4eclipse?

    - by interstellar
    I've tried like this (link):

        <taskdef resource="net/sf/antcontrib/antlib.xml" />
        <taskdef resource="net/sf/ant4eclipse/antlib.xml" />

        <target name="copy_jars">
          <getEclipseClasspath workspace="${basedir}/.." projectname="MyProject"
                               property="classpath" relative="false"
                               runtime="true" pathseparator="#" />
          <!-- iterate over all classpath entries -->
          <foreach list="${classpath}" delimiter="#" target="copy_jar_file" param="classpath.entry" />
        </target>

        <target name="copy_jar_file">
          <!-- check if current is a .jar-file ... -->
          <if>
            <isfileselected file="${classpath.entry}">
              <filename name="**/*.jar" />
            </isfileselected>
            <then>
              <!-- copy the jar file to a destination directory -->
              <copy file="${classpath.entry}" tofile="${dest.dir}"/>
            </then>
          </if>
        </target>

    But I get the exception:

        [getEclipseClasspath] net.sf.ant4eclipse.model.FileParserException: Could not parse plugin project 'E:\...\MyProject' since it contains neither a Bundle-Manifest nor a plugin.xml!
        [getEclipseClasspath] at net.sf.ant4eclipse.model.pdesupport.plugin.PluginDescriptorParser.parseEclipseProject(Unknown Source)
        [getEclipseClasspath] at net.sf.ant4eclipse.model.pdesupport.plugin.PluginProjectRoleIdentifier.applyRole(Unknown Source)
        [getEclipseClasspath] at net.sf.ant4eclipse.model.roles.RoleIdentifierRegistry.applyRoles(Unknown Source)
        [getEclipseClasspath] at net.sf.ant4eclipse.tools.ProjectFactory.readProjectFromWorkspace(Unknown Source)
        [getEclipseClasspath] at net.sf.ant4eclipse.tools.resolver.AbstractClasspathResolver.resolveEclipseClasspathEntry(Unknown Source)
        [getEclipseClasspath] at net.sf.ant4eclipse.tools.resolver.AbstractClasspathResolver.resolveProjectClasspath(Unknown Source)
        [getEclipseClasspath] at net.sf.ant4eclipse.tools.resolver.ProjectClasspathResolver.resolveProjectClasspath(Unknown Source)
        [getEclipseClasspath] at net.sf.ant4eclipse.ant.task.project.GetEclipseClassPathTask.resolvePath(Unknown Source)
        [getEclipseClasspath] at net.sf.ant4eclipse.ant.task.project.AbstractGetProjectPathTask.execute(Unknown Source)
        [getEclipseClasspath] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
        [getEclipseClasspath] at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
        [getEclipseClasspath] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        [getEclipseClasspath] at java.lang.reflect.Method.invoke(Method.java:597)
        [getEclipseClasspath] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105)
        [getEclipseClasspath] at org.apache.tools.ant.Task.perform(Task.java:348)
        [getEclipseClasspath] at org.apache.tools.ant.Target.execute(Target.java:357)
        [getEclipseClasspath] at org.apache.tools.ant.Target.performTasks(Target.java:385)
        [getEclipseClasspath] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1329)
        [getEclipseClasspath] at org.apache.tools.ant.Project.executeTarget(Project.java:1298)
        [getEclipseClasspath] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
        [getEclipseClasspath] at org.eclipse.ant.internal.ui.antsupport.EclipseDefaultExecutor.executeTargets(EclipseDefaultExecutor.java:32)
        [getEclipseClasspath] at org.apache.tools.ant.Project.executeTargets(Project.java:1181)
        [getEclipseClasspath] at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:423)
        [getEclipseClasspath] at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:137)
        BUILD FAILED
        E:\...\build.xml:132: Exception whilst resolving the classpath of project MyProject! Reason: Could not parse plugin project 'E:\...\MyProject' since it contains neither a Bundle-Manifest nor a plugin.xml!

    I want to copy just the jars, not the referenced projects. Is there a way to parametrize the getEclipseClasspath task so that it only gets the jars, not the projects?

    Read the article

  • Converting to async/await using the async targeting package

    - by e4rthdog
    I have this:

        private void BtnCheckClick(object sender, EventArgs e)
        {
            var a = txtLot.Text;
            var b = cmbMcu.SelectedItem.ToString();
            var c = cmbLocn.SelectedItem.ToString();
            btnCheck.BackColor = Color.Red;
            var task = Task.Factory.StartNew(() => Dal.GetLotAvailabilityF41021(a, b, c));
            task.ContinueWith(t =>
            {
                btnCheck.BackColor = Color.Transparent;
                lblDescriptionValue.Text = t.Result.Description;
                lblItemCodeValue.Text = t.Result.Code;
                lblQuantityValue.Text = t.Result.AvailableQuantity.ToString();
            }, TaskScheduler.FromCurrentSynchronizationContext());
            LotFocus(true);
        }

    and I followed J. Skeet's advice to move to async/await in my .NET 4.0 app. I converted it into this:

        private async void BtnCheckClick(object sender, EventArgs e)
        {
            var a = txtLot.Text;
            var b = cmbMcu.SelectedItem.ToString();
            var c = cmbLocn.SelectedItem.ToString();
            btnCheck.BackColor = Color.Red;
            JDEItemLotAvailability itm = await Task.Factory.StartNew(() => Dal.GetLotAvailabilityF41021(a, b, c));
            btnCheck.BackColor = Color.Transparent;
            lblDescriptionValue.Text = itm.Description;
            lblItemCodeValue.Text = itm.Code;
            lblQuantityValue.Text = itm.AvailableQuantity.ToString();
            LotFocus(true);
        }

    It works fine. What confuses me is whether I could have done it without using Task, by just awaiting a method of my Dal directly. But that would mean modifying my Dal method, which is something I don't want. I would appreciate it if someone would explain to me in "plain" words whether what I did is optimal or not, and why. Thanks.

    P.S. My Dal method:

        public bool CheckLotExistF41021(string _lot, string _mcu, string _locn)
        {
            using (OleDbConnection con = new OleDbConnection(this.conString))
            {
                OleDbCommand cmd = new OleDbCommand();
                cmd.CommandText = "select lilotn from proddta.f41021 " +
                                  "where lilotn = ? and trim(limcu) = ? and lilocn= ?";
                cmd.Parameters.AddWithValue("@lotn", _lot);
                cmd.Parameters.AddWithValue("@mcu", _mcu);
                cmd.Parameters.AddWithValue("@locn", _locn);
                cmd.Connection = con;
                con.Open();
                OleDbDataReader rdr = cmd.ExecuteReader();
                bool _retval = rdr.HasRows;
                rdr.Close();
                con.Close();
                return _retval;
            }
        }
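    For comparison, a minimal sketch of the alternative being discussed, i.e. exposing an awaitable wrapper from the Dal itself (TaskEx.Run is the .NET 4.0 async targeting pack's stand-in for Task.Run; the method and type names mirror the question and are otherwise illustrative):

        // In the Dal: wrap the existing synchronous method; its body stays untouched.
        public Task<JDEItemLotAvailability> GetLotAvailabilityF41021Async(string lot, string mcu, string locn)
        {
            return TaskEx.Run(() => GetLotAvailabilityF41021(lot, mcu, locn));
        }

        // In the form: the click handler no longer mentions Task.Factory at all.
        JDEItemLotAvailability itm = await Dal.GetLotAvailabilityF41021Async(a, b, c);

    Either way the work still runs on a thread-pool thread; awaiting a Dal-level method just moves the wrapping out of the UI code rather than making the I/O truly asynchronous.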

    Read the article

  • drag and drop working funny when using variable draggables and droppables

    - by Lina
    Hi, I have some containers that contain some divs, like:

        <div id="container1">
            <div id="task1" onMouseOver="DragDrop("+1+");">&nbsp;</div>
            <div id="task2" onMouseOver="DragDrop("+2+");">&nbsp;</div>
            <div id="task3" onMouseOver="DragDrop("+3+");">&nbsp;</div>
            <div id="task4" onMouseOver="DragDrop("+4+");">&nbsp;</div>
        </div>
        <div id="container2">
            <div id="task5" onMouseOver="DragDrop("+5+");">&nbsp;</div>
            <div id="task6" onMouseOver="DragDrop("+6+");">&nbsp;</div>
        </div>
        <div id="container3">
            <div id="task7" onMouseOver="DragDrop("+7+");">&nbsp;</div>
            <div id="task8" onMouseOver="DragDrop("+8+");">&nbsp;</div>
            <div id="task9" onMouseOver="DragDrop("+9+");">&nbsp;</div>
            <div id="task10" onMouseOver="DragDrop("+10+");">&nbsp;</div>
        </div>

    I'm trying to drag tasks and drop them in one of the container divs, then reposition the dropped task so that it doesn't affect the other divs and doesn't fall outside one of them. To do that I'm using the onMouseOver event to call the following function:

        function DragDrop(id) {
            $("#task" + id).draggable({ revert: 'invalid' });
            for (var i = 0; i < nameList.length; i++) {
                $("#" + nameList[i]).droppable({
                    drop: function (ev, ui) {
                        var pos = $("#task" + id).position();
                        if (pos.left <= 0) {
                            $("#task" + id).css("left", "5px");
                        } else {
                            var day = parseInt(parseInt(pos.left) / 42);
                            var leftPos = (day * 42) + 5;
                            $("#task" + id).css("left", "" + leftPos + "px");
                        }
                    }
                });
            }
        }

    where:

        nameList = ["container1", "container2", "container3"];

    The drag is working fine, but the drop is not really; it's just a mess! Any help please?? When I hardcode the id and the container, it works beautifully, but as soon as I use id in drop it begins to work funny! Any suggestions??? Thanks a million in advance, Lina

    Read the article

  • Simple reminder for Android

    - by anta40
    I'm trying to make a simple timer.

        package com.anta40.reminder;

        import java.util.Timer;
        import java.util.TimerTask;

        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.RadioGroup;
        import android.widget.TabHost;
        import android.widget.TextView;
        import android.widget.RadioGroup.OnCheckedChangeListener;
        import android.widget.TabHost.TabSpec;

        public class Reminder extends Activity {
            public final int TIMER_DELAY = 1000;
            public final int TIMER_ONE_MINUTE = 60000;
            public final int TIMER_ONE_SECOND = 1000;
            Timer timer;
            TimerTask task;
            TextView tv;

            @Override
            public void onCreate(Bundle icicle) {
                super.onCreate(icicle);
                setContentView(R.layout.main);
                timer = new Timer();
                task = new TimerTask() {
                    @Override
                    public void run() {
                        tv = (TextView) findViewById(R.id.textview1);
                        tv.setText("BOOM!!!!");
                        tv.setVisibility(TextView.VISIBLE);
                        try {
                            this.wait(TIMER_DELAY);
                        } catch (InterruptedException e) {
                        }
                        tv.setVisibility(TextView.INVISIBLE);
                    }
                };

                TabHost tabs = (TabHost) findViewById(R.id.tabhost);
                tabs.setup();
                TabSpec spec = tabs.newTabSpec("tag1");
                spec.setContent(R.id.tab1);
                spec.setIndicator("Clock");
                tabs.addTab(spec);
                spec = tabs.newTabSpec("tag2");
                spec.setContent(R.id.tab2);
                spec.setIndicator("Settings");
                tabs.addTab(spec);
                tabs.setCurrentTab(0);

                RadioGroup rgroup = (RadioGroup) findViewById(R.id.rgroup);
                rgroup.setOnCheckedChangeListener(new OnCheckedChangeListener() {
                    @Override
                    public void onCheckedChanged(RadioGroup group, int checkedId) {
                        if (checkedId == R.id.om) {
                            timer.schedule(task, TIMER_DELAY, 3 * TIMER_ONE_SECOND);
                        } else if (checkedId == R.id.twm) {
                            timer.schedule(task, TIMER_DELAY, 6 * TIMER_ONE_SECOND);
                        } else if (checkedId == R.id.thm) {
                            timer.schedule(task, TIMER_DELAY, 9 * TIMER_ONE_SECOND);
                        }
                    }
                });
            }
        }

    Each time I click a radio button, the timer should start, right? But why doesn't it start?


  • calendar with multiple data sources at once?

    - by Sune
    I'm looking for a (free) solution for a booking calendar. Among many candidates I've found the jQuery FullCalendar, which has many cool features. One of the reasons I like it is that it supports drag-and-drop and resizing, which I also need. But I also need a day view where the columns Monday, Tuesday, Wednesday, Thursday, Friday... are replaced with, for example, Room1, Room2, Room3, Room4, Room5, Room6, Room7. I've found some examples with the functionality I'm after at:

        http://www.devexpress.com/Products/NET/Controls/ASP/Scheduler/resources.xml
        http://demos.telerik.com/aspnet-ajax/scheduler/examples/resourceavailability/defaultcs.aspx

    Others have raised the same question at http://code.google.com/p/jquery-week-calendar/issues/detail?id=35 but with no final solution. Does anyone have, or know of, good links for what I'm looking for?


  • transforming binary data using SSIS and SQL Server 2008

    - by Rick
    Hello All - I have a task to import, transform, and extract zipped binary files that contain both text data and embedded binary data. Within the files is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a single-threaded C# app that grabs all the files from the directory (currently about 13K files of varying sizes), extracts the data on a single thread, and inserts it into the database line by line. As you can imagine, this is a very slow process, and unacceptable. Several different parsing routines are used depending on the header record in the file, and there are potentially up to a million rows per file when all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on their content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem list:

    1. How do I iterate through a packet of data using SSIS? In the app, the file is decompressed and then parsed using streams and byte arrays, and routed to the required parsing routine based on the header data of each packet; there is bit swapping involved as well. Should I wrap the app code in one or more script tasks and let them do the custom processing?
    2. The data is separated by year, and the SQL Server tables are partitioned by year as well. I also need to be able to catch bad file data and most likely process it by hand.
    3. Should I simply load the zipped file into SQL Server as a BLOB and parse it with T-SQL? Would that be multi-threaded if done that way? I'm not sure how to do the parsing involved here in T-SQL. Which approach do you think would be faster?
    4. Potentially, the data that currently arrives as files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up?

    Processing these new files from the directories will become a daily task. I can manage the data once I get it into SQL Server; getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick


  • JPA entity design / cannot delete entity

    - by timaschew
    I thought what I want was simple, but I cannot find any solution to my problem. I'm using Play Framework 1.2.3, which uses Hibernate as its JPA provider, so I think the framework itself has nothing to do with the problem. I have some classes (I omit the non-relevant fields):

        public class User { ... }

        public class Task {
            public DataContainer dataContainer;
        }

        public class DataContainer {
            public Session session;
            public User user;
        }

        public class Session { ... }

    So I have an association from Task to DataContainer and from DataContainer to Session, and the DataContainer belongs to a User. The DataContainers may all share the same User, but the Session has to be different for each instance, and the DataContainer of a Task also has to be different for each instance. A DataContainer may have a Session or not (it's optional). I use only unidirectional associations; that should be sufficient. In other words: every Task must have one DataContainer, and every DataContainer must have one User (possibly shared) and can have one Session. To create a DB schema I use JPA annotations:

        @Entity
        public class User extends Model { ... }

        @Entity
        public class Task extends Model {
            @OneToOne(optional = false, cascade = CascadeType.ALL)
            public DataContainer dataContainer;
        }

        @Entity
        public class DataContainer extends Model {
            @OneToOne(optional = true, cascade = CascadeType.ALL)
            public Session session;

            @ManyToOne(optional = false, cascade = CascadeType.ALL)
            public User user;
        }

        @Entity
        public class Session extends Model { ... }

    BTW: Model is a Play class that provides the primary id, of type long. When I create an object for each entity and connect them via the associations, it works fine. But when I try to delete a Session, I get a constraint-violation exception, because a DataContainer still refers to the Session I want to delete. I want the Session field of the DataContainer to be set to null, i.e. the foreign key (session_id) should be unset in the database. That would be okay, because it's optional. I think I have multiple problems. Am I using the right annotation, @OneToOne? I found some additional annotations and attributes on the internet: @JoinColumn, and the mappedBy attribute for the inverse side of a relationship. But I don't use them, because the association is not bidirectional. Or is a bidirectional association essential? Another try was to use @OnDelete(action = OnDeleteAction.CASCADE), and the constraint changed from NO ACTION on update and delete to:

        ADD CONSTRAINT fk4745c17e6a46a56 FOREIGN KEY (session_id)
        REFERENCES annotation_session (id) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE CASCADE;

    But in this case, when I delete a Session, the DataContainer and the User are deleted too. That's wrong for me. EDIT: I'm using PostgreSQL 9, the JDBC driver is included in Play, and my only DB config is db=postgres://app:app@localhost:5432/app
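    An editorial note follows; it is not part of the original question. The cascades above are the likely culprit: cascade = CascadeType.ALL on DataContainer.user and DataContainer.session propagates remove operations to the shared User and to the Session, which matches the unwanted deletes described. A minimal sketch of one common fix, using the entity names from the question (the deleteSession helper is hypothetical): drop those cascades and null out the reference before deleting the Session, so Hibernate issues an UPDATE clearing session_id before the DELETE.

        @Entity
        public class DataContainer extends Model {

            // No cascade: removing a DataContainer must not remove the shared
            // User, and removing a Session must not ripple through this side.
            @OneToOne(optional = true)
            public Session session;

            @ManyToOne(optional = false)
            public User user;
        }

        // Hypothetical helper: detach the session from its container first,
        // then delete it, so the session_id foreign key is already NULL.
        public static void deleteSession(DataContainer container, Session session) {
            if (container != null && session.equals(container.session)) {
                container.session = null;
                container.save();   // flushes the UPDATE that clears session_id
            }
            session.delete();       // safe now: no row references the session
        }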


  • Is it possible to invoke NSDictionary's valueForKeyPath: when a key contains periods?

    - by Jonukas
    I'm trying to get the value of the repeatInterval key in the com.apple.scheduler plist. I'd like to just use NSDictionary's valueForKeyPath: method, like so:

        CFPropertyListRef value;
        value = CFPreferencesCopyValue(CFSTR("AbsoluteSchedule"),
                                       CFSTR("com.apple.scheduler"),
                                       kCFPreferencesCurrentUser,
                                       kCFPreferencesCurrentHost);
        NSNumber *repeatInterval = [(NSDictionary *)value
            valueForKeyPath:@"com.apple.SoftwareUpdate.SUCheckSchedulerTag.Timer.repeatInterval"];

    But the problem with this is that the first key is really "com.apple.SoftwareUpdate", not just "com". I can get around this by getting that first value separately:

        NSDictionary *dict = [(NSDictionary *)value valueForKey:@"com.apple.SoftwareUpdate"];
        NSNumber *repeatInterval = [dict valueForKeyPath:@"SUCheckSchedulerTag.Timer.repeatInterval"];

    I just wanted to know if there is a way to escape periods in a key path so I can eliminate this extra step.


  • Is it correct or incorrect for a Java JAR to contain its own dependencies?

    - by 4herpsand7derpsago
    I guess this is a two-part question. I am trying to write my own Ant task (MyFirstTask) that can be used in other projects' build.xml buildfiles. To do this, I need to compile and package my Ant task inside its own JAR. Because the task is fairly complicated, it has about 20 dependencies (other JAR files), such as XStream for OX mapping, Guice for DI, etc. I am currently writing the package target in the build.xml of the MyFirstTask project (the buildfile that will package myfirsttask.jar, the reusable Ant task), and I am suddenly realizing that I don't fully understand the intention of a Java JAR. Is the idea that a JAR should not contain its dependencies, leaving it to the runtime configuration (the app container, the runtime environment, etc.) to supply them? I would assume that if this is the case, an executable JAR is the exception to the rule, yes? Or is the intention for JARs to include their dependencies? Either way, I don't want to force my users to copy-and-paste 25+ JARs into their Ant lib directory; that's just cruel. I like the way WAR files are set up, where the dependencies live in a well-known directory on the classpath. Ultimately, I'd like my JAR structure to look like:

        myfirsttask.jar
            com/          --> the root package of my compiled binaries
            config/       --> config files, XML, XSD, etc.
            classes/      --> all dependencies: guice-3.0.jar, xstream-1.4.3.jar, etc.
            META-INF/
                MANIFEST.MF

    I assume that in order to accomplish this (and get the runtime classpath to also look into the classes/ directory), I'll need to modify the MANIFEST.MF somehow (I know there's a manifest attribute called Class-Path, I believe?). I'm just having a tough time putting everything together, and I have a looming question about the very intent of JARs to begin with. Can someone please confirm whether Oracle intends for JARs to contain their dependencies or not? And, either way, what would I have to do in the manifest (or anywhere else) to make sure that, at runtime, the classpath can find the dependencies stored under the classes/ directory? Thanks in advance!
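    An editorial aside follows; it is not part of the original question. The stock JVM class loader cannot read JAR files nested inside another JAR, so a classes/ directory full of .jar files inside myfirsttask.jar would simply be ignored at runtime. The manifest Class-Path attribute does exist, but it resolves entries relative to the JAR's location on disk, not inside it. A sketch of what that attribute looks like, using the dependency names from the question as placeholders:

        Manifest-Version: 1.0
        Class-Path: lib/guice-3.0.jar lib/xstream-1.4.3.jar

    With that manifest, the dependencies would have to sit in a lib/ directory next to myfirsttask.jar. The usual alternatives for a self-contained Ant task are to unpack the dependencies' class files into the task JAR itself (a "fat" JAR) or to document a <taskdef> with a nested classpath pointing at the dependency JARs.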


  • CodeIgniter: pass a parameter to a select query from a previous query

    - by krike
    I'm creating a little management tool for the browser game Travian. I select all the villages from the database and I want to display some content that's unique to each village, but in order to query for those unique details I need to pass the id of the village. How should I do this? This is my code (controller):

        function members_area()
        {
            global $site_title;
            $this->load->model('membership_model');

            if ($this->membership_model->get_villages()) {
                $data['rows'] = $this->membership_model->get_villages();
                $id = 1; //this should be dynamic, but how?
                if ($this->membership_model->get_tasks($id)):
                    $data['tasks'] = $this->membership_model->get_tasks($id);
                endif;
            }

            $data['title'] = $site_title . " | Your account";
            $data['main_content'] = 'account';
            $this->load->view('template', $data);
        }

    These are the two functions I'm using in the model:

        function get_villages()
        {
            $q = $this->db->get('villages');
            if ($q->num_rows() > 0) {
                foreach ($q->result() as $row) {
                    $data[] = $row;
                }
                return $data;
            }
        }

        function get_tasks($id)
        {
            $this->db->select('name');
            $this->db->from('tasks');
            $this->db->where('villageid', $id);
            $q = $this->db->get();
            if ($q->num_rows() > 0) {
                foreach ($q->result() as $task) {
                    $data[] = $task;
                }
                return $data;
            }
        }

    And of course the view:

        <?php foreach ($rows as $r) : ?>
        <div class="village">
            <h3><?php echo $r->name; ?></h3>
            <ul>
                <?php foreach ($tasks as $task): ?>
                <li><?php echo $task->name; ?></li>
                <?php endforeach; ?>
            </ul>
            <?php echo anchor('site/add_village/' . $r->id, '+ add new task'); ?>
        </div>
        <?php endforeach; ?>

    PS: please do not remove the comment in the first block of code!


  • calling startActivity() inside an instance method - causing a NullPointerException

    - by Cole
    Heya - I'm trying to call startActivity() from a class that extends AsyncTask, in its onPostExecute(). Here's the flow. The class that extends AsyncTask:

        protected void onPostExecute() {
            Login login = new Login();
            login.pushCreateNewOrChooseExistingFormActivity();
        }

    The class that extends Activity:

        public void pushCreateNewOrChooseExistingFormActivity() {
            // start the CreateNewOrChooseExistingForm Activity
            Intent intent = new Intent(Intent.ACTION_VIEW);
            // *** ERROR HERE ***
            intent.setClassName(this, CreateNewOrChooseExistingForm.class.getName());
            startActivity(intent);
        }

    And I get this error, every time:

        03-17 16:04:29.579: ERROR/AndroidRuntime(1503): FATAL EXCEPTION: main
        03-17 16:04:29.579: ERROR/AndroidRuntime(1503): java.lang.NullPointerException
        03-17 16:04:29.579: ERROR/AndroidRuntime(1503):   at android.content.ContextWrapper.getPackageName(ContextWrapper.java:120)
        03-17 16:04:29.579: ERROR/AndroidRuntime(1503):   at android.content.ComponentName.<init>(ComponentName.java:62)
        03-17 16:04:29.579: ERROR/AndroidRuntime(1503):   at android.content.Intent.setClassName(Intent.java:4850)
        03-17 16:04:29.579: ERROR/AndroidRuntime(1503):   at com.att.AppName.Login.pushCreateNewOrChooseExistingFormActivity(Login.java:47)

    For iOS developers: I'm just trying to push a new view controller onto a navigation controller's stack, a la pushViewController:animated:. Which, apparently, is hard to do on this platform. Any ideas? Thanks in advance!

    UPDATE - FIXED: Per @Falmarri's advice, I managed to resolve this issue. First of all, I'm no longer calling Login login = new Login(); to create a new Login object. Bad, bad, bad - no cookie. Instead, when preparing to call .execute(), this tutorial suggests passing the application context to the class that executes the AsyncTask, for my purposes as shown below:

        CallWebServiceTask task = new CallWebServiceTask();

        // pass the login object to the task
        task.applicationContext = login;

        // execute the task in the background, passing the required params
        task.execute(login);

    Now, in onPostExecute(), I can get to my Login object's methods like so:

        ((Login) applicationContext).pushCreateNewOrChooseExistingFormActivity();
        ((Login) applicationContext).showLoginFailedAlert(result.get("httpResponseCode").toString());

    Hope this helps someone else out there, especially iOS developers transitioning over to Android...
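    An editorial sketch follows; it is not part of the original thread. The underlying cause is that new Login() builds an Activity by hand, so none of its Context plumbing (package name and so on) is initialized, and the first Context call inside setClassName() dereferences null. A slightly tidier variant of the accepted fix passes the live Activity through the task's constructor rather than a public field; the class and method names are taken from the question, and the web-service call itself is elided:

        import android.os.AsyncTask;

        public class CallWebServiceTask extends AsyncTask<Void, Void, Void> {

            private final Login activity; // the running Activity, supplied by the caller

            public CallWebServiceTask(Login activity) {
                this.activity = activity;
            }

            @Override
            protected Void doInBackground(Void... params) {
                // ... call the web service here ...
                return null;
            }

            @Override
            protected void onPostExecute(Void result) {
                // Safe: this Activity was created by the framework, so its
                // Context is fully wired up before we start a new Activity.
                activity.pushCreateNewOrChooseExistingFormActivity();
            }
        }

        // Usage, from inside the Login activity:
        //     new CallWebServiceTask(this).execute();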

