Search Results

Search found 9464 results on 379 pages for 'cron task'.


  • Displaying the Saved Pictures in the Windows Phone 8 emulator

    - by Laurent Bugnion
    One cool feature of the Windows Phone emulator is that it allows you to select pictures from your app (using the PhotoChooserTask) without having to try your app on a physical device. For example, this code (which I used in some of my recent presentations) will trigger the Photo Chooser UI to be displayed on the emulator too: private Action<IEnumerable<IImageFileInfo>> _callback; public void SelectFiles(Action<IEnumerable<IImageFileInfo>> callback) { var task = new PhotoChooserTask { ShowCamera = true }; task.Completed += TaskCompleted; _callback = callback; task.Show(); } void TaskCompleted(object sender, PhotoResult e) { if (e.Error == null && e.ChosenPhoto != null && _callback != null) { var fileName = e.OriginalFileName .Substring(e.OriginalFileName.LastIndexOf("\\") + 1); var info = new FileViewModel(e.ChosenPhoto, fileName); var infos = new List<IImageFileInfo> { info }; _callback(infos); } } In Windows Phone 8 however, when you execute this code, you will be shown an almost empty Photo Chooser UI: Notice that the “Saved Pictures” album is missing. At first I thought it was just not there at all, but you can actually restore it with the following steps: Press on the Windows button On the main screen, press on Photos Press on Albums Open the so called “8” photo album Press Back until you are back into your app and try again. This time you will see the saved pictures, and can perform your tests in more realistic conditions! Happy coding! Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information and more is available in TFS. Although all of this traceability is available within TFS you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results. Figure: The only thing missing is Build In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has a role to play in knitting this together. Figure: The relationships required to make this work can get a little confusing If Build is added to this to relate Work Items to Builds, then with knowledge of which builds are in which environments you can easily identify what is contained within a Release. Figure: How are things progressing Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other… Requirements – The root of all knowledge Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs) but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model: Figure: If the centre of the world was a requirement We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an “asof” clause at the end to query historical data in the operational store for TFS. select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>] Figure: Work Item Query Language (WIQL) format In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
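As a rough sketch of how such a historical query might also be run programmatically (the collection URL, work item type and date below are illustrative placeholders, and this assumes the TFS 2010 client API assemblies are referenced), the same asof clause can be passed straight to the Work Item Store:

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class AsOfQueryExample
    {
        static void Main()
        {
            // Connect to the project collection (URL is illustrative only).
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            // The same WIQL you would save in a *.wiql file, with an asof clause
            // that queries the operational store as it looked on a given date.
            const string wiql =
                "SELECT [System.Id], [System.Title] " +
                "FROM WorkItems " +
                "WHERE [System.WorkItemType] = 'Requirement' " +
                "ASOF '2010-01-31'";

            foreach (WorkItem item in store.Query(wiql))
            {
                Console.WriteLine("{0}: {1}", item.Id, item.Title);
            }
        }
    }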
Figure: Saving Queries locally can be useful All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS. Tasks – Where the real work gets done Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth. Figure: Task Work Item Type has a narrower purpose Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset. Figure: Tasks are associated with Changesets   Changesets – Who wrote this crap Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds   Figure: Changesets tell us what happened to the files in Version Control Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task which automatically associates it Because of this we can view the history of any file within the system and see how many changes have been made and which Changesets they belong to. Figure: Changes are tracked at the File level What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system. Figure: Annotate shows the lines The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use? Branches – And why we need them Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to Audit releases The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However it is still possible that someone changed the Label between this time and its creation. A better method can be to explicitly link the Build output to the Build.
Builds – Let's tie some more of this together Builds are the glue that helps us enable the next level of traceability by tying everything together. Figure: The dashed pieces are not out of the box but can be enabled When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build It will then use that information to identify the Work Items that are associated with all of those Changesets, associate the Changesets with the Build, and update the “Integrated In” field of those Work Items. Figure: Find all of the Work Items to associate with The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements? Test Cases – How we know we are done The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement It is crucial for this to work that someone from the Business signs off on the Test Case moving from the “Design” to “Ready” states. The second scenario is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test and the test results of a Unit Test designed to check for the existence, and then the non-recurrence, of a Bug. Figure: Associating a Test Case with an automated Test Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated and that it would make sense to track in this way. Bug – Let's not have regressions In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked. Figure: Bugs are the centre of their own world If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations This allows you to track Bugs / Defects in your system effectively and report on them. Change Request – I am not a feature In the CMMI Process template Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, similar to Bugs, if the Change Request is big enough that it needs to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only affect Requirements and do not rewrite them Conclusion Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don’t even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member. Business Analysts manage Requirements and Change Requests Programmers manage Tasks and check in against Change Requests and Bugs Testers manage Bugs and Test Cases Build Masters manage Builds Although there is some crossover, there are still roles or “hats” that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • PostSharp deployment to build machine- use Setup installation, not NuGet package.

    - by Michael Freidgeim
    PostSharp has several well-documented methods of installation. I chose to install the NuGet packages because, according to Deploying PostSharp into a Source Repository, NuGet is the easiest way to add PostSharp to a project without installing the product on every machine. However it didn't work well for me. I added the PostSharp NuGet package to one project in the solution. When I wanted to use PostSharp in another project, the Visual Studio tab showed that PostSharp was not enabled for that project. I added the NuGet package to the new project, which installed a new version of the package in a new Packages subfolder. When I wanted to reference PostSharp from a third project, I ended up with yet another version of PostSharp installed. Additionally, multiple versions of Diagnostics were created. It definitely causes confusion and errors. We experienced more problems on the build server. According to Using PostSharp on a Build Server, "If you chose to deploy PostSharp in the source repository, it does not need to be installed specifically on the build server." It didn't work on our build server. I kept getting the errors "The "AddIns" parameter is not supported by the "PostSharp21" task." and "The "DisableSystemBindingPolicies" parameter is not supported by the "PostSharp21" task." From my experience the only way to have the latest version of PostSharp working on the build server is to install it using Setup, as described in Deploying PostSharp with the Setup Program. Gael acknowledged the issues with possible version conflicts; see http://support.sharpcrafters.com/discussions/problems/388-the-postsharp21-task-failed-unexpectedly

    Read the article

  • Top Fusion Apps User Experience Guidelines & Patterns That Every Apps Developer Should Know About

    - by ultan o'broin
    We've announced the availability of the Oracle Fusion Applications user experience design patterns. Developers can get going on these using the Design Filter Tool (or DeFT) to select the best pattern for their context of use. As you drill into the patterns you will discover more guidelines from the Applications User Experience team and some from the Rich Client User Interface team too that are also leveraged in Fusion Apps. All are based on the Oracle Application Development Framework components. To accelerate your Fusion apps development and tailoring, here's some inside insight into the really important patterns and guidelines that every apps developer needs to know about. They start at a broad Fusion Apps information architecture level and then become more granular at the page and task level. Information Architecture: These guidelines explain how the UI of an Oracle Fusion application is constructed. This enables you to understand where the changes that you want to make fit into the oveall application's information architecture. Begin with the UI Shell and Navigation guidelines, and then move onto page-level design using the Work Areas and Dashboards guidelines. UI Shell Guideline Navigation Guideline Introduction to Work Areas Guideline Dashboards Guideline Page Content: These patterns and guidelines cover the most common interactions used to complete tasks productively, beginning with the core interactions common across all pages, and then moving onto task-specific ones. Core Across All Pages Icons Guideline Page Actions Guideline Save Model Guideline Messages Pattern Set Embedded Help Pattern Set Task Dependent Add Existing Object Pattern Set Browse Pattern Set Create Pattern Set Detail on Demand Pattern Set Editing Objects Pattern Set Guided Processes Pattern Set Hierarchies Pattern Set Information Entry Forms Pattern Set Record Navigation Pattern Set Transactional Search and Results Pattern Group Now, armed with all this great insider information, get developing some great-looking, highly usable apps! Let me know in the comments how things go!

    Read the article

  • Clients with multiple proxy and multithreading callbacks

    - by enzom83
    I created a sessionful web service using WCF, and in particular I used the NetTcpBinding binding. In addition to methods to initiate and terminate a session, other methods allow the client to send one or more tasks to be performed (the results are returned via callback, so the service is duplex), but they also allow you to know the status of the service. Assuming you activate the same service on multiple endpoints, and assuming that the client knows these endpoints (for example, it could maintain a List of endpoints), the client should connect with one or more replicas of the same service. The client periodically updates the status of the service, so when it needs to perform a new task (the task is submitted by the user via the UI), it selects the service that is currently least loaded and sends the task to it. Periodically, the client also initiates a maintenance procedure in order to disconnect from one or more overloaded services and to connect with new services. I created a client proxy using the svcutil tool. I would like each proxy to be usable simultaneously by different threads; for example, in addition to the thread that submits the tasks using a proxy, there are also the following two threads which act periodically: a thread that periodically sends a request to the service in order to obtain the updated state, and a thread that periodically selects a proxy to close and instantiates a new proxy to replace the closed one. To achieve these objectives, is it sufficient to create an array of proxies and manage their opening and closing in separate threads? I think I read that the proxy method calls are thread-safe, so I would not need to perform a lock before requesting updates to the service. However, when the maintenance procedure (which is activated on its own thread) decides to close a proxy, should I perform a lock? Finally, each proxy is also associated with an object that implements the callback interface for the service: are the callbacks (invoked on the client) executed on different threads on the client? I would like to wrap the management of the proxies in one or more classes so that they can then be easily managed within a WPF application.
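To make the set-up concrete, here is a minimal sketch of such a proxy pool; TaskServiceClient and TaskServiceCallback are placeholder names standing in for the svcutil-generated duplex proxy and the callback implementation, and the locking shown is only one possible strategy, not a confirmed answer:

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;

    class ProxyPool
    {
        private readonly object _sync = new object();
        private readonly List<TaskServiceClient> _proxies = new List<TaskServiceClient>();

        public void Add(string endpointAddress)
        {
            // One callback instance per proxy; "netTcpDuplexEndpoint" is an assumed
            // endpoint configuration name from app.config.
            var proxy = new TaskServiceClient(new InstanceContext(new TaskServiceCallback()),
                                              "netTcpDuplexEndpoint",
                                              endpointAddress);
            proxy.Open();
            lock (_sync) { _proxies.Add(proxy); }
        }

        // Called from the maintenance thread: remove the proxy from the pool under the
        // lock, then close it outside the lock so the worker threads never pick up a
        // proxy that is about to be closed.
        public void Replace(int index, string newEndpointAddress)
        {
            TaskServiceClient old;
            lock (_sync)
            {
                old = _proxies[index];
                _proxies.RemoveAt(index);
            }
            try { old.Close(); }
            catch (CommunicationException) { old.Abort(); }
            Add(newEndpointAddress);
        }
    }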

    Read the article

  • Mobile App Notifications in the Enterprise Space: UX Considerations

    - by ultan o'broin
    Here is a really super website of UX patterns for Android: Android Patterns. I was particularly interested in the event-driven notification patterns (aka status bar notifications to developers). Android - unlike iOS (i.e., the iPhone) - offers a superior centralized notifications system for users.   (Figure copyright Android Patterns)   Research in the enterprise applications space shows how users on-the-go, prefer this approach, as: Users can manage their notification alerts centrally, across all media, apps and for device activity, and decide the order in which to deal with them, and when. Notifications, unlike messages in a dialog or information message in the UI, do not block a task flow (and we need to keep task completion to under three minutes). See the Anti-Patterns slideshare presentation on this blocking point too. These notifications must never interrupt a task flow by launching an activity from the background. Instead, the user can launch an activity from the notification. What users do need is the ability to filter this centralized approach, and to personalize the experience of which notifications are added, what the reminder is, ability to turn off, and so on. A related point concerning notifications is when used to provide users with a record of actions then you can lighten up on lengthy confirmation messages that pop up (toasts in the Android world) used when transactions or actions are sent for processing or into a workflow. Pretty much all the confirmation needs to say is the action is successful along with key data such as dollar amount, customer name, or whatever. I am a user of Android (Nexus S), BlackBerry (Curve), and iOS devices (iPhone 3GS and 4). In my opinion, the best notifications user experience for the enterprise user is offered by Android. Blackberry is good, but not as polished and way clunkier than Android’s. What you get on the iPhone, out of the box, is useless in the enterprise. Technorati Tags: Android,iPhone,Blackerry,messages,usablility,user assistance,userexperience,Oracle,patterns,notifications,alerts

    Read the article

  • Many small scripts, one repository or multiple?

    - by The Jug
    A co-worker and myself have run into an issue that we have multiple opinions on. Currently we have a git repository that we are keeping all of our cronjobs in. There are about 20 crons and they are not really related except for the fact that they are all small python scripts and essential for some activity. We are using a fabric.py file to deploy and a requirements.txt file to manage requirements for all of the scripts. Our issue is basically, do we keep all of these scripts in one git repository or should we be separating them out into their own repositories? By keeping them in one repository it is easier to deploy them onto one server. We can use just one cron file for all the scripts. However this feels wrong, as the 20 cronjobs are not logically related. Additionally, when using one requirements.txt file for all the scripts, it's hard to figure out what the dependencies are for a particular script and they all have to use the same versions of packages. We could separate all of the scripts out into their own repositories but this creates 20 different repositories that need to be remembered and dealt with. Most of these scripts are not very large and that solution seems to be overkill. A related question is, do we use one big crontab file for all cronjobs, or a separate file for each? If each has their own, how does one crontab's installation avoid overwriting the other 19? This also seems like a pain as there would then by 20 different cron files to keep track of. In short, our main question and issue is do we keep them all closely bundled as one repository or do we separate them out into their own repository with their own requirements.txt and fabfile.py? We feel like we're also probably looking over some really simple solution. Is there an easier way to deal with this issue?

    Read the article

  • How can you achieve and maintain flow while pair programming?

    - by bizso09
    Flow is a concept introduced by Mihaly Csikszentmihalyi; in short, it means to get into the "zone". You feel immersed in your task, focused; the task can be difficult but challenging at the same time. When people achieve flow their productivity shoots up. Programming requires a great deal of mental focus because we often need to juggle several things in our minds at once. Many like to work in a quiet environment where they can direct their full attention to the task. If they are interrupted, it may take several minutes or even hours to get back into flow. I understand there's a practice in agile development and extreme programming called pair programming. It means you put the whole software development team in one room so that communication is seamless. You do write code with your pair because this way you get instant code reviews and fewer bugs get through. I've always had problems achieving flow while doing pair programming because of constant interruptions. I'm thinking deep about an issue then all of sudden someone asks me a question from another pair. My train of thought is lost. How can you achieve and maintain flow while pair programming?

    Read the article

  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, it's primary contribution being the componentization of your applications and implicitly the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings". What's an adaptive binding? well consider a simple use case.  I have several screens within my application which display tabular data which are all essentially identical, the only difference is that they happen to be based on different data collections (View Objects, Bean collections, whatever) , and have a different set of columns. Apart from that, however, they happen to be identical; same toolbar, same key functions and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on you say, great idea, however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file and: I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the taskflow   If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle  Ah, joy! I reply, no need to panic, you can just use adaptive bindings. Defining an Adaptive Binding  It's easiest to explain with a simple before and after use case.  Here's a basic pageDef definition for our familiar Departments table.  <executables> <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"             id="DepartmentsView1Iterator"/> </executables> <bindings> <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">   <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">     <AttrNames>       <Item Value="DepartmentId"/>         <Item Value="DepartmentName"/>         <Item Value="ManagerId"/>         <Item Value="LocationId"/>       </AttrNames>     </nodeDefinition> </tree> </bindings>  Here's the adaptive version: <executables> <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"             id="TableSourceIterator"/> </executables> <bindings> <tree IterBinding="TableSourceIterator" id="GenericView"> <nodeDefinition Name="GenericViewNode"/> </tree> </bindings>  You'll notice three changes here.   Most importantly, you'll see that the hard-coded View Object name  that formally populated the iterator Binds attribute is gone and has been replaced by an expression (${pageFlowScope.voName}). This of course, is key, you can see that we can pass a parameter to the task flow, telling it exactly what VO to instantiate to populate this table! I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable The tree binding itself has simplified and the node definition is now empty.  Now what this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat! 
(kudos to Eugene Fedorenko at this point who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld  this year) Using the adaptive binding in the UI Now we have a parametrized  binding we have to make changes in the UI as well, first of all to reflect the new ID that we've assigned to the binding (of course) but also to change the column list from being a fixed known list to being a generic metadata driven set: <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"         fetchSize="#{bindings.GenericView.rangeSize}"           emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"           var="row" rowBandingInterval="0"           selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"           selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"           rowSelection="single" id="t1"> <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">   <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"            sortProperty="#{def.name}" id="c1">     <af:outputText value="#{row[def.name]}" id="ot1"/>     </af:column>   </af:forEach> </af:table> Of course you are not constrained to a simple read only table here.  It's a normal tree binding and iterator that you are using behind the scenes so you can do all the usual things, but you can see the value of using ADFBC as the back end model as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...)  One Final Twist  To finish on a high note I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator, see if you can notice the subtle change? <iterator Binds="{pageFlowScope.voName}"  DataControl="${pageFlowScope.dataControlName}" RangeSize="25"           id="TableSourceIterator"/>  Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. So if you have some really common patterns within your app you can wrap them up and reuse then across multiple developments without having to dictate data control names, or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs. If you keep any code in the task flow generic in that way you can deal with data from multiple types of data controls, not just one flavour. Enjoy!

    Read the article

  • Eliminating zero-length files

    - by RhZ
    I have been having multiple crashes recently, 4-5 last night within a few hours. I posted about it before, and got an answer but am not sure how to proceed. The messages in my logs right before the crash are multiple complaints that valid eCryptfs headers were not found. But the cron entry might not be related, I don't think I saw that in previous crashes: xxx-desktop kernel: [ 1112.274474] Valid eCryptfs headers not found in file header region or xattr region, inode 32376924 xxx-desktop CRON[4212]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) So I was sent to an answer providing this script:
for i in $(find $(mount | grep " on $HOME type ecryptfs" | awk '{print $1}') -size 0c); do
    if ! fuser -v $i; then
        rm -f $i
    fi
done
I did find some zero-byte files, not in exactly the right place (a folder called .private as I remember), but I need to fix this, it's too bad right now. So I need to delete any of them that are not in use. I am a little too clueless, can someone walk me through executing this script? I don't know how.

    Read the article

  • Upgraded from 11.04 to 11.10, there was an error, now the system won't initialize

    - by Eric
    This morning the system gave me a message that my version (11.04) was no longer supported, and I took the 'upgrade' option (to 11.10). While installing the various components I encountered a message to the effect that there was an error and the system may have become unusable. Among the messages: E: Sub-process /usr/bin/dpkg received a segmentation fault...returned an error code (1). I was given an option to do several things, one of which seemed to mean that it would attempt to roll back to the previous version (the default), which I took. After the process ran it said the upgrade process had finished, but there were errors. I attempted to initialize a console so I could enter ubuntu-bug update-manager /var/log/dist-upgrade, per the instructions I received when I received the error message, but the console failed during initialization. I restarted the machine, and the screen has stopped with the following contents:
* Starting bluetooth
* Stopping save kernel messages
* Starting CUPS printing spooler/server
* PulseAudio configured per-user sessions
saned disabled: edit /etc/default/saned
* Starting up Cisco VPN daemon
* Starting anac(h)ronistic cron
* Stopping anac(h)ronistic cron
Each of these steps is followed by [ OK ]. What are my options? Any help appreciated!

    Read the article

  • Cannot usermod -L in LightDM scripts

    - by user95723
    I'm running Xubuntu 12.04 and use LightDM. I want to restrict access to the machine as a kind of parental control. This is how it should work: I hook in a script that executes just before the greeter comes up. Within that script some awk processing will read an entry in a config file and will trigger a usermod -L or usermod -U depending on whether the user is allowed to log in. While the user is logged in, a cron job will count down the entry in the config and force an xfce4-session-logout if time is up. A cron job running on a server will upload the "credits" on a daily basis. What do you think of this idea? That's the theory, now for the problems. It appears that, for some unknown reason, the usermod command is not executed, neither as part of a display-setup-script nor within the greeter-setup-script. I wrote a small sandbox script:
usermod -L johndoe 2>error.txt
touch /etc/blabla 2>error.txt
The script is executing, because the blabla file exists. That means that the script must have been executed with root privileges. error.txt is empty but the usermod command has just no effect. Is this a bug or a feature? What's wrong? Best regards and thank you Oli

    Read the article

  • How to learn to deliver quality software designs when working on a tight deadline?

    - by chester89
    I read many books about how to design great software, but I kind of struggle to come up with good design decisions when it comes to business apps, especially when the timeframe is tough. In the company I currently work for, the following situation happens all the time: my team lead tells me that there's a task to do, I call some guy or girl from business who tells me exactly what it is they want, and then I start coding. The task always fits in some existing application (we do only web apps or web services); usually its purpose is to pull data from one datasource and put it into another one, with some business logic attached in the process. I start coding and then, after spending some time on a problem, my code doesn't work as expected, either because of a technical mistake or my lack of knowledge of the domain. The business is ringing me 2-3 times a day to hurry me up. I ask my team lead to help, he comes up, sees my code and goes like 'What's this?'. Then he throws away about half of my code, including all the design decisions I made, writes 2-3 methods that do the job (each of them usually 200-300 lines long or more, by the way), and the task is complete; the code works as it should have. The guy is smarter than me, obviously, and I'm aware of that. My goal is to be a better software developer, and that means writing better code, not finishing the job quicker with some crappy code. And the thing is, when I have enough time to tackle a problem, I can come up with a design that is good (in my opinion, of course), but I fall short of that when I'm on a tight deadline. What should I do? I am fully aware that it's a rather vague explanation, but please bear with me.

    Read the article

  • Architecture for dashboard showing aggregated stats [on hold]

    - by soulnafein
    I'd like to know what common architectural patterns exist for the following problem. Web application A has information on sales, users, responsiveness score, etc. Some of this information is computationally intensive and/or has complex business logic (e.g. the responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and reports on data from web application A. For writing I'm planning to use a restful api, e.g. create a new entity, update an entity, etc. In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in Redis. Some data should update more often, e.g. every 10 minutes. I can think of 3 ways of doing this: 1) a scheduled task in app B that connects to an api of app A that provides some aggregated data; app B then stores it in Redis and uses that to visualise pages (cons: it performs complex calculations within a web request and requires lots of work, e.g. an api server and client, storing, etc.; pros: the business logic still lives in app A); 2) a scheduled task in app A that aggregates data in a non-web process and stores it directly in Redis to be accessed by app B; 3) a scheduled task in app A that aggregates data in a non-web process and uses an api in app B to save it. I'd like to know if there is a well-known architectural solution to this type of problem and, if not, what other pros/cons there are for the solutions I've suggested?

    Read the article

  • Cannot integrate Gallio MBUnit with Team City

    - by Bernard Larouche
    I have been trying to get my MBUnit test suite to work on Team City for many days now without any success. My solution builds no problem. The problem is with my tests. After googling for Gallio integration with Team City I tried many ways to make this thing work and I think I am close but need help. I have included the gallio bin directory in my repository and also on my TC Server. Here is my build runner setup in Team City: Build runner : MSBuild Build file path : Myproject.msbuild Targets : RebuildSolution RunTests Here is the Myproject.msbuild file I created and included in the Source control trunk directory : <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <!-- This is needed by MSBuild to locate the Gallio task --> <UsingTask AssemblyFile="C:\Gallio\bin\Gallio.MSBuildTasks.dll" TaskName="Gallio" /> <!-- Specify the tests assemblies --> <ItemGroup> <TestAssemblies Include="C:\_CBL\CBL\CoderForTraders\Source\trunk\UnitTest\DomainModel.Tests\bin\Debug\CBL.CoderForTraders.DomainModel.Tests.dll" /> </ItemGroup> <Target Name="RunTests"> <Gallio IgnoreFailures="false" Assemblies="@(TestAssemblies)" RunnerExtensions="TeamCityExtension,Gallio.TeamCityIntegration"> <!-- This tells MSBuild to store the output value of the task's ExitCode property into the project's ExitCode property --> <Output TaskParameter="ExitCode" PropertyName="ExitCode"/> </Gallio> <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" /> </Target> <Target Name="RebuildSolution"> <Message Text="Starting to Build"/> <MSBuild Projects="CoderForTraders.sln" Properties="Configuration=Debug" Targets="Rebuild" /> </Target> </Project> Here are the errors displayed by Team City: error MSB4064: The "Assemblies" parameter is not supported by the "Gallio" task. Verify the parameter exists on the task, and it is a settable public instance property error MSB4063: The "Gallio" task could not be initialized with its input parameters. Thanks for your help

    Read the article

  • How do I copy all referenced jars of an eclipse project using ant4eclipse?

    - by interstellar
    I've tried like this (link): <taskdef resource="net/sf/antcontrib/antlib.xml" /> <taskdef resource="net/sf/ant4eclipse/antlib.xml" /> <target name="copy_jars"> <getEclipseClasspath workspace="${basedir}/.." projectname="MyProject" property="classpath" relative="false" runtime="true" pathseparator="#" /> <!-- iterate over all classpath entries --> <foreach list="${classpath}" delimiter="#" target="copy_jar_file" param="classpath.entry" /> </target> <target name="copy_jar_file"> <!-- check if current is a .jar-file ... --> <if> <isfileselected file="${classpath.entry}"> <filename name="**/*.jar" /> </isfileselected> <then> <!-- copy the jar file to a destination directory --> <copy file="${classpath.entry}" tofile="${dest.dir}"/> </then> </if> </target> But I get the exception: [getEclipseClasspath] net.sf.ant4eclipse.model.FileParserException: Could not parse plugin project 'E:\...\MyProject' since it contains neither a Bundle-Manifest nor a plugin.xml! [getEclipseClasspath] at net.sf.ant4eclipse.model.pdesupport.plugin.PluginDescriptorParser.parseEclipseProject(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.model.pdesupport.plugin.PluginProjectRoleIdentifier.applyRole(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.model.roles.RoleIdentifierRegistry.applyRoles(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.tools.ProjectFactory.readProjectFromWorkspace(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.tools.resolver.AbstractClasspathResolver.resolveEclipseClasspathEntry(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.tools.resolver.AbstractClasspathResolver.resolveProjectClasspath(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.tools.resolver.ProjectClasspathResolver.resolveProjectClasspath(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.ant.task.project.GetEclipseClassPathTask.resolvePath(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.ant.task.project.AbstractGetProjectPathTask.execute(Unknown Source) [getEclipseClasspath] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) [getEclipseClasspath] at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) [getEclipseClasspath] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [getEclipseClasspath] at java.lang.reflect.Method.invoke(Method.java:597) [getEclipseClasspath] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105) [getEclipseClasspath] at org.apache.tools.ant.Task.perform(Task.java:348) [getEclipseClasspath] at org.apache.tools.ant.Target.execute(Target.java:357) [getEclipseClasspath] at org.apache.tools.ant.Target.performTasks(Target.java:385) [getEclipseClasspath] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1329) [getEclipseClasspath] at org.apache.tools.ant.Project.executeTarget(Project.java:1298) [getEclipseClasspath] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) [getEclipseClasspath] at org.eclipse.ant.internal.ui.antsupport.EclipseDefaultExecutor.executeTargets(EclipseDefaultExecutor.java:32) [getEclipseClasspath] at org.apache.tools.ant.Project.executeTargets(Project.java:1181) [getEclipseClasspath] at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:423) [getEclipseClasspath] at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:137) BUILD FAILED E:\...\build.xml:132: Exception whilst resolving the classpath 
of project MyProject! Reason: Could not parse plugin project 'E:\...\MyProject' since it contains neither a Bundle-Manifest nor a plugin.xml! I want to copy just the jars, not the referenced projects. Is there a way to parametrize the getEclipseClasspath task so it only gets the jars, not the projects?

    Read the article

  • Converting to async,await using async targeting package

    - by e4rthdog
    I have this: private void BtnCheckClick(object sender, EventArgs e) { var a = txtLot.Text; var b = cmbMcu.SelectedItem.ToString(); var c = cmbLocn.SelectedItem.ToString(); btnCheck.BackColor = Color.Red; var task = Task.Factory.StartNew(() => Dal.GetLotAvailabilityF41021(a, b, c)); task.ContinueWith(t => { btnCheck.BackColor = Color.Transparent; lblDescriptionValue.Text = t.Result.Description; lblItemCodeValue.Text = t.Result.Code; lblQuantityValue.Text = t.Result.AvailableQuantity.ToString(); },TaskScheduler .FromCurrentSynchronizationContext() ); LotFocus(true); } and i followed J. Skeet's advice to move into async,await in my .NET 4.0 app. I converted into this: private async void BtnCheckClick(object sender, EventArgs e) { var a = txtLot.Text; var b = cmbMcu.SelectedItem.ToString(); var c = cmbLocn.SelectedItem.ToString(); btnCheck.BackColor = Color.Red; JDEItemLotAvailability itm = await Task.Factory.StartNew(() => Dal.GetLotAvailabilityF41021(a, b, c)); btnCheck.BackColor = Color.Transparent; lblDescriptionValue.Text = itm.Description; lblItemCodeValue.Text = itm.Code; lblQuantityValue.Text = itm.AvailableQuantity.ToString(); LotFocus(true); } It works fine. What confuses me is that i could do it without using Task but just the method of my Dal. But that means that i must have modified my Dal method, which is something i dont want? I would appreciate if someone would explain to me in "plain" words if what i did is optimal or not and why. Thanks P.s. My dal method public bool CheckLotExistF41021(string _lot, string _mcu, string _locn) { using (OleDbConnection con = new OleDbConnection(this.conString)) { OleDbCommand cmd = new OleDbCommand(); cmd.CommandText = "select lilotn from proddta.f41021 " + "where lilotn = ? and trim(limcu) = ? and lilocn= ?"; cmd.Parameters.AddWithValue("@lotn", _lot); cmd.Parameters.AddWithValue("@mcu", _mcu); cmd.Parameters.AddWithValue("@locn", _locn); cmd.Connection = con; con.Open(); OleDbDataReader rdr = cmd.ExecuteReader(); bool _retval = rdr.HasRows; rdr.Close(); con.Close(); return _retval; } }
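One possible direction, sketched here purely as an illustration, is to leave the synchronous DAL code untouched and expose an awaitable wrapper next to it; the Async-suffixed method below is hypothetical and not part of the existing Dal class (it needs using System.Threading.Tasks in scope):

    // Hypothetical addition to the Dal class: the synchronous method stays as it is,
    // and callers await this Task-returning wrapper instead of scheduling the work themselves.
    public Task<JDEItemLotAvailability> GetLotAvailabilityF41021Async(string lot, string mcu, string locn)
    {
        // On .NET 4.0 with the Async Targeting Pack there is no Task.Run,
        // so Task.Factory.StartNew fills the same role here.
        return Task.Factory.StartNew(() => GetLotAvailabilityF41021(lot, mcu, locn));
    }

The button handler then reduces to var itm = await Dal.GetLotAvailabilityF41021Async(a, b, c); and the UI updates that follow it stay exactly as they are.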

    Read the article

  • drag and drop working funny when using variable draggables and droppables

    - by Lina
    Hi, i have some containers that contain some divs like: <div id="container1"> <div id="task1" onMouseOver="DragDrop("+1+");">&nbsp;</div> <div id="task2" onMouseOver="DragDrop("+2+");">&nbsp;</div> <div id="task3" onMouseOver="DragDrop("+3+");">&nbsp;</div> <div id="task4" onMouseOver="DragDrop("+4+");">&nbsp;</div> </div> <div id="container2"> <div id="task5" onMouseOver="DragDrop("+5+");">&nbsp;</div> <div id="task6" onMouseOver="DragDrop("+6+");">&nbsp;</div> </div> <div id="container3"> <div id="task7" onMouseOver="DragDrop("+7+");">&nbsp;</div> <div id="task8" onMouseOver="DragDrop("+8+");">&nbsp;</div> <div id="task9" onMouseOver="DragDrop("+9+");">&nbsp;</div> <div id="task10" onMouseOver="DragDrop("+10+");">&nbsp;</div> </div> i'm trying to drag tasks and drop them in one of the container divs, then reposition the dropped task so that it doesn't affect the other divs nor fall outside one of them and to do that i'm using the event onMouseOver to call the following function: function DragDrop(id) { $("#task" + id).draggable({ revert: 'invalid' }); for (var i = 0; i < nameList.length; i++) { $("#" + nameList[i]).droppable({ drop: function (ev, ui) { var pos = $("#task" + id).position(); if (pos.left <= 0) { $("#task" + id).css("left", "5px"); } else { var day = parseInt(parseInt(pos.left) / 42); var leftPos = (day * 42) + 5; $("#task" + id).css("left", "" + leftPos + "px"); } } }); } } where: nameList = [container1, container2, container3]; the drag is working fine, but the drop is not really, it's just a mess! any help please?? when i hardcode the id and the container, then it works beautifully, but as soon as i use id in drop then it begins to work funny! any suggestions??? thanks a million in advance Lina

    Read the article

  • Simple reminder for Android

    - by anta40
    I'm trying to make a simple timer. package com.anta40.reminder; import java.util.Timer; import java.util.TimerTask; import android.app.Activity; import android.os.Bundle; import android.widget.RadioGroup; import android.widget.TabHost; import android.widget.TextView; import android.widget.RadioGroup.OnCheckedChangeListener; import android.widget.TabHost.TabSpec; public class Reminder extends Activity{ public final int TIMER_DELAY = 1000; public final int TIMER_ONE_MINUTE = 60000; public final int TIMER_ONE_SECOND = 1000; Timer timer; TimerTask task; TextView tv; @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.main); timer = new Timer(); task = new TimerTask() { @Override public void run() { tv = (TextView) findViewById(R.id.textview1); tv.setText("BOOM!!!!"); tv.setVisibility(TextView.VISIBLE); try { this.wait(TIMER_DELAY); } catch (InterruptedException e){ } tv.setVisibility(TextView.INVISIBLE); } }; TabHost tabs=(TabHost)findViewById(R.id.tabhost); tabs.setup(); TabSpec spec = tabs.newTabSpec("tag1"); spec.setContent(R.id.tab1); spec.setIndicator("Clock"); tabs.addTab(spec); spec=tabs.newTabSpec("tag2"); spec.setContent(R.id.tab2); spec.setIndicator("Settings"); tabs.addTab(spec); tabs.setCurrentTab(0); RadioGroup rgroup = (RadioGroup) findViewById(R.id.rgroup); rgroup.setOnCheckedChangeListener(new OnCheckedChangeListener() { @Override public void onCheckedChanged(RadioGroup group, int checkedId) { if (checkedId == R.id.om){ timer.schedule(task, TIMER_DELAY, 3*TIMER_ONE_SECOND); } else if (checkedId == R.id.twm){ timer.schedule(task, TIMER_DELAY, 6*TIMER_ONE_SECOND); } else if (checkedId == R.id.thm){ timer.schedule(task, TIMER_DELAY, 9*TIMER_ONE_SECOND); } } }); } } Each time I click a radio button, the timer should start, right? But why it doesn't start?

    Read the article

  • transforming binary data using ssis and sql server 2008

    - by Rick
    Hello All - I have a task to import/transform and extract zipped binary files that contain both text data as well as embedded binary data. Within the data is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a C# single-threaded app that essentially grabs all the files from the directory (there are currently 13K files of varying sizes) and extracts the data on a single thread, inserting to the database line by line. As you could imagine this is a very slow process and unacceptable. There are several different parsing routines used depending on the header record in the file. There are potentially up to a million rows per file when all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on their content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list. How do I iterate through a packet of data using SSIS? In the app the file is decompressed and then parsed using stream data types and byte arrays, and is routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap up the app code into a script task (or tasks) and let it do the custom processing? The data is separated by year and the SQL Server tables are partitioned by year as well. I need to be able to "catch" bad file data as well and process it by hand most likely. Should I simply load the zipped file to SQL as a blob and parse the file with T-SQL? Would that be multi-threaded if done that way? Not sure how to do the parsing in T-SQL that is involved here. Which do you think would be faster? Potentially the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up? Processing these new files from the directories will become a daily task. I can manage the data once I get it to SQL Server. Getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick
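As one hedged illustration of the multi-threading option (whether the code ends up staying in the existing C# app or moves into an SSIS script task), the single-threaded loop over the files could be fanned out with Parallel.ForEach; the directory path, file pattern, degree of parallelism and ProcessFile placeholder are assumptions rather than details taken from the description above:

    using System;
    using System.IO;
    using System.Threading.Tasks;

    class ParallelExtractor
    {
        static void Main()
        {
            // Path and file pattern are illustrative only.
            var files = Directory.EnumerateFiles(@"D:\incoming", "*.zip");

            // Several files are decompressed and parsed at once; each file is still
            // handled sequentially, so the per-file header-based parser logic is unchanged.
            Parallel.ForEach(files,
                new ParallelOptions { MaxDegreeOfParallelism = 8 },
                ProcessFile);
        }

        static void ProcessFile(string path)
        {
            // Placeholder for the existing decompress/parse routine; batching the
            // resulting rows (for example with SqlBulkCopy) rather than inserting
            // line by line is usually where most of the time is recovered.
            Console.WriteLine("processed {0}", path);
        }
    }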

    Read the article

  • JPA entity design / cannot delete entity

    - by timaschew
    I thought what I want is simple, but I cannot find any solution for my problem. I'm using playframework 1.2.3, which uses Hibernate as its JPA implementation, so I think playframework has nothing to do with the problem. I have some classes (I omit the non-relevant fields): public class User { ... } public class Task { public DataContainer dataContainer; } public class DataContainer { public Session session; public User user; } public class Session { ... } So I have an association from Task to DataContainer and from DataContainer to Session, and the DataContainer belongs to a User. The DataContainers can always have the same User, but the Session has to be different for each instance. And the DataContainer of a Task also has to be different for each instance. A DataContainer can have a Session or not (it's optional). I use only unidirectional associations; that should be sufficient. In other words: every Task must have one DataContainer. Every DataContainer must have one/the same User and can have one Session. To create a DB schema I use JPA annotations: @Entity public class User extends Model { ... } @Entity public class Task extends Model { @OneToOne(optional = false, cascade = CascadeType.ALL) public DataContainer dataContainer; } @Entity public class DataContainer extends Model { @OneToOne(optional = true, cascade = CascadeType.ALL) public Session session; @ManyToOne(optional = false, cascade = CascadeType.ALL) public User user; } @Entity public class Session extends Model { ... } BTW: Model is a play class and provides the primary id as a long type. When I create an object for each entity and 'connect' them (I mean the associations), it works fine. But when I try to delete a Session, I get a constraint violation exception, because a DataContainer still refers to the Session I want to delete. I want the Session field of the DataContainer to be set to null, i.e. the foreign key (session_id) should be unset in the database. This would be okay, because it's optional. I don't know, I think I have multiple problems. Am I using the right annotation @OneToOne? I found some additional annotations and attributes on the internet: @JoinColumn and a mappedBy attribute for the inverse relationship. But I don't have them, because it's not bidirectional. Or is a bidirectional assoc. essential? Another try was to use @OnDelete(action = OnDeleteAction.CASCADE); the constraint changed from NO ACTION on update or delete to: ADD CONSTRAINT fk4745c17e6a46a56 FOREIGN KEY (session_id) REFERENCES annotation_session (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE; But in this case, when I delete a session, the DataContainer and User are deleted. That's wrong for me. EDIT: I'm using postgresql 9, the jdbc stuff is included in play, and my only db config is db=postgres://app:app@localhost:5432/app

    Read the article

  • Is it correct or incorrect for a Java JAR to contain its own dependencies?

    - by 4herpsand7derpsago
    I guess this is a two-part question. I am trying to write my own Ant task (MyFirstTask) that can be used in other project's build.xml buildfiles. To do this, I need to compile and package my Ant task inside its own JAR. Because this Ant task that I have written is fairly complicated, it has about 20 dependencies (other JAR files), such as using XStream for OX-mapping, Guice for DI, etc. I am currently writing the package task in the build.xml file inside the MyFirstTask project (the buildfile that will package myfirsttask.jar, which is the reusable Ant task). I am suddenly realizing that I don't fully understand the intention of a Java JAR. Is it that a JAR should not contain dependencies, and leave it to the runtime configuration (the app container, the runtime environment, etc.) to supply it with the dependencies it needs? I would assume if this is the case, an executable JAR is an exception to the rule, yes? Or, is it the intention for Java JARs to also include their dependencies? Either way, I don't want to be forcing my users to be copying-n-pasting 25+ JARs into their Ant libs; that's just cruel. I like the way WAR files are set up, where the classpath for dependencies is defined under the classes/ directory. I guess, ultimately, I'd like my JAR structure to look like: myfirsttask.jar/ com/ --> the root package of my compiled binaries config/ --> config files, XML, XSD, etc. classes/ --> all dependencies, guice-3.0.jar, xstream-1.4.3.jar, etc. META-INF/ MANIFEST.MF I assume that in order to accomplish this (and get the runtime classpath to also look into the classes/ directory), I'll need to modify the MANIFEST.MF somehow (I know there's a manifest attribute called ClassPath, I believe?). I'm just having a tough time putting everything together, and have a looming/lingering question about the very intent of JARs to begin with. Can someone please confirm whether Oracle intends for JARs to contain their dependencies or not? And, either way, what I would have to do in the manifest (or anywhere else) to make sure that, at runtime, the classpath can find the dependencies stored under the classes/ directory? Thanks in advance!

    Read the article

  • codeIgniter: pass parameter to a select query from previous query

    - by krike
    I'm creating a little management tool for the browser game Travian. I select all the villages from the database and I want to display some content that's unique to each village. But in order to query for those unique details I need to pass the id of the village. How should I do this? This is my code (controller):

        function members_area()
        {
            global $site_title;
            $this->load->model('membership_model');

            if($this->membership_model->get_villages())
            {
                $data['rows'] = $this->membership_model->get_villages();

                $id = 1; //this should be dynamic, but how?
                if($this->membership_model->get_tasks($id)):
                    $data['tasks'] = $this->membership_model->get_tasks($id);
                endif;
            }

            $data['title'] = $site_title." | Your account";
            $data['main_content'] = 'account';
            $this->load->view('template', $data);
        }

    And these are the two functions I'm using in the model:

        function get_villages()
        {
            $q = $this->db->get('villages');
            if($q->num_rows() > 0)
            {
                foreach ($q->result() as $row)
                {
                    $data[] = $row;
                }
                return $data;
            }
        }

        function get_tasks($id)
        {
            $this->db->select('name');
            $this->db->from('tasks');
            $this->db->where('villageid', $id);
            $q = $this->db->get();
            if($q->num_rows() > 0)
            {
                foreach ($q->result() as $task)
                {
                    $data[] = $task;
                }
                return $data;
            }
        }

    And of course the view:

        <?php foreach($rows as $r) : ?>
            <div class="village">
                <h3><?php echo $r->name; ?></h3>
                <ul>
                    <?php foreach($tasks as $task): ?>
                        <li><?php echo $task->name; ?></li>
                    <?php endforeach; ?>
                </ul>
                <?php echo anchor('site/add_village/'.$r->id.'', '+ add new task'); ?>
            </div>
        <?php endforeach; ?>

    ps: please do not remove the comment in the first block of code!
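    One possible approach (a sketch reusing the question's own model methods, not a drop-in answer) is to fetch the tasks for every village inside the controller and key them by village id, so the view can look up the task list for the row it is currently rendering:

        function members_area()
        {
            global $site_title;
            $this->load->model('membership_model');

            $data['rows']  = array();
            $data['tasks'] = array();

            $villages = $this->membership_model->get_villages();
            if($villages)
            {
                $data['rows'] = $villages;
                foreach($villages as $village)
                {
                    // tasks for this particular village, keyed by its id
                    $data['tasks'][$village->id] = $this->membership_model->get_tasks($village->id);
                }
            }

            $data['title'] = $site_title." | Your account";
            $data['main_content'] = 'account';
            $this->load->view('template', $data);
        }

    In the view, the inner loop would then iterate over $tasks[$r->id] instead of $tasks, guarded with a check such as !empty($tasks[$r->id]), since get_tasks() returns nothing for a village without tasks. An alternative is a single joined query in the model, which avoids running one query per village.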

    Read the article

  • calling startActivity() inside of an instance method - causing a NullPointerException

    - by Cole
    Heya - I'm trying to call startActivity() from a class that extends AsyncTask, in its onPostExecute(). Here's the flow.

    Class that extends AsyncTask:

        protected void onPostExecute() {
            Login login = new Login();
            login.pushCreateNewOrChooseExistingFormActivity();
        }

    Class that extends Activity:

        public void pushCreateNewOrChooseExistingFormActivity() {
            // start the CreateNewOrChooseExistingForm Activity
            Intent intent = new Intent(Intent.ACTION_VIEW);
            // **ERROR HERE**
            intent.setClassName(this, CreateNewOrChooseExistingForm.class.getName());
            startActivity(intent);
        }

    And I get this error every time:

        03-17 16:04:29.579: ERROR/AndroidRuntime(1503): FATAL EXCEPTION: main
        03-17 16:04:29.579: ERROR/AndroidRuntime(1503): java.lang.NullPointerException
        03-17 16:04:29.579: ERROR/AndroidRuntime(1503):     at android.content.ContextWrapper.getPackageName(ContextWrapper.java:120)
        03-17 16:04:29.579: ERROR/AndroidRuntime(1503):     at android.content.ComponentName.<init>(ComponentName.java:62)
        03-17 16:04:29.579: ERROR/AndroidRuntime(1503):     at android.content.Intent.setClassName(Intent.java:4850)
        03-17 16:04:29.579: ERROR/AndroidRuntime(1503):     at com.att.AppName.Login.pushCreateNewOrChooseExistingFormActivity(Login.java:47)

    For iOS developers: I'm just trying to push a new view controller onto a navigation controller's stack, a la pushViewController:animated:. Which, apparently, is hard to do on this platform. Any ideas? Thanks in advance!

    UPDATE - FIXED: Per @Falmarri's advice, I managed to resolve this issue. First of all, I'm no longer calling Login login = new Login(); to create a new Login object. Bad. Bad. Bad. No cookie. Instead, when preparing to call .execute(), the tutorial I followed suggests passing the calling Activity as a context to the class that executes the AsyncTask, for my purposes as shown below:

        CallWebServiceTask task = new CallWebServiceTask();
        // pass the login object to the task
        task.applicationContext = login;
        // execute the task in the background, passing the required params
        task.execute(login);

    Now, in onPostExecute(), I can get to my Login object's methods like so:

        ((Login) applicationContext).pushCreateNewOrChooseExistingFormActivity();
        ((Login) applicationContext).showLoginFailedAlert(result.get("httpResponseCode").toString());

    Hope this helps someone else out there, especially iOS developers transitioning over to Android!
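    A slightly more self-contained variant of that fix (an illustrative sketch, not the original poster's exact code: the Map result type and the constructor are assumptions) is to hand the calling Activity to the task through its constructor instead of a public field, so the task never has to instantiate an Activity itself:

        import java.util.Map;

        import android.content.Intent;
        import android.os.AsyncTask;

        public class CallWebServiceTask extends AsyncTask<Login, Void, Map<String, Object>> {

            // The Activity that started the task; it provides a valid Context in onPostExecute().
            private final Login caller;

            public CallWebServiceTask(Login caller) {
                this.caller = caller;
            }

            @Override
            protected Map<String, Object> doInBackground(Login... params) {
                // ... perform the web service call and return its results ...
                return null;
            }

            @Override
            protected void onPostExecute(Map<String, Object> result) {
                // startActivity() now runs against a fully constructed Activity,
                // not against a Login created with "new".
                Intent intent = new Intent(caller, CreateNewOrChooseExistingForm.class);
                caller.startActivity(intent);
            }
        }

    From inside the Login activity the task would be kicked off with new CallWebServiceTask(this).execute(this);. One caveat: holding a strong reference to an Activity in an AsyncTask can leak it across configuration changes (e.g. rotation), so for long-running calls it is safer to cancel the task when the Activity is destroyed.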

    Read the article

  • SQL SERVER – 3 Challenges for DBA and Smart Solutions

    - by Pinal Dave
    Developers' lives are never easy. A DBA's life is even crazier.

    DBA's Life

    When developers wake up in the morning, most of the time they have no idea what challenges they are going to face that day. Of course, most developers know the project and the roadmap they are working on; however, they have no clue what coding challenges that day will bring. A DBA's life is even crazier. When DBAs wake up in the morning, they often give thanks that they were not disturbed during the night by server issues. The very next thing they wish is that the day does not bring a challenge they can't solve. The problems DBAs face every single day are mostly unpredictable, and they just have to solve them as they come. The life of a DBA is not always bad, though. There are always ways and methods to overcome various challenges. Let us look at three of those challenges and how a DBA can use various tools to overcome them.

    Challenge #1: Synchronize Data Across Servers

    A very common challenge DBAs receive is having to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish, and it is nearly impossible to do with plain T-SQL alone. Thankfully, there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL.

    Challenge #2: SQL Report Builder

    DBAs are often asked to build reports on the go. It really annoys DBAs, but hardly anyone cares. No matter how busy a DBA is, they are called upon to build reports on very short notice. I personally like to avoid any task that lands on me by accident, and building reports can be boring. I would rather spend time on high availability, disaster recovery, and performance tuning than on building reports. I use a third-party SQL tool when I have to work with SQL reports. Some tools have extended reporting capabilities; that group of products includes the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server.

    Challenge #3: Work with the OTHER Database

    The manager does not understand that MySQL is different from SQL Server and SQL Server is different from Oracle. For them, everything is the same. Hundreds of times in my career I have faced a situation where I am given a database to manage, or some task to do, while the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and that I can do anything. Honestly, I can't, but I hardly dare to argue. I fall back on a third-party tool to manage a database when it is not in my comfort zone. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL very well). To simplify the search for a problem query, let us use the MySQL Profiler in dbForge Studio for MySQL. It provides commands such as a Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing the same: MySQL – Profiler: A Simple and Convenient Tool for Profiling SQL Queries.

    Well, that's it! There have been many other occasions when I was saved by a tool. Maybe some other day I will write part 2 of this blog post.
Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL. Tagged: Devart, SQL Tool

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >