Search Results

Search found 40429 results on 1618 pages for 'change password'.

  • Managing software projects - advice needed

    - by Callum
    I work for a large government department as part of an IT team that manages and develops websites as well as stand-alone web applications. We're running into problems somewhere in the SDLC that don't rear their ugly head until time and budget are starting to run out. We try to be "Agile" (software specifications are deliberately kept light, and clients have direct access to the developers any time they want), and we are also in a reasonably peculiar position in that we are not allowed to make profit from the services we provide. We only service the divisions within our government department, and can only charge for the time and effort we actually put into a project. So if we deliver a project that we have over-quoted on, we will only invoice for the actual time spent.

    Our software specifications are not as thorough as they could be, but they always include at a minimum:

    • Wireframe mockups for every form view
    • A data dictionary of all field inputs
    • Descriptions of any business rules that affect the system
    • Descriptions of the outputs

    I'm new to software management, but I've overseen enough software projects now to know that as soon as users start observing demos of the system, they start making a huge amount of requests like "Can we add a few more fields to this report... can we redesign the look of this interface... can we send an email at this part of the workflow... can we take this button off this view... can we make this function redirect to a different screen... can we change some text on this screen... can we create a special account where someone can log in and get access to X... this report takes too long to run, can it be optimised... can we remove this step in the workflow... there's got to be a better image we can put here..." and so on. Some changes are tiny and can be implemented reasonably quickly, but there could be up to 50-100 or so of such requests during the course of the SDLC. Other change requests are what clients claim they "just assumed would be part of the system", even if not explicitly spelled out in the spec.

    We are having a lot of difficulty managing this process. With no experienced software project managers in our team, we need to come up with a better way to both internally identify whether work being requested is "out of spec", and be able to communicate this to a client in such a manner that they can understand why what they are asking for is "extra" work. We need a way to track this work and be transparent with it.

    In the spirit of Agile development, where we are not spec'ing software systems into the ground and back again before development begins, and bearing in mind that clients have access to any developer any time they want, I am looking for some tips and pointers from experienced software project managers on how to handle this sort of "scope creep" problem: tracking it, being transparent with it, and communicating it to clients such that they understand it. Happy to clarify anything as needed. I really appreciate anyone who takes the time to offer some advice. Thanks.

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    The term "client" used here refers to a client server, not to the client's browser.

    Before-cache workflow:
    1. Client makes an HTTP request
    2. Server processes the request
    3. Server stores the parsed results into memcache for next use (cached indefinitely)
    4. Server returns the results to the client
    5. Client gets the result and stores it into the client's local memcache with a TTL

    After-cache workflow:
    1. Another client makes an HTTP request
    2. Memcache hit: the memcached results are returned to the client
    3. Client gets the result and stores it into the client's local memcache with a TTL

    (TTL = time to live.) It is possible for me to know when the data was updated, and to expire the relevant memcache entries accordingly. However, the client-side cache TTL has pitfalls: any data update before the TTL expires is not picked up by the client memcache; conversely, where there is no update, the client memcache still expires after the TTL; and the first request (or concurrent requests) after the cache TTL gets throttled, as it needs to repeat the before-cache workflow. In cases where the client requires several HTTP requests on a single web page, this could be very bad for performance. The ideal solution would be for the client to cache indefinitely until further notice. Here are three proposals for that "further notice":

    Proposal 1: Make use of HTTP headers (current implementation)
    1. Client sends an HTTP request with a last-modified-time header
    2. Server checks: if the last data modified time equals the last cache time, return status 304
    3. Client decides further processing based on the header
    Good: saves some parsing for the client; less data transfer.
    Bad: firing an HTTP request is still slow; the server end still needs to process lots of requests.

    Proposal 2: Consistently issue an HTTP request to check the last modified time of all data groups
    1. Client fires an HTTP request
    2. Server returns the last modified time for every data group
    3. Client compares its local last cache time with the result
    4. If a data group's last cache time < the server's last modified time, request that data group again
    Good: only fetches what is not up to date; fewer requests for the server.
    Bad: every web page requires an HTTP request.

    Proposal 3: Tell the client when new data is available (push)
    1. The server end notices a change to a data group
    2. It notifies clients of the change
    3. It helps clients fetch the data again
    4. The client's local memcache is reset after the data is parsed
    Good: lets the cache act/behave like a true cache.
    Bad: encourages race conditions.

    My preference is proposal 3, and something like Gearman could be ideal: where there is a change, the Gearman server sends the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy.)
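
    For concreteness, here is a minimal sketch of what Proposal 1's conditional fetch might look like on the client server, written in Java purely for illustration. The HashMap stands in for the client's local memcache (no TTL: entries live until a 200 response replaces them), and the class, field, and URL names are invented for this sketch, not taken from any existing codebase. The server is assumed to honour If-Modified-Since with a 304.

        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.HashMap;
        import java.util.Map;

        public class CacheUntilNotice {

            private static class Entry {
                String body;
                long lastModified;
            }

            // Stand-in for the client server's local memcache: no TTL,
            // an entry is only replaced when the server says it changed.
            private final Map<String, Entry> cache = new HashMap<>();

            public String fetch(String url) throws Exception {
                Entry cached = cache.get(url);
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                if (cached != null) {
                    conn.setIfModifiedSince(cached.lastModified); // sends If-Modified-Since
                }
                int status = conn.getResponseCode();
                if (cached != null && status == HttpURLConnection.HTTP_NOT_MODIFIED) {
                    return cached.body; // 304: cached copy still valid, no re-parse
                }
                Entry fresh = new Entry(); // 200: data changed, replace the cache entry
                fresh.lastModified = conn.getLastModified();
                fresh.body = new String(conn.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
                cache.put(url, fresh);
                return fresh.body;
            }
        }

    The HTTP round trip per request is exactly the cost Proposal 1 pays; Proposal 3 would replace the If-Modified-Since check with a push that evicts or replaces cache entries directly.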

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first [closed]

    - by mattalexx
    Background

    I have a client site that consists of a CakePHP installation and a Magento installation:

        /web/example.com/
        /web/example.com/app/                 <== CakePHP
        /web/example.com/app/webroot/         <== DocumentRoot
        /web/example.com/app/webroot/store/   <== Magento
        /web/example.com/config/              <== Site-wide config
        /web/example.com/vendors/             <== Site-wide libraries

    The server runs Apache 2.2.3.

    The problem

    The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc., are very complicated and usually require a bunch of filters.

    Abandoned solution

    At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for these reasons. Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using arbitrary subdomains that resolve to the main virtual host because it has ServerAlias *.example.com. So suddenly having them only use static.example.com isn't feasible. Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment to the one they were built in as I can. Also, I do not want to debug their code to make it portable.

    Half-baked solution

    After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this:

        /web/example.com/                          <== A bunch of their files are in here
        /web/example.com/app/webroot/              <== 1st DocumentRoot; a bunch of their files are in here
        /web/example.com/app/webroot/store/        <== Some more are in here
        /web/example.com/site/                     <== New dir; contains only site files
        /web/example.com/site/app/                 <== CakePHP
        /web/example.com/site/app/webroot/         <== 2nd DocumentRoot
        /web/example.com/site/app/webroot/store/   <== Magento
        /web/example.com/site/config/              <== Site-wide config
        /web/example.com/site/vendors/             <== Site-wide libraries

    After I made this change, I would not need to pay attention to anything except the stuff within /web/example.com/site/, and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found (no miscellaneous uploaded company projects are found), try finding something within /web/example.com/site/app/webroot/.

    Another thing to keep in mind: the site might have some problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory.

    Conclusion

    Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
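
    For what it's worth, here is one rough, untested sketch of how the half-baked solution might be expressed with mod_rewrite, assuming the module is enabled. The new site webroot becomes the real DocumentRoot (so $_SERVER['DOCUMENT_ROOT'] points at the site files, as hoped for above), and any request that matches an existing file or directory in the old webroot is rewritten to be served from there first:

        <VirtualHost *:80>
            ServerName www.example.com
            ServerAlias *.example.com

            # The 2nd DocumentRoot is the primary one, so DOCUMENT_ROOT
            # reads /web/example.com/site/app/webroot for the real site.
            DocumentRoot /web/example.com/site/app/webroot

            RewriteEngine On
            # If the request resolves to a file or directory in the legacy
            # webroot (the company's uploads), serve it from there instead.
            RewriteCond /web/example.com/app/webroot%{REQUEST_URI} -f [OR]
            RewriteCond /web/example.com/app/webroot%{REQUEST_URI} -d
            RewriteRule ^/(.*)$ /web/example.com/app/webroot/$1 [L]
        </VirtualHost>

    Rewriting to a full filesystem path in virtual-host context is documented mod_rewrite behaviour (much like an Alias), but it would be worth verifying on 2.2.3 specifically before relying on it; an Alias-based variant is an alternative if it misbehaves.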

    Read the article

  • Kinect losing tracked players with Beta2 SDK

    - by Eric B
    So I'm creating a game using the Beta2 SDK for Kinect. The issue I am having is that in the middle of gameplay, if another person enters the Kinect's FOV, it stops tracking the player and will not track anyone else for several minutes. Same deal if the player leaves the FOV and re-enters it. Here is what I'm using to detect players:

        void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            int playersAlive = 0;
            // reset lists
            skeletons = new Dictionary<int, SkeletonData>(); // create a new list for skeletons
            menuSkeleton = new List<SkeletonData>();
            initialPlayers = new Dictionary<float, SkeletonData>(); // create a new list for initialPlayers

            foreach (SkeletonData s in e.SkeletonFrame.Skeletons) // for each skeleton the kinect has detected
            {
                if (s.TrackingState == SkeletonTrackingState.Tracked) // players found
                {
                    menuSkeleton.Add(s);
                    if (initialized) // after initialization
                    {
                        skeletons.Add(s.TrackingID, s);
                    }
                    else // before initialization
                        initialPlayers.Add(s.Joints[JointID.ShoulderCenter].Position.X, s); // if we are not initialized then add this player to the initial player list.
                    playersAlive++;
                }
            }

            if (playersAlive == TOTAL_PLAYERS_ALLOWED) // if there is one player
            {
                if (!inMiniGame) // before the game starts
                    gameStart = DateTime.Now; // reset initialization timer

                if (!initialized) // before initialization // NOTE TO SELF I TOOK OUT && inMenu
                {
                    InitializePlayers();
                    if (DateTime.Now.Subtract(gameStart).TotalMilliseconds > INITIALIZATION_WAIT_TIME)
                    {
                        initialized = true; // initialize timers from fixed starting time
                        if (inMiniGame) // if the game has started
                        {
                            gamePause = gameStart; // TODO ERIC: Initialize any Timers Here
                        }
                    }
                }
            }
        }

        /// <summary>
        /// This function initializes the players, adding them to a list
        /// and making one of the players the menu controller. For LIM we will need to change the code so that the
        /// game only recognizes and supports one player at a time;
        /// variable names will need to be changed as well.
        /// </summary>
        private void InitializePlayers()
        {
            List<float> initialPos = new List<float>(); // used to track starting positions
            players = new Dictionary<int, Player>();
            foreach (float pos in initialPlayers.Keys)
            {
                initialPos.Add(pos); // add position of each initial player to list
            }
            float first = initialPos[0]; // left player first, right second
            Player player = new Player(initialPlayers[first].TrackingID, true);
            player.PlayerNumber = PLAYER_ONE;
            player.Skeleton = initialPlayers[first];
            player.Specifics = new PlayerSpecifics(player.PlayerNumber);
            player.Specifics.PauseTimer = gameStart;
            players.Add(initialPlayers[first].TrackingID, player);
            menuController = initialPlayers[first].TrackingID; // menu controller is player 1
        }

    This is a one-player game. Also, when the game starts, initialized is set to false, and it gets set to true when I go from the game's menu into the gameplay. So, can anyone see any issue with this code block that would cause the Kinect to lose players as they enter/exit the FOV and not re-track them? Thank you for any help.

    Read the article

  • The World of SQL Database Deployment

    - by GGBlogger
    In my early development days, I used Microsoft Access for building databases. It made things easy, since I only needed to package the database with the installation package so my clients would have access to it. When we began the development of a new package in Visual Studio .NET, I decided to use SQL Server Express. It was free and provided good tools - also free. I thought it was a tremendous idea until it came time to distribute our new software! What a surprise.

    The nightmare

    Ah, the choices! Detach the database and have the client reattach it to a newly installed - oh wait. FIRST my new client needs to download and install SQL Server Express with SQL Server Management Studio. That's not a great thing, and it is one more nightmare step for users who may have other versions of SQL installed. Then the question became - do we detach and reattach, or do we do a backup? It was too late (bad planning) to revert to Microsoft Access, but we badly needed a simple way to package and distribute both the database AND sample contents.

    Red Gate to the rescue

    It took me a while to find an answer, but I did find it in a package called SQL Packager, sold by a relatively unpublicized company in England called Red Gate. They call their products "ingeniously simple" and I must agree with that description. With SQL Packager you point to the database (more in a minute) you want to distribute. A few mouse clicks and dialogs and you have an executable file that you can ship virtually anywhere, virtually any way, which, when run, installs the database on your destination SQL Server instance! It really is that simple.

    Easier to show than tell

    Let's explore a hypothetical case. Let's say you have a local SQL database of customers and you have decided you want to share it with your subsidiaries or partners. Here is the underlying screen you will see on starting SQL Packager. There are a bunch of possibilities here, but I'm going to keep this relatively simple. At this point I simply want to illustrate the simplicity of generating an executable to deliver your database. You will notice that you can set up a new package, edit an existing package, or change a bunch of options.

    Start SQL Packager

    And the following is the default dialog you get on startup. In the next dialog, I've selected the Server and Database. I've also selected Windows Authentication. Pressing Next causes SQL Packager to run a number of checks and produce a report. Now you're given a comprehensive list of what is going to be packaged, and you're allowed to change it if you desire. I've never made any changes here, so I can't really make any suggestions. That just illustrates the comprehensive nature of so many Red Gate products, including this one. Clicking Next gives you still further options. SQL Packager then works its magic and shows you a dialog with the results. Packager then gives you a dialog of the scripts it has generated. (The capture above only shows 1 of 4 tabs.) Finally, pressing Next gives you the option to generate a .NET executable or a C# project. I've only generated an executable, so I'm not in a position to tell you what the C# project looks like. That may be the subject of further discussions. You can rename the package and tell SQL Packager where to save it.

    I've skipped a lot, but this will serve to illustrate the comprehensive (and ingenious) things Red Gate does. All in all, it's a superb way to distribute populated SQL databases. Oh - we'll save running the resulting executable for later also, but believe me, it's insanely simple.

    Read the article

  • Cream of the Crop

    - by KemButller
    JD Edwards has been working hard to ensure that you shouldn't have to work so hard! Yet there are still JD Edwards customers that may not be up to speed on all the new and/or improved tools and utilities we have delivered, all designed to make your life easier. So today, I want to share what I consider to be the cream of the crop... those items that every customer should know about and leverage to make ERP life just a little bit (or A LOT) easier! These are my top picks, the cream of a very good crop! Explore and enjoy, and gain some of your time back to do with as you please.

    • www.runjde.com - It's where to go when you need to know! The Resource Kits available on www.runjde.com provide comprehensive guides by user type. The guides provide brief descriptions of the wide array of resources available to the JD Edwards ecosystem, and links to each of those resources.

    • My Oracle Support (MOS) Information Centers - This link will take you to an index designed to provide you with simple and quick navigation to the available EnterpriseOne Information Centers. This index provides links to:
        • EnterpriseOne Application-specific Information Centers
        • EnterpriseOne Tools and Technology Information Centers
        • EnterpriseOne Performance Information Center
        • EnterpriseOne 9.1 and 9.0 Information Centers
      Information Centers give Oracle the ability to aggregate content for a given focus area and present this content in categories for easy browsing by our customers. Information Centers offer a variety of focused, dynamic content organized around one or more of the following tasks: Overview; Use; Troubleshooting; Patching and Maintenance; Install and Configure; Upgrade; Optimize Performance; Security; Certify.

    • JD Edwards Newsletters - Be in the know by reading the Global Customer Support Product Newsletters. They are PACKED with news and information covering a wide range of topics. They are a must-read if you want to know what's happening in the JD Edwards universe! Read the latest EnterpriseOne newsletter; read the latest World newsletter; and learn how to receive notification when a new newsletter edition is published.

    • Oracle Learning Library (OLL) - The Oracle Learning Library is the place to go for easy access to JD Edwards Application and Tools training. For a comprehensive view of the training available for a specific product/functional area, explore the Knowledge Paths. For Net Change (new feature) training, explore the TOI sessions (TOI stands for Transfer Of Information). Tip: be sure to experiment with the search filters!

    • www.upgradejde.com - The site designed to help customers and partners with the process of upgrading JD Edwards. The site is a wealth of information, tools and resources designed to assist in the evaluation, planning and execution steps required when upgrading. Of note is the wildly successful upgrade strategy known as "The Art of the Possible", wherein JD Edwards and many of our partners hold free workshops to teach customers how to conduct upgrades in 100 days or less. Equally important is the fact that on www.upgradejde.com, customers can gain visibility into planned enhancements using the Product and Technology Feature Catalogs. The catalogs are great for creating customer-specific reports about the net change between older releases and current or planned releases. Examples of other key resources on www.upgradejde.com are the product database changes between releases, extensibility guides (formerly known as programmer's guides), whitepapers, ROI calculators and much more!

    Read the article

  • Dual monitor not working completely in 12.10 after upgrade

    - by Mark Baldridge
    At 12.04, dual monitors worked perfectly. After upgrading to 12.10, the primary monitor works, but the second monitor only partly works. I am sure there is some difference between the releases that I have missed setting properly.

    System Settings - Displays shows both correctly as Acer 22" monitors at 1680x1050 (16:10). An icon on monitor 2 is present, but elongated; almost an artifact, since other icons from the primary screen are absent, but this one icon is there on the second monitor. The icons on both screens exist and can be selected. Painting is weird on monitor 2. The Launcher exists and works on both screens, but even with sticky edges off, the cursor stops at the left edge of monitor 2. Clicking on the text editor in the screen 2 launcher will launch gedit there. If I drag it, it leaves a trail of after-images, as if repaint is failing. If I move the cursor along the launcher, the help tags like "LibreOffice Writer" appear, but stay on screen unless I drag the active gedit window over them. Then part of the help bubbles are overwritten, leaving behind after-images of the gedit window on screen.

    What is really fascinating is that System Settings - Displays is now ignoring monitor selection, after allowing it earlier. Just before this, the help popup which said "Select a monitor to change its properties; drag to rearrange its placement" actually let me do that. Maybe it's a trick of where I grab the edge of the monitor in the Displays setting; I just found a working handle. When I drag monitor 1 to the right of monitor 2, "Apply" and confirm, both monitors work normally (although the right monitor lets the cursor slide off the right edge onto the left edge of monitor 1 - which sounds correct). Painting of windows does not leave an after-image.

    However, success is only temporary. The setting survives the reboot, but painting on the left monitor, now monitor 2, replicates the issues from before. The after-image of the gedit window and the small window for "Are you sure you want to close all programs and restart the computer?" are still on monitor 2 (on the left now), even though they are not real windows, nor do they have processes behind them. Curiously, in Displays, the "green" monitor on the left in the display window is matched by the right monitor's color in its upper left corner. That probably makes sense, as the one on the right is now monitor 1. If I repeat the "drag the left monitor to the right of the right monitor" step in the Displays window, things are oriented properly, with no display artifacts as I drag windows around either screen. Also, the description bubbles that pop up are overwritten on both screens, so none of those artifacts appear either. This goodness does not survive a reboot, however. I have not tried logging out and back in.

    All of this after positing that the motherboard VGA and HDMI ports could have been the issue. So, I installed an e-GeForce 7600 GT Dual DVI (I know the web thinks it is not DVI, but VGA, but the connectors are DVI). No change to the weird behavior. The good parts continue to work, the weirdness also persists, and swapping monitor positions seems to cure the issue. So, is there a setting I have missed? Given that "swapping" monitor 1 and 2 in System Settings - Displays makes it work, just not across boots, I suspect so.
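
    Not an answer, but a workaround worth trying while this is unresolved: the swap that Displays applies can also be scripted with xrandr and run at login, so the working arrangement is re-applied after every boot. A rough sketch, with output names that are only placeholders (run xrandr -q to find the real names for these two monitors):

        # Re-apply the working side-by-side arrangement at login.
        # DVI-0 / DVI-1 are placeholders; check `xrandr -q` for your outputs.
        xrandr --output DVI-0 --mode 1680x1050 --pos 0x0 \
               --output DVI-1 --mode 1680x1050 --right-of DVI-0

    Adding that command to Startup Applications would mimic the manual drag-and-Apply step on each login, assuming the underlying repaint bug really is reset by re-applying the layout.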

    Read the article

  • Oracle Applications Cloud Release 8 Customization: Your User Interface, Your Text

    - by ultan o'broin
    Introducing the User Interface Text Editor

    In Oracle Applications Cloud Release 8, there's an addition to the customization tool set, called the User Interface Text Editor (UITE). When signed in with an application administrator role, users launch this new editing feature from the Navigator's Tools > Customization > User Interface Text menu option. See how the editor is in there with the other customization tools?

    (Figure: User Interface Text Editor is launched from the Navigator Customization menu)

    Applications customers need a way to make changes to the text that appears in the UI without having to initiate an IT project. Business users can now easily change labels on fields, for example. Using a composer and an activated sandbox, these users can take advantage of the Oracle Metadata Services (MDS), add a key to a text resource bundle, and then type in their preferred label and its description (as a best practice for further work, I'd recommend always completing that description).

    (Figure: Changing a simplified UI field label using Oracle Composer)

    In Release 8, the UITE enables business users to easily change UI text on a much wider basis. As with composers, the UITE requires an activated sandbox where users can make their changes safely, before committing them for others to see. The UITE is used for editing UI text that comes from Oracle ADF resource bundles or from the Message Dictionary (or FND_MESSAGE_% tables, if you're old enough to remember such things). Functionally, the Message Dictionary is used for the text that appears in business rule-type error, warning or information messages, or as a text source when ADF resource bundles cannot be used. In the UITE, these Message Dictionary texts are referred to as Multi-part Validation Messages. If the text comes from ADF resource bundles, then it's categorized as User Interface Text in the UITE. This category refers to the text that appears in embedded help in the UI or in simple error, warning, confirmation, or information messages. The embedded help types used in the application are explained in an Oracle Fusion Applications User Experience (UX) design pattern set. The message types have a UX design pattern set too.

    Using the UITE

    The UITE enables users to search and replace text in UI strings using case-sensitive options, as well as by type. Users select singular and plural options for text changes, should they apply.

    (Figure: Searching and replacing text in the UITE)

    The UITE also provides users with a way to preview and manage changes on an exclusion basis, before committing to the final result. There might, for example, be situations where a phrase or word needs to remain different from how it's generally used in the application, depending on the context.

    (Figure: Previewing replacement text changes. Changes can be excluded where required.)

    Multi-Part Messages

    The Message Dictionary table architecture has been inherited from Oracle E-Business Suite days. However, there are important differences in the Oracle Applications Cloud version, notably the additional message text components, as explained in the UX design patterns. Message Dictionary text has a broad range of uses, as indicated, and it can also be reserved for internal application use, for use by PL/SQL and C programs, and so on. Message Dictionary text may even be concatenated together at run time, where required. The UITE handles the flexibility of such text architecture by enabling users to drill down into each message and see how it's constructed in total. That way, users can ensure that any text changes being made are consistent throughout the different message parts.

    (Figure: Multi-part (Message Dictionary) message components in the UITE)

    Message Dictionary messages may also use supportability-related numbers, the ones that appear appended to the message text in the application's UI. However, should you have the requirement to remove these numbers from users' view, the UITE is not the tool for the job. Instead, see my blog about using the Manage Messages UI.

    Read the article

  • Broken Views

    - by Ajarn Mark Caldwell
    "SELECT *" isn't just hazardous to performance; it can actually return blatantly wrong information.

    There are a number of blog posts and articles out there that actively discourage the use of the SELECT * FROM ... syntax. The two most common explanations that I have seen are:

    • Performance: The SELECT * syntax will return every column in the table, but frequently you really only need a few of the columns, and so by using SELECT * you are retrieving large volumes of data that you don't need, but the system has to process, marshal across tiers, and so on. It would be much more efficient to only select the specific columns that you need.

    • Future-proofing: If you are taking other shortcuts in your code, along with using SELECT *, you are setting yourself up for trouble down the road when enhancements are made to the system. For example, if you use SELECT * to return results from a table into a DataTable in .NET, and then reference columns positionally (e.g. myDataRow[5]), you could end up with bad data if someone happens to add a column into position 3, skewing all the remaining columns' ordinal positions. Or if you use INSERT...SELECT *, then you will likely run into errors when a new column is added to the source table in any position.

    And if you use SELECT * in the definition of a view, you will run into a variation of the future-proofing problem mentioned above. One of the guys on my team, Mike Byther, ran across this in a project we were doing, but fortunately he caught it while we were still in development. I asked him to put together a test to prove that this was related to the use of SELECT * and not some other anomaly. I'll walk you through the test script so you can see for yourself what happens.

    We are going to create a table and two views that are based on that table; one of them uses SELECT * and the other explicitly lists the column names. The script to create these objects is listed below.

        IF OBJECT_ID('testtab') IS NOT NULL DROP TABLE testtab
        go
        IF OBJECT_ID('testtab_vw') IS NOT NULL DROP VIEW testtab_vw
        go
        IF OBJECT_ID('testtab_vw_named') IS NOT NULL DROP VIEW testtab_vw_named
        go
        CREATE TABLE testtab (col1 NVARCHAR(5) null, col2 NVARCHAR(5) null)
        INSERT INTO testtab(col1, col2)
        VALUES ('A','B'), ('A','B')
        GO
        CREATE VIEW testtab_vw AS SELECT * FROM testtab
        GO
        CREATE VIEW testtab_vw_named AS SELECT col1, col2 FROM testtab
        go

    Now, to prove that the two views currently return equivalent results, select from them.

        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    OK, so far, so good. Now, what happens if someone makes a change to the definition of the underlying table, and that change results in a new column being inserted between the two existing columns? (Side note: I normally prefer to append new columns to the end of the table definition, but some people like to keep their columns alphabetized, and for clarity for later people reviewing the schema, it may make sense to group certain columns together. Whatever the reason, it sometimes happens, and you need to protect yourself and your code from the repercussions.)

        DROP TABLE testtab
        go
        CREATE TABLE testtab (col1 NVARCHAR(5) null, col3 NVARCHAR(5) NULL, col2 NVARCHAR(5) null)
        INSERT INTO testtab(col1, col3, col2)
        VALUES ('A','C','B'), ('A','C','B')
        go
        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    I would have expected that the view using SELECT * in its definition would essentially pass through the column name and still retrieve the correct data, but that is not what happens. When you run our two select statements again, you see that the view that is based on SELECT * actually retrieves the data based on the ordinal position of the columns at the time that the view was created. Sure, one work-around is to recreate the view, but you can't really count on other developers to know the dependencies you have built in, and they won't necessarily recreate the view when they refactor the table.

    I am sure that there are reasons and justifications for why views behave this way, but I find it particularly disturbing that you can have code asking for col2 but actually be receiving data from col3. By the way, for the record, this entire scenario and accompanying test script apply to SQL Server 2008 R2 with Service Pack 1.

    So, let the developer beware... know what assumptions are in effect around your code, and keep on discouraging people from using SELECT * syntax in anything but the simplest of ad-hoc queries.

    And of course, let's clean up after ourselves. To eliminate the database objects created during this test, run the following commands.

        DROP TABLE testtab
        DROP VIEW testtab_vw
        DROP VIEW testtab_vw_named
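
    One aside the post doesn't mention: besides dropping and recreating the view, SQL Server ships sp_refreshview for exactly this situation. It rebinds the metadata of a non-schema-bound view to the current table definition, so the * expands against the new column list. Run before the cleanup above, of course:

        EXEC sp_refreshview 'testtab_vw';
        -- After the refresh, the star view resolves col2 by name again
        SELECT 'star', col1, col2 FROM testtab_vw

    It still has to be run after the table changes, so it shares the weakness that someone has to remember the dependency; creating views WITH SCHEMABINDING is the way to make such table changes fail loudly instead.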

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time: W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and will provide a row for every day in the given time range. Each row contains aspects of that day, such as calendar year, month, week, quarter, etc., which allows it to be used in the time element when creating requests in Analytics to group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is also based on the same configured date range.

    The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) are related to the date range defined, which is a start date plus a rolling interval - generally the start date plus 3 years. In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near-real-time reporting in P6. This same date range is validated and used for the P6 Reporting Database. The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 will be the min date because we always back-fill to the beginning of the year. On April 2nd, the Extended Schema services are run and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process is run, the Reporting Database will pick up this change and also adjust the max date in W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limit may not be adjusted until a change occurs to cause a recalculation, but based on general system usage, the dates in these tables will progress forward with the rolling interval.

    Choosing a large date range can have an effect on the ETL process for the P6 Reporting Database. The extract portion of the process pulls spread data over into the STAR. The date range defines how long activity and resource assignment spread data is spread out in these tables. If an activity lasts 5 days, it will have 5 days of spread data. If a project lasts 5 years, and the date range is 3 years, the spread data after that 3-year date range will be bucketed into the last day in the date range. For the overall project, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now.

    This is an important question when choosing your date range: do you really need to see spread data down to the day 5 years into the future? Generally this amount of granularity years into the future is not needed. Remember, all those values 5, 10, 15, 20 years in the future are still available to report on; they would just be in more of a summary format at the activity or project level. The data is always there; the level of granularity is the decision.
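
    As a footnote, if you want to check the range your own STAR schema currently covers, a quick sanity check might look like the following, using the table and column names described above (assuming you can query the STAR schema directly):

        -- Min/max of the day dimension and the calendar support table
        SELECT MIN(day_dt)  AS min_day,  MAX(day_dt)  AS max_day  FROM W_DAY_D;
        SELECT MIN(daydate) AS min_date, MAX(daydate) AS max_date FROM W_Calendar_FS;

    The two pairs should track the configured start date and rolling interval; if they stop moving forward over time, the rolling range is not being picked up by the ETL.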

    Read the article

  • Web Service Example - Part 3: Asynchronous

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 3 of our web service examples. In this posting we'll take a look at firing the web service asynchronously and then filling in the UI when it completes. This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data.

    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Note: this is a different workspace than WS-Part2.

    What's different?

    In this example, when you click the Search button on the Forecast By Zip option, it now takes you directly to the results page, which is initially blank. When the web service returns a second or two later, the data pops into the UI. If you go back to the search page and hit Search, it will again clear the results and invoke the web service asynchronously. This isn't really that useful for this particular example, but it shows an important technique that can be used for other use cases.

    How it was done

    1) First we created a new class, ForecastWorker, that implements the Runnable interface. This is used as our worker class; we create an instance of it and pass it to a new thread that we create when the Search button is pressed, inside the retrieveForecast actionListener handler. Once the thread is started, retrieveForecast returns immediately.

    2) The rest of the code that we had previously in the retrieveForecast method has now been moved to retrieveForecastAsync. Note that we've also added synchronized specifiers on both these methods so they are protected from re-entrancy.

    3) The run method of the ForecastWorker class then calls the retrieveForecastAsync method. This executes the web service code that we had previously, but now on a separate thread so the UI is not locked. If we had already shown data on the screen, it would have appeared before this was invoked. Note that you do not see a loading indicator either, because this is on a separate thread and nothing is blocked.

    4) The last but very important aspect of this method is that once we update data in the collections from the data we retrieve from the web service, we call AdfmfJavaUtilities.flushDataChangeEvents(). We need this because as data is updated in the background thread, those data change events are not propagated to the main thread until you explicitly flush them. As soon as you do this, the UI will get updated if any changes have been queued. (A sketch of the overall shape appears at the end of this post.)

    Summary of Fundamental Changes In This Application

    The most fundamental change is that we are invoking and handling our web services in a background thread and updating the UI when the data returns. This allows an application to provide a better user experience in many cases, because data that is already available locally is displayed while lengthy queries or web service calls are done in the background, with the UI updated when they return. There are many different use cases for background threads, and this is just one example of optimizing the user experience and generating a better mobile application.
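
    As promised, here is a rough reconstruction of the shape being described - not the actual sample code from the zip. Everything besides Runnable, Thread, and AdfmfJavaUtilities.flushDataChangeEvents() (assumed to live in the usual oracle.adfmf.framework.api package) is an invented name for illustration:

        import oracle.adfmf.framework.api.AdfmfJavaUtilities;

        // Hypothetical interface for whatever bean owns the web service call
        interface ForecastSource {
            void retrieveForecastAsync();
        }

        public class ForecastWorker implements Runnable {

            private final ForecastSource source;

            public ForecastWorker(ForecastSource source) {
                this.source = source;
            }

            @Override
            public void run() {
                // Long-running web service call, off the UI thread
                source.retrieveForecastAsync();
                // Push the queued data change events over to the main thread
                // so the UI repaints with the new collection contents
                AdfmfJavaUtilities.flushDataChangeEvents();
            }
        }

        // In the bean, the actionListener just starts the thread and returns:
        //
        //     public synchronized void retrieveForecast() {
        //         new Thread(new ForecastWorker(this)).start(); // returns immediately
        //     }

    The key design point is the last call in run(): without the explicit flush, the collection updates made on the background thread never reach the UI.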

    Read the article

  • Dynamically Changing the Display Names of Menus and Popups

    - by Geertjan
    Something very interesting, and handy to know when needed: "menuText" and "popupText" (from org.openide.awt.ActionRegistration) can be changed dynamically, via "putValue", as shown below for "popupText". The Action class, in this case, needs to be eager, hence you won't receive the object of interest via the constructor, but you can easily use the global Lookup for that purpose instead, as also shown below.

        import java.awt.event.ActionEvent;
        import java.text.DateFormat;
        import java.text.SimpleDateFormat;
        import javax.swing.AbstractAction;
        import org.netbeans.api.project.Project;
        import org.netbeans.api.project.ProjectInformation;
        import org.netbeans.api.project.ProjectUtils;
        import org.openide.awt.ActionID;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionRegistration;
        import org.openide.util.Utilities;

        @ActionID(
                category = "Project",
                id = "org.ptt.DemoProjectAction")
        @ActionRegistration(
                lazy = false,
                displayName = "NOT-USED")
        @ActionReference(path = "Projects/Actions", position = 0)
        public final class DemoProjectAction extends AbstractAction {

            private final ProjectInformation context;

            public DemoProjectAction() {
                putValue("popupText", "Select Me To See Current Time!");
                context = ProjectUtils.getInformation(
                        Utilities.actionsGlobalContext().lookup(Project.class));
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                refresh();
            }

            protected void refresh() {
                DateFormat formatter = new SimpleDateFormat("HH:mm:ss");
                String formatted = formatter.format(System.currentTimeMillis());
                putValue("popupText", "Time: " + formatted + " (" + context.getDisplayName() + ")");
            }
        }

    Now, let's do something semi-useful and display, in the popup that is available when you right-click a project, the time since the last change was made anywhere in the project. That is, we can listen recursively for any changes done within a project and then update the popup with the newly acquired information, dynamically:

        import java.awt.event.ActionEvent;
        import java.text.DateFormat;
        import java.text.SimpleDateFormat;
        import javax.swing.AbstractAction;
        import org.netbeans.api.project.Project;
        import org.netbeans.api.project.ProjectUtils;
        import org.openide.awt.ActionID;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionRegistration;
        import org.openide.filesystems.FileAttributeEvent;
        import org.openide.filesystems.FileChangeListener;
        import org.openide.filesystems.FileEvent;
        import org.openide.filesystems.FileRenameEvent;
        import org.openide.util.Utilities;

        @ActionID(
                category = "Project",
                id = "org.ptt.TrackProjectTimerAction")
        @ActionRegistration(
                lazy = false,
                displayName = "NOT-USED")
        @ActionReference(
                path = "Projects/Actions",
                position = 0)
        public final class TrackProjectTimerAction extends AbstractAction implements FileChangeListener {

            private final Project context;
            private Long startTime;
            private Long changedTime;
            private DateFormat formatter;

            public TrackProjectTimerAction() {
                putValue("popupText", "Enable project time tracker");
                this.formatter = new SimpleDateFormat("HH:mm:ss");
                context = Utilities.actionsGlobalContext().lookup(Project.class);
                context.getProjectDirectory().addRecursiveListener(this);
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                startTimer();
            }

            protected void startTimer() {
                startTime = System.currentTimeMillis();
                String formattedStartTime = formatter.format(startTime);
                putValue("popupText", "Timer started: " + formattedStartTime
                        + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")");
            }

            @Override
            public void fileChanged(FileEvent fe) {
                changedTime = System.currentTimeMillis();
                formatter = new SimpleDateFormat("mm:ss");
                String formattedLapse = formatter.format(changedTime - startTime);
                putValue("popupText", "Time since last change: " + formattedLapse
                        + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")");
                startTime = changedTime;
            }

            @Override
            public void fileFolderCreated(FileEvent fe) {}

            @Override
            public void fileDataCreated(FileEvent fe) {}

            @Override
            public void fileDeleted(FileEvent fe) {}

            @Override
            public void fileRenamed(FileRenameEvent fre) {}

            @Override
            public void fileAttributeChanged(FileAttributeEvent fae) {}
        }

    Read the article

  • MySQL Server 5.6 default my.cnf and my.ini

    - by user12626240
    We've introduced a default my.cnf / my.ini file for MySQL Server that you can now see in the 5.6.8 release candidate:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # socket = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    There is also a template file called my-default.cnf or my-default.ini that has these lines near the start:

        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

    On Linux systems, the mysql_install_db command will copy the template file to the final location, where the server will read and use the file, removing the extra three lines. On Windows, the installer will create extra settings based on the answers you gave during installation. Neither will overwrite an existing my.cnf or my.ini file.

    The only initially active setting here is to change the value of sql_mode from the server default of NO_ENGINE_SUBSTITUTION to NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES. This strict mode changes warnings for some non-standard behaviour into errors. This can cause applications which rely on the non-standard things, like dates that aren't valid, to lose data. If we had just changed the server default, the new setting would affect all servers that lack an explicit sql_mode setting, including those where strict mode is harmful. So we did it in the default file instead, because that will only affect new server installations. You should expect that in our next version after 5.6, the server default will include STRICT_TRANS_TABLES. Our Windows installer and some of our connectors already use STRICT_TRANS_TABLES by default. Strict has been our preferred setting for many years, and it is good to see some development platforms are using it.

    If you need the old behaviour, just remove the STRICT_TRANS_TABLES setting. If you do this, please also ask your application provider to make it unnecessary. They can do that by setting the session sql_mode setting in their own connections, so the rest of the applications using the server don't have to have an undesirable default (a one-line example follows at the end of this post).

    We've kept this file as small as possible, because we found that our old files were too big and confused people. We've also now removed the old my-huge and related example files. One key part of this is the link to the documentation, where we will provide an introduction to some key settings. We'd like to hear your feedback on settings that will benefit most users or are most important to call out for existing users. Please do that by commenting here or, if you prefer, by adding comments to this bug report.
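
    As an illustration of that session-level suggestion: an application that genuinely needs the old behaviour could issue something like this right after connecting, leaving the stricter default in place for every other client of the server (a sketch only; adjust the mode list to what the application actually relies on):

        -- Relax sql_mode for this connection only
        SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION';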

    Read the article

  • glutPostRedisplay() does not update display

    - by A D
    I am currently drawing a rectangle to the screen and would like to move it by using the arrow keys. However, when I press an arrow key, the vertex data changes but the display does not refresh to reflect these changes, even though I am calling glutPostRedisplay(). Is there something else that I must do? My code:

        #include <GL/glew.h>
        #include <GL/freeglut.h>
        #include <GL/freeglut_ext.h>
        #include <iostream>
        #include "Shaders.h"

        using namespace std;

        const int NUM_VERTICES = 6;
        const GLfloat POS_Y = -0.1;
        const GLfloat NEG_Y = -0.01;

        struct Vertex
        {
            GLfloat x;
            GLfloat y;
            Vertex() : x(0), y(0) {}
            Vertex(GLfloat givenX, GLfloat givenY) : x(givenX), y(givenY) {}
        };

        Vertex left_paddle[NUM_VERTICES];

        void init()
        {
            glClearColor(1.0f, 1.0f, 1.0f, 0.0f);

            left_paddle[0] = Vertex(-0.95f, 0.95f);
            left_paddle[1] = Vertex(-0.95f, 0.0f);
            left_paddle[2] = Vertex(-0.85f, 0.95f);
            left_paddle[3] = Vertex(-0.85f, 0.95f);
            left_paddle[4] = Vertex(-0.95f, 0.0f);
            left_paddle[5] = Vertex(-0.85f, 0.0f);

            GLuint vao;
            glGenVertexArrays( 1, &vao );
            glBindVertexArray( vao );

            GLuint buffer;
            glGenBuffers(1, &buffer);
            glBindBuffer(GL_ARRAY_BUFFER, buffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(left_paddle), NULL, GL_STATIC_DRAW);

            GLuint program = init_shaders( "vshader.glsl", "fshader.glsl" );
            glUseProgram( program );

            GLuint loc = glGetAttribLocation( program, "vPosition" );
            glEnableVertexAttribArray( loc );
            glVertexAttribPointer( loc, 2, GL_FLOAT, GL_FALSE, 0, 0);

            glBindVertexArray(vao);
        }

        void movePaddle(Vertex* array, GLfloat change)
        {
            for(int i = 0; i < NUM_VERTICES; i++)
            {
                array[i].y = array[i].y + change;
            }
            glutPostRedisplay();
        }

        void special( int key, int x, int y )
        {
            switch ( key )
            {
                case GLUT_KEY_DOWN:
                    movePaddle(left_paddle, NEG_Y);
                    break;
            }
        }

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glDrawArrays(GL_TRIANGLES, 0, 6);
            glutSwapBuffers();
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutInitWindowSize(500,500);
            glutCreateWindow("Rectangle");
            glewInit();
            init();
            glutDisplayFunc(display);
            glutSpecialFunc(special);
            glutMainLoop();
            return 0;
        }

    Read the article

  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    Recently I was tasked with implementing a way of adding support for versioning of hardware packet specifications to one of our libraries. First, a bit of information about the project.

    We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons and a 2- or 4-digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code and DD is the device ID. These packet specs differ from one command to the next, obviously, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of {SOH}AADDTEXT{ETX}, which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds an extended byte (or bytes) onto the end of the packet, like this: {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed width for this example. We have now added a new field onto the end which could be used to, say, specify the color or flash rate of the text/buttons.

    Currently this Java library only supports one version of the commands, the latest. In our hardware library we would have a class for each command, say a DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the string ("{SOH}AADDTEXTE{ETX}") using the values from the class. In practice we would create the Args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN.

    Some of our other commands' specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears all of the modules behind a specific controller device of their text. We may overload this packet to have option fields with multiple meanings, like {SOH}AAOC{ETX}, where OC is literal text which tells the controller to only clear text on a specific module type and to leave the others alone; or the spec could also have an option format of {SOH}AADD{ETX} to clear the text off a specific device. Currently, in the method which generates the packet string, we would evaluate fields on the args class to determine which spec we will be using when formatting the packet. For this example, it would be along the lines of:

        if m_DeviceID != null then use {SOH}AADD{ETX}
        else if m_ClearOCs == true then use {SOH}AAOC{ETX}
        else use {SOH}AA{ETX}

    I had considered using XML or a database to store String.format format strings, linked to firmware version numbers in some table. We would load them up at startup and pass in the version number of the hardware's firmware we are currently using (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic nature of how we select which version of the command to use. I then considered using a rule engine to possibly build out expressions which could be interpreted at runtime, to evaluate the args class's state and from that select the appropriate format string to use, but my brief look at rule engines for Java scared me away with their complexity. While it seems like it might be a viable solution, it seems overly complex.

    So this is why I am here. I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the args classes, but if the trade-off was good enough, I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices, design methodologies, APIs or other resources which I could use to accomplish:

    • Logic to determine which set of commands to use for a given firmware version
    • Of those commands, which version of each command to use (based on the args class's state)
    • Keeping the rules logic decoupled from the application, so as to avoid needing releases for every firmware version
    • Being simple enough that I don't need weeks of study and trial and error to implement effectively
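
    To make the "format strings in data" idea above concrete - far short of a rule engine - here is a hypothetical sketch in Java. Every name in it (keys, codes, variant labels) is invented for illustration; the idea is a template table keyed by version, action code, and a runtime-chosen variant, where the variant is the one piece still computed in code from the args class's state:

        import java.util.HashMap;
        import java.util.Map;

        public class PacketBuilder {

            private static final char SOH = 0x01;
            private static final char ETX = 0x03;

            // Templates keyed by "version:action:variant". In practice these
            // could be loaded from XML or a database at startup.
            private static final Map<String, String> TEMPLATES = new HashMap<>();
            static {
                TEMPLATES.put("1:14:default",  SOH + "14%s%s" + ETX);    // AA DD TEXT
                TEMPLATES.put("2:14:default",  SOH + "14%s%s%s" + ETX);  // AA DD TEXT E
                TEMPLATES.put("1:20:all",      SOH + "20" + ETX);        // clear all modules
                TEMPLATES.put("1:20:byType",   SOH + "20OC" + ETX);      // clear one module type
                TEMPLATES.put("1:20:byDevice", SOH + "20%s" + ETX);      // clear one device
            }

            public static String build(int version, int action, String variant, Object... args) {
                String template = TEMPLATES.get(version + ":" + action + ":" + variant);
                if (template == null) {
                    throw new IllegalArgumentException("No spec for this version/command");
                }
                return String.format(template, args);
            }

            public static void main(String[] a) {
                // The variant would come from the args class, mirroring the
                // if/else chain in the post but returning a key instead:
                System.out.println(build(1, 20, "byDevice", "07"));
            }
        }

    This keeps the templates in data (so new firmware versions only add rows) while leaving the state-dependent selection logic in the one place that already knows the args class's state.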

    Read the article

  • Javascript Inheritance Part 2

    - by PhubarBaz
    A while back I wrote about Javascript inheritance, trying to figure out the best and easiest way to do it (http://geekswithblogs.net/PhubarBaz/archive/2010/07/08/javascript-inheritance.aspx). That was 2 years ago and I've learned a lot since then. But only recently have I decided to just leave classical inheritance behind and embrace prototypal inheritance.

    Most of us were trained in classical inheritance, using class hierarchies in a typed language. Unfortunately Javascript doesn't follow that model. It is both classless and typeless, which is hard to fathom for someone who's been using classes for the last 20 years. For the last two or three years since I got into Javascript I've been trying to find the best way to force it into the class model, without much success. It's clunky and verbose and hard to understand. I think my biggest problem was that it felt so wrong to add or change object members at run time. Every time I did it I felt like I needed a shower. That's the 20 years of classical inheritance in me.

    Finally I decided to embrace change and do something different. I decided to use the factory pattern to build objects instead of trying to use inheritance. Javascript was made for the factory pattern because of the way you can construct objects at runtime. In the factory pattern you have a factory function that you call and tell it to give you a certain type of object back. The factory function takes care of constructing the object to your specification.

    Here's an example. Say we want to have some shape objects, and they have common attributes like id and area that we want to depend on in other parts of our application. So the first thing to do is create a factory object and give it a factory method to create an abstract shape object. The factory method builds the object then returns it.

        var shapeFactory = {
            getShape: function(id) {
                var shape = {
                    id: id,
                    area: function() {
                        throw "Not implemented";
                    }
                };
                return shape;
            }
        };

    Now we can add another factory method to get a rectangle. It calls the getShape() method first and then adds an implementation to it.

        getRectangle: function(id, width, height) {
            var rect = this.getShape(id);
            rect.width = width;
            rect.height = height;
            rect.area = function() {
                return this.width * this.height;
            };
            return rect;
        }

    That's pretty simple, right? No worrying about hooking up prototypes and calling base constructors or any of that crap I used to do. Now let's create a factory method to get a cuboid (rectangular cube). The cuboid object will extend the rectangle object. To get the area we will call into the base object's area method and then multiply that by the depth.

        getCuboid: function(id, width, height, depth) {
            var cuboid = this.getRectangle(id, width, height);
            cuboid.depth = depth;
            var baseArea = cuboid.area;
            cuboid.area = function() {
                var a = baseArea.call(this);
                return a * this.depth;
            };
            return cuboid;
        }

    See how we called the area method in the base object? First we save it off in a variable, then we implement our own area method and use call() to call the base function. For me this is a lot cleaner and easier than trying to emulate class hierarchies in Javascript.

    Read the article

  • Is this spaghetti code already? [migrated]

    - by hephestos
    I post the following code, written all by hand. Why do I have the feeling that it is a western spaghetti on its own? And second, could it be written better?

        <div id="form-board" class="notice" style="height: 200px; min-height: 109px; width: auto; display: none;">
            <script type="text/javascript">
                jQuery(document).ready(function(){
                    $(".form-button-slide").click(function(){
                        $( "#form-board" ).dialog();
                        return false;
                    });
                });
            </script>
            <?php
            echo $this->Form->create('mysubmit');
            echo $this->Form->input('inputs', array('type' => 'select', 'id' => 'inputs', 'options' => $inputs));
            echo $this->Form->input('Fields', array('type' => 'select', 'id' => 'fields', 'empty' => '-- Pick a state first --'));
            echo $this->Form->input('inputs2', array('type' => 'select', 'id' => 'inputs2', 'options' => $inputs2));
            echo $this->Form->input('Fields2', array('type' => 'select', 'id' => 'fields2', 'empty' => '-- Pick a state first --'));
            echo $this->Form->end("Submit");
            ?>
        </div>
        <div style="width:100%"></div>
        <div class="form-button-slide" style="float:left;display:block;">
            <?php echo $this->Html->link("Error Results", "#"); ?>
        </div>
        <script type="text/javascript">
            jQuery(document).ready(function(){
                $("#mysubmitIndexForm").submit(function() {
                    // we want to store the values from the form input box, then send via ajax below
                    jQuery.post("Staffs/view", {
                        data1: $("#inputs").attr('value'),
                        data2: $("#inputs2").attr('value'),
                        data3: $("#fields").attr('value'),
                        data4: $("#fields2").attr('value')
                    });
                    // Close the dialog
                    $( "#form-board" ).dialog('close');
                    return false;
                });
                $("#inputs").change(function() {
                    // we want to store the values from the form input box, then send via ajax below
                    var input_id = $('#inputs').attr('value');
                    $.ajax({
                        type: "POST",
                        // The controller who listens to our request
                        url: "Inputs/getFieldsFromOneInput/" + input_id,
                        data: "input_id=" + input_id, //+ "& lname=" + lname,
                        success: function(data){ // function on success with returned data
                            $('form#mysubmit').hide(function(){});
                            data = $.parseJSON(data);
                            var sel = $("#fields");
                            sel.empty();
                            for (var i = 0; i < data.length; i++) {
                                sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>');
                            }
                        }
                    });
                    return false;
                });
                $("#inputs2").change(function() {
                    // we want to store the values from the form input box, then send via ajax below
                    var input_id = $('#inputs2').attr('value');
                    $.ajax({
                        type: "POST",
                        // The controller who listens to our request
                        url: "Inputs/getFieldsFromOneInput/" + input_id,
                        data: "input_id=" + input_id, //+ "& lname=" + lname,
                        success: function(data){ // function on success with returned data
                            $('form#mysubmit').hide(function(){});
                            data = $.parseJSON(data);
                            var sel = $("#fields2");
                            sel.empty();
                            for (var i = 0; i < data.length; i++) {
                                sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>');
                            }
                        }
                    });
                    return false;
                });
            });
        </script>

    Read the article

  • Managed c++ std::string not accessible in unmanaged c++

    - by Radhesham
    In an unmanaged C++ DLL I have a function which takes a constant std::string as an argument. Prototype:

        void read(const std::string &imageSpec_);

    I call this function from a managed C++ DLL by passing a std::string. When I debug the unmanaged C++ code, the parameter imageSpec_ shows the value correctly, but it does not allow me to copy that value into another variable:

        imageSpec_.copy(sFilename, 4052);

    shows the length of imageSpec_ as 0 (zero). If I try copying like

        std::string sTempFileName(imageSpec_);

    the new string is an empty string. But with

        std::string sTempFileName(imageSpec_.c_str());

    the string gets copied correctly, i.e. with a char pointer the string is copied correctly. Copying this way would need a major change in the unmanaged C++ code. I am building the unmanaged code in Visual Studio 6.0 and the managed C++ in Visual Studio 2008. Is there any specific setting or code change in managed C++ that will solve the issue?
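    A likely explanation (my assumption, not confirmed in the question): VC6 and VC2008 ship different C++ runtimes with different std::string layouts, so a std::string constructed in one module cannot safely be read by the other; c_str() only appears to work because the raw character pointer happens to be reachable. The usual cure is to keep std::string out of the DLL boundary entirely and pass a pointer/length pair instead. A minimal sketch, where read_c is a hypothetical wrapper name:

        // Hypothetical C-style wrapper: nothing STL-dependent crosses the boundary.
        #include <string>

        void read(const std::string &imageSpec_);  // the existing internal function

        extern "C" __declspec(dllexport)
        void read_c(const char *spec, size_t len)
        {
            // Rebuild the string inside this module, using this module's own STL.
            std::string imageSpec(spec, len);
            read(imageSpec);
        }

    The managed side then calls read_c with a char pointer and a length, which marshals safely regardless of which runtime built each DLL.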

    Read the article

  • MSBuild (.NET 4.0) access problems

    - by JMP
    I'm using CruiseControl.NET as my build server (Windows Server 2008). Yesterday I upgraded my ASP.NET MVC project from VS 2008/.NET 3.5 to VS 2010/.NET 4.0. The only change I made to my ccnet.config's MSBuild task was the location of MSBuild.exe. Ever since that change, the build has been broken with the error:

        MSB4019 - The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

    This file does, in fact, exist in the location specified (I solved a similar problem when setting up the build server for VS 2008/.NET 3.5 by copying the files from my dev environment to my build environment). So I RDP'ed into the build machine, opened a command prompt, and used MSBuild to attempt to build my project. MSBuild returns the error:

        MSB3021 - Unable to copy file "obj\debug....dll". Access to the path 'bin....dll' is denied.

    Since I'm running MSBuild from the command prompt, logged in with an account that has administrative privileges, I'm assuming that MSBuild is running with the same privileges I have. Next, I tried to copy the file that MSBuild was attempting to copy by hand. In this case, I get the UAC dialog that makes me click the [Continue] button to complete the copy. I'd like to avoid installing Visual Studio 2010 on my build machine; can anyone suggest other fixes I might try?
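    The UAC prompt on a manual copy suggests the bin files currently require elevated rights, which a non-elevated MSBuild (or the CC.NET service account) won't have. Before installing VS 2010, it may be worth resetting the ACLs on the working directory so the build account has plain modify access. A sketch only; the path and account name below are placeholders for your environment:

        rem Hypothetical: grant the CC.NET service account modify rights,
        rem recursively, on the build working directory.
        icacls "C:\ccnet\work\MyProject" /grant "MYDOMAIN\ccnet-svc:(OI)(CI)M" /T

    Deleting the bin and obj folders so MSBuild recreates them under the build account's ownership is a cheaper first test of the same theory.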

    Read the article

  • DevExpress AspxGridView clientside SelectionChanged problem when using paged ObjectDataSource

    - by Constantin Baciu
    The context is as follows: one DevExpress ASPxGridView with a server-side paging/filtering/sorting mechanism (using ObjectDataSource). I've been having problems with the filter mechanism (see this stack). Now, the problem I'm having is this: the client-side events get mangled between DataSource events. :O Let me explain what happens: if I change the page (or sort/filter, for that matter), then select one row from the grid, the client-side SelectionChanged event fires fine. If I change the page (or sort/filter) again, the event doesn't fire anymore. Instead, on the server side, I get a "The method or operation is not implemented" exception with the following stack trace:

        at DevExpress.Web.Data.WebDataProviderBase.GetListSouceRowValue(Int32 listSourceRowIndex, String fieldName)
        at DevExpress.Web.Data.WebDataProxy.GetListSourceRowValue(Int32 listSourceRowIndex, String fieldName)
        at DevExpress.Web.Data.WebDataProxy.GetKeyValueCore(Int32 index, GetKeyValueCallback getKeyValue)
        at DevExpress.Web.Data.WebDataSelectionBase.GetSelectedValues(String[] fieldNames, Int32 visibleStartIndex, Int32 visibleRowCountOnPage)
        at DevExpress.Web.Data.WebDataProxy.GetSelectedValues(String[] fieldNames)
        at DevExpress.Web.ASPxGridView.ASPxGridView.FBSelectFieldValues(String[] args)
        at DevExpress.Web.ASPxGridView.ASPxGridView.GetCallbackResultCore()
        at DevExpress.Web.ASPxGridView.ASPxGridView.GetCallbackResult()
        at DevExpress.Web.ASPxClasses.ASPxWebControl.System.Web.UI.ICallbackEventHandler.GetCallbackResult()

    Am I doing something wrong? Any help will be much appreciated.

    Read the article

  • [WPF] Custom TabItem in TabControl

    - by Simon
    I've created a CustomTabItem which inherits from TabItem, and I'd like to use it while binding an ObservableCollection in a TabControl:

        <TabControl ItemsSource="{Binding MyObservableCollection}"/>

    It should look like this in XAML, but I don't know how to change the default type of the item created by the TabControl during binding. I've tried to create a converter, but it would have to do something like this inside the converting method:

        List<CustomTabItem> resultList = new List<CustomTabItem>();

    and then iterate through my input ObservableCollection, create a CustomTabItem based on each item from the collection, and add it to resultList... I'd like to avoid that, because creating a CustomTabItem means building a complex View, which takes a while, so I don't want to rebuild it every time something changes in the bound collection. My class extends the typical TabItem and I'd like to use this class in the TabControl instead of TabItem:

        <TabControl.ItemContainerStyle>
            <Style TargetType="{x:Type local:CustomTabItem}">
                <Setter Property="MyProperty" Value="{Binding xxx}"/>
            </Style>
        </TabControl.ItemContainerStyle>

    The code above generates an error that the Style cannot be applied to TabItem. My main purpose is to use my own CustomTabItem in XAML and bind its properties, just like above. I've also tried to use <TabControl.ItemTemplate/> and <TabControl.ContentTemplate/>, but they only template the content of a TabItem, so I'd still be missing the properties which I added in my custom class.
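    A common WPF way around this (a sketch, not tested against the poster's project: CustomTabItem and MyProperty are their names, CustomTabControl is a name I made up) is to subclass TabControl and override GetContainerForItemOverride, so the control generates CustomTabItem containers itself:

        // Minimal sketch: make the TabControl generate CustomTabItem containers.
        using System.Windows;
        using System.Windows.Controls;

        public class CustomTabControl : TabControl
        {
            // Called once for every bound item that needs a container.
            protected override DependencyObject GetContainerForItemOverride()
            {
                return new CustomTabItem();
            }

            // Items that already are CustomTabItem need no wrapping.
            protected override bool IsItemItsOwnContainerOverride(object item)
            {
                return item is CustomTabItem;
            }
        }

    In XAML you would then write <local:CustomTabControl ItemsSource="{Binding MyObservableCollection}"/>; because the generated container type is now CustomTabItem, the ItemContainerStyle with TargetType="{x:Type local:CustomTabItem}" should apply without the error.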

    Read the article

  • Making TeamCity integrate the Subversion build number into the assembly version.

    - by Lasse V. Karlsen
    I want to adjust the output of my TeamCity build configuration for my class library so that the produced dll files have the following version number: 3.5.0.x, where x is the Subversion revision number that TeamCity has picked up. I've found that I can use the BUILD_NUMBER environment variable to get x, but unfortunately I don't understand what else I need to do. The "tutorials" I find all say "you just add this to the script", but they don't say which script, and "this" usually refers to the AssemblyInfo task from the MSBuild Community Tasks. Do I need to build a custom MSBuild script somehow to use this? Is the "script" the same as either the solution file or the C# project file? I don't know much about the MSBuild process at all, except that I can pass a solution file directly to MSBuild; but what I need to add to "the script" is XML, and the solution file decidedly does not look like XML. So, can anyone point me to a step-by-step guide on how to make this work?

    This is what I ended up with:

    1. Install the MSBuild Community Tasks.
    2. Edit the .csproj file of my core class library and change the bottom so that it reads:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
        <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />
        <Target Name="BeforeBuild">
            <AssemblyInfo Condition=" '$(BUILD_NUMBER)' != '' "
                          CodeLanguage="CS"
                          OutputFile="$(MSBuildProjectDirectory)\..\GlobalInfo.cs"
                          AssemblyVersion="3.5.0.0"
                          AssemblyFileVersion="$(BUILD_NUMBER)" />
        </Target>
        <Target Name="AfterBuild">

    3. Change all my AssemblyInfo.cs files so that they don't specify either AssemblyVersion or AssemblyFileVersion (in retrospect, I'll look into putting AssemblyVersion back).
    4. Add a link to the now-global GlobalInfo.cs, which is located just outside all the projects.
    5. Make sure this file is built once, so that I have a default file in source control.

    This will now update GlobalInfo.cs only if the environment variable BUILD_NUMBER is set, which it is when I build through TeamCity. I opted for keeping AssemblyVersion constant, so that references still work, and only updating AssemblyFileVersion, so that I can see which build a dll came from.
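    For reference, when BUILD_NUMBER is set, the AssemblyInfo task regenerates GlobalInfo.cs roughly as follows. This is a sketch, assuming the TeamCity build number format is configured as a full 3.5.0.x string; if it's a bare counter, AssemblyFileVersion gets that value instead:

        // Hypothetical GlobalInfo.cs produced for build number 3.5.0.128.
        using System.Reflection;

        [assembly: AssemblyVersion("3.5.0.0")]
        [assembly: AssemblyFileVersion("3.5.0.128")]

    Keeping AssemblyVersion fixed while letting AssemblyFileVersion float is the usual compromise: assembly references keep resolving, while the file properties dialog still reveals the exact build.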

    Read the article

  • How to make facebox popup remain open and the content inside the facebox changes after the submit

    - by Leonardo Dario Perna
    Hi, I'm a jQuery total n00b. In my Rails app this is what happens: I'm on the homepage and I click this link:

        <a href='/betas/new' rel='facebox'>Sign up</a>

    A beautiful facebox popup shows up and renders this view and the form it contains:

        # /app/views/invites/new
        <% form_tag({ :controller => 'registration_code', :action => 'create' }, :id => 'codeForm') do %>
          <%= text_field_tag :code %> <br />
          <%= submit_tag 'Confirm' %>
        <% end %>

    I click on submit, and if the code is valid the user is taken to another page in another controller:

        def create
          # some stuff
          redirect_to :controller => 'users', :action => 'type'
        end

    Now I would like to render that page INSIDE the SAME popup that contains the form, after the submit button is pressed, but I have NO IDEA how to do it. I've tried FaceboxRender, but this happens. Original version:

        # /controllers/users_controller
        def type
        end

    If I change it like this, nothing happens:

        # /controllers/users_controller
        def type
          respond_to do |format|
            format.html
            format.js { render_to_facebox }
          end
        end

    If I change it like this (I know it's wrong, but I'm a n00b so it's OK :-):

        # /controllers/users_controller
        def type
          respond_to do |format|
            format.html { render_to_facebox }
            format.js
          end
        end

    I get this rendered:

        try { jQuery.facebox("my raw HTML from users/type.html.erb substituted here")'); throw e }

    Any solutions? THANK YOU SO MUCH!!
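    A low-magic alternative to FaceboxRender, sketched under two assumptions (the #codeForm id comes from the view above; the create/type actions are changed to render their HTML without a layout for Ajax requests), is to submit the form yourself and hand the response back to facebox:

        // Hypothetical: post the invite form via Ajax, then swap the popup content.
        jQuery(document).ready(function () {
            $('#codeForm').submit(function () {
                $.post($(this).attr('action'), $(this).serialize(), function (html) {
                    jQuery.facebox(html);  // re-opens the facebox with the new markup
                });
                return false;              // stop the normal full-page submit
            });
        });

    On the Rails side, rendering with :layout => false when request.xhr? is true keeps the response down to just the fragment the popup needs.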

    Read the article

  • Forbidden Patterns Check-In Policy in TFS 2010

    - by Jaxidian
    I've been trying to use the Forbidden Patterns part of the TFS 2010 Power Tools and I'm just not understanding something: I simply cannot get anything to change as I try to use this! I'm using the version that was released recently (I believe April 23, 2010), so it's not an old version. First off, yes, I know it's regex-based, so let's clear that doubt...

    I have tried to block the following scenarios:

    1. I have modified all of my T4 EF templates to generate files named EntityName.gen.cs, and I then attempted to prevent TFS from wanting to check those files in. I used the regular expression \.gen\.cs\z and it didn't change a single thing! I even tried it without the \z, and nada!

    2. I don't want app.config and web.config files to be checked in by default, because we have them stored in app.config.base and web.config.base files that our build scripts use to generate per-environment app.config and web.config files. As such, I tried the following regexes, and again nothing worked: web\.config\z, app\.config\z, web\.release\.config\z and web\.debug\.config\z.

    What is it that I'm screwing up here?
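    When a forbidden pattern silently never fires, it helps to test it outside TFS against the exact strings you expect it to see. My assumption (worth verifying against the tool's docs) is that the policy may match against the full server path of each pending change rather than the bare file name, in which case the anchors still work but the sample input matters. A quick .NET console check:

        // Sanity-check the forbidden patterns against sample item paths.
        using System;
        using System.Text.RegularExpressions;

        class PatternCheck
        {
            static void Main()
            {
                // Hypothetical paths; substitute real pending-change paths.
                string genFile = "$/MyProject/Entities/Customer.gen.cs";
                string webConfig = "$/MyProject/Web/web.config";

                Console.WriteLine(Regex.IsMatch(genFile, @"\.gen\.cs\z"));   // expect True
                Console.WriteLine(Regex.IsMatch(webConfig, @"web\.config\z",
                    RegexOptions.IgnoreCase));                               // expect True
            }
        }

    If both print True, the expressions themselves are fine and the problem lies in how (or whether) the policy is being applied, e.g. its scope or the team project settings.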

    Read the article
