Search Results

Search found 10206 results on 409 pages for 'tooling and testing'.


  • Silverlight 4 – Coded UI Framework Video Tutorial

    - by mbcrump
    With the release of Visual Studio 2010 Feature Pack 2, Microsoft included the Coded UI Test framework. With this release it is possible to create automated tests with just a few mouse clicks. This is a very powerful feature that all Silverlight developers need to learn. Instead of my normal blog post, I have created a video tutorial that walks you through it starting from “File” –> New Project. I hope you enjoy, and please leave feedback. Video Tutorial (short 9-minute video). Slides from the demo (only 3): Silverlight 4 – Coded UI Testing. Below is the code for the MainPage.xaml that was used in the demo. For the sake of time, I did not go into the AutomationProperties.Name that I used for the TextBox or Button; I added that for each element.

        <Grid x:Name="LayoutRoot" Background="White" Height="100" Width="350">
            <Grid.ColumnDefinitions>
                <ColumnDefinition/>
                <ColumnDefinition/>
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
                <RowDefinition/>
                <RowDefinition/>
            </Grid.RowDefinitions>
            <TextBlock Padding="15" Grid.Column="0" TextAlignment="Right">Name</TextBlock>
            <TextBox AutomationProperties.Name="txtAP" Grid.Column="1" Height="25" TextAlignment="Right" Name="txtName" />
            <Button AutomationProperties.Name="btnAP" Grid.Row="1" Grid.Column="1" Content="Click for Name" x:Name="btnMessage" Click="btnMessage_Click" />
        </Grid>
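    The post does not include the code-behind, but a minimal sketch of the btnMessage_Click handler the XAML wires up might look like this (hypothetical, for context only):

        // Assumed code-behind for MainPage.xaml; the MessageBox gives the
        // Coded UI test something visible to assert against.
        private void btnMessage_Click(object sender, RoutedEventArgs e)
        {
            MessageBox.Show("Hello, " + txtName.Text);
        }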

    Read the article

  • How can I pass referrer header from my https domain to http domains?

    - by nutcracker
    My website is 100% HTTPS. I have links to other HTTP domains, and the referrer header is not set when linking from an HTTPS page to an HTTP page. From http://en.wikipedia.org/wiki/HTTP_referrer: "If a website is accessed from a HTTP Secure (HTTPS) connection and a link points to anywhere except another secure location, then the referer field is not sent." I would prefer that other domains can see the referrer, so that they know the traffic comes from my domain. Is there a way to force this header, or is there another solution? Update: I've done some basic testing using a redirect:

        http page  -- link to http  --> 301 redirect --> http page = referrer intact
        https page -- link to https --> 301 redirect --> http page = referrer blank
        https page -- link to http  --> 301 redirect --> http page = referrer blank
        https page -- link to http  --> 302 redirect --> http page = referrer blank

    The referrer is lost when linking from an HTTPS page to an HTTP redirect page on my own domain, so there is no referrer after the redirect.
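    For reference, newer browsers expose a referrer policy that addresses exactly this case; a hedged example (browser support varies, and this is not from the original question):

        <!-- On each HTTPS page: ask the browser to send at least the origin
             (https://example.com/) as the referrer, even to HTTP links. -->
        <meta name="referrer" content="origin">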

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, SSRS lets you delete data sources without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while.

    dbo.Catalog

    The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder, 2 for a report, 4 for a linked report, 5 for a data source, and 6 for a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead.

    dbo.DataSource

    The Primary key is DSID. Yes, it’s a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog. Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though. Flags is an int, and I’ll come back to this. When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure.

        UPDATE [DataSource]
        SET [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
            [Link] = NULL
        FROM [Catalog] AS C
            INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field. What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY. 
Sorry, rant over. I should log a Connect item – I’ll put that on my todo list. So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case a different data source was broken at some point in the past and surprises you.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
            AND ds.[Link] IS NULL
            AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* insert your own GUID */
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
            AND ds.[Link] IS NULL
            AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports need to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley
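    As a hedged aside (the name below is a placeholder, not from the post): looking up the replacement data source's ItemID is a one-liner against dbo.Catalog, using the Type value 5 noted above for data sources.

        -- Hypothetical lookup; substitute your own Path/Name
        SELECT ItemID
        FROM dbo.[Catalog]
        WHERE [Type] = 5 AND [Name] = 'MyNewSharedDataSource';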

    Read the article

  • What are some good interview questions for a position that consists of reviewing code for security vulnerabilities?

    - by John Smith
    The position is an entry-level position that consists of reading C++ code and identifying lines of code that are vulnerable to buffer overflows, out-of-bounds reads, uncontrolled format strings, and a bunch of other CWEs. We don't expect the average candidate to be knowledgeable in the area of software security, nor do we expect him or her to be an expert computer programmer; we just expect them to be able to read the code and correctly identify vulnerabilities. I guess I could ask them the typical interview questions: reverse a string, print a list of prime numbers, etc., but I'm not sure that their ability to write code under pressure (or lack thereof) tells me anything about their ability to read code. Should I instead focus on testing their knowledge of C++? Ask them if they understand what a pointer is and how bitwise operators work? My only concern about asking that kind of question is that I might unfairly weed out people who don't happen to have the knowledge but have the ability to acquire it. After all, it's not like they will be writing a single line of code, and it's not like we are looking only for people who already know C++, since we are willing to train the right candidate. (It is true that I could ask those questions only of those candidates who claim to know C++, but I'd like to give the same "test" to everyone.) Should I just focus on trying to get an idea of their level of intelligence? In other words, should I get them to talk and pay attention to the way they articulate their thoughts, and so on?
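    For illustration only (not part of the original question), a screening exercise could show a short snippet and ask the candidate to point at the vulnerable lines:

        // Candidate prompt: which lines are vulnerable, and why?
        #include <cstdio>
        #include <cstring>

        void greet(const char *name) {
            char buf[16];
            strcpy(buf, name);   // CWE-120: no bounds check -> buffer overflow
            printf(buf);         // CWE-134: user-controlled format string
        }

        int main(int argc, char **argv) {
            if (argc > 1) greet(argv[1]);
            return 0;
        }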

    Read the article

  • Antenna Aligner Part 6: Little Robots

    - by Chris George
    A week ago I took temporary ownership of an HTC Desire S so that I could start testing my app under Android. Support for Android was not in my original plan, but when Nomad added support for it recently, I started thinking: why not! So with some trepidation, I clicked the Build for Android button on the Nomad toolbar... nothing. Hmm... that's not right, I was expecting something to build. After a bit of faffing around I finally realised that I hadn't read the text on the Android setup page properly (yes that's right, RTFM!), and I needed a two-part application identifier, separated by a dot. I did this (not sure what the two-part thing is all about; that one's on my list to investigate!). After making the change, the Android build worked and created the apk file. I uploaded this to the device and nervously ran it... it worked!!! Well, more or less! There was no splash screen, but this was no surprise, because I only have the iOS icons and splash screen in my project at the moment. What was more concerning was that the compass update didn't seem to be working. I suspect this is a result of using an iOS-specific option in the PhoneGap compass watcher. Another thing to investigate. I've also just noticed that the CSS gradient background hasn't worked either... These issues aside, it was actually more successful than I was expecting, so happy days! Right, let's get Googling... Next time: Preparing for submission to the App Store! :-)
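    For reference, the iOS-specific option he likely means is the `filter` parameter of PhoneGap's compass watcher; a hedged sketch of a cross-platform call (callback names invented here):

        // PhoneGap/Cordova-era compass watcher. 'filter' is honoured only on
        // iOS, so Android builds should rely on 'frequency' instead.
        var watchID = navigator.compass.watchHeading(
            function (heading) {
                // CompassHeading object in later PhoneGap versions
                updateNeedle(heading.magneticHeading);  // app-specific, assumed
            },
            function (error) {
                console.log('Compass error: ' + error.code);
            },
            { frequency: 250 }  // poll every 250 ms, rather than iOS-only { filter: 1 }
        );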

    Read the article

  • All my Ubuntu VMs have apt-get update problems

    - by kashani
    I'm running VirtualBox 4.1 on an x86_64 Windows 7 host. I've got a collection of 12.04 and 10.04 LTS VMs I use to create debs for work. In the last week I started noticing problems on the 12.04 VMs; I tried the usual apt-get clean bit, which didn't help. I rolled a new 11.10 VM for testing a WordPress upgrade. This VM has never been able to run apt-get update without errors. The interesting errors look like this:

        Get: 8 http://security.ubuntu.com oneiric-security/main Translation-en_US [344 B]
        14% [7 Sources 48686/877 kB 6%] [Waiting for headers]bzip2: (stdin) is not a bzip2 file.
        Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en
        Hit http://security.ubuntu.com oneiric-security/restricted Translation-en
        Hit http://security.ubuntu.com oneiric-security/universe Translation-en
        22% [7 Sources 127526/877 kB 15%] [Waiting for headers]/usr/bin/xz: (stdin): File format not recognized

    and ends with

        /usr/bin/xz: (stdin): File format not recognized
        Ign http://us.archive.ubuntu.com oneiric/main Translation-en_US
        Ign http://us.archive.ubuntu.com oneiric-updates/main Translation-en_US
        Fetched 18.5 MB in 47s (392 kB/s)
        W: GPG error: http://us.archive.ubuntu.com oneiric InRelease: File /var/lib/apt/lists/partial/us.archive.ubuntu.com_ubuntu_dists_oneiric_InRelease doesn't start with a clearsigned message
        W: GPG error: http://security.ubuntu.com oneiric-security InRelease: File /var/lib/apt/lists/partial/security.ubuntu.com_ubuntu_dists_oneiric-security_InRelease doesn't start with a clearsigned message

    xz-utils, lzma, etc. are all installed. I've reinstalled the VM from scratch three times and end up at the same point.
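    For context, a commonly suggested first step for this class of error (not from the original thread; corrupted compressed index files are often caused by a caching proxy between the VM and the mirrors) is to flush the cached package lists and re-fetch:

        # hedged suggestion: clear cached index files and download fresh ones
        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get clean
        sudo apt-get update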

    Read the article

  • Does Agile force developers to work more?

    - by Shooshpanchick
    Looking at common Agile practices, it seems to me that they (intentionally or unintentionally?) force developers to spend more time actually working, as opposed to reading blogs/articles, chatting, coffee breaks and just plain procrastinating. In particular:

    1) Pair programming - the biggest work-forcer, just because it is inconvenient to do all that procrastination when there are two of you sitting together.

    2) Short stories - when you have a HUGE chunk of work that must be done in e.g. a month, it is pretty common to slack off in the first three weeks and switch to OMG DEADLINE mode for the last one. With little chunks (that must be done in a day or less) it is the exact opposite - you feel that time is tight, there is no space for maneuvering, and you will be held accountable for the task pretty soon, so you start working immediately.

    3) Team communication and cohesion - when you underperform in a slow, distanced and silent environment it may feel OK, but when everyone boasts at the end-of-day Scrum meeting about what they have accomplished and you have nothing to say, you may actually feel ashamed.

    4) Testing and feedback - again, it prevents you from keeping tasks "99% ready" (when it's actually around 20%) until the deadline suddenly happens.

    Do you feel that under Agile you work more than under "conventional" methodologies? Is this pressure compensated by the more comfortable environment and by the feeling of actually getting the right things done quickly?

    Read the article

  • How to write your unit tests to switch between NUnit and MSTest

    - by Justin Jones
    On my current project I found it useful to use both NUnit and MsTest for unit testing. When using ReSharper for running unit tests, it simply works better with NUnit, and on large-scale projects NUnit tends to run faster. We would have just used NUnit for everything, but MsTest gave us a few bonuses out of the box that were hard to pass up: namely code coverage (without having to shell out thousands of extra dollars for the privilege) and tests integrated into the build process. I’m one of those guys who wants the build to fail if the unit tests don’t pass; if they don’t pass, there’s no point in sending that build on to QA. Making the build work with MsTest is easiest if you just create a unit test project in your solution. This adds the right references and project type GUIDs in the project file so that everything automagically just works. Then (using NuGet, of course) you add in NUnit. At the top of your test file, remove the using statements that refer to MsTest and replace them with the following:

        #if NUNIT
        using NUnit.Framework;
        #else
        using TestFixture = Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute;
        using Test = Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute;
        using TestFixtureSetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
        using SetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        #endif

    Basically I’m taking the NUnit naming conventions and redirecting them to MsTest. You can go the other way, of course. I only chose this direction because I had already written the tests as NUnit tests. NUnit and MsTest provide largely the same functionality with slightly differing class names. There are few actual differences between them, and I have not run into them on this project so far. To run the tests as NUnit tests, simply open up the project properties tab and add the compiler directive NUNIT. Remove it, and you’re back in MsTest land.
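    As a hedged illustration (test names invented here, not from the post): with that alias block at the top of the file, a test class written against the NUnit names compiles under either framework.

        [TestFixture]                     // maps to [TestClass] under MsTest
        public class CalculatorTests
        {
            [SetUp]                       // maps to [TestInitialize] under MsTest
            public void Init()
            {
                // per-test setup goes here
            }

            [Test]                        // maps to [TestMethod] under MsTest
            public void TwoPlusTwo_IsFour()
            {
                Assert.AreEqual(4, 2 + 2);  // Assert exists in both frameworks
            }
        }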

    Read the article

  • QA - Developer communication

    - by exiter2000
    I am a developer and have worked at this company for 4-5 years now. We have been practicing scrum for about 2 years. I think I have worked well with QA. I believe QAs, developers and technical writers are all one team. We are also actively hiring new team members, and as a long-standing member of the team I am often called on to assist new members (including developers and testers) with my business knowledge. We work on a two-week scrum cycle. I usually deliver my user story completely by the first day of the second week, and do a QA build with partial functionality of my user story before that, so that QA has a good idea about my implementation and flow. Recently, though, I have run into problems with some QAs. In the first week, the QAs do not talk... In the stand-up meeting, they say they are developing test cases, regardless of whether I have delivered the user story or not. In the second week, I have not a single defect until Thursday afternoon, and then suddenly I have a major defect, with several minor UI defects, against what I delivered one week ago. Or I get one or two minor defects early in the second week, but major defects on Thursday afternoon or Friday morning. This eventually makes the story roll over to the next sprint: a major defect takes time to fix, and more importantly it triggers a regression test for the story... Even if I work Thursday evening and fix it, the testing will not finish. And this happens multiple times with certain QAs. As a member of the same team, I asked the QAs if they could test for major defects with higher priority... Rejected, because I "do not understand the QA process"... So, in the stand-up meeting on the second Wednesday, I asked roughly how many major test cases had been covered so far. The response was that I should not ask this of QA in the stand-up meeting... What do I do?

    Read the article

  • Priority Manager – Part 1: Laying out the plan

    - by Patrick Liekhus
    Now that we have shown the EDMX with XPO/XAF and how to use SpecFlow and BDD to run EasyTest scripts, let’s put it all together and show the evolution of a project using all the tools combined. I have a simple project that I use to track my priorities throughout the day. It uses some of Stephen Covey’s principles from The 7 Habits of Highly Effective People. The idea is to write down all your priorities the night before and rank them. This way when you get started tomorrow you will have your list of priorities. It's not that new things won’t appear tomorrow and reprioritize your list, but at least now you can track them. My idea is to create a project that will allow you to manage your list from your desktop, a web browser or your mobile device. This way your list is never too far away. I will lay out the data model and the additional concepts as time progresses. My goal is to show the power of all of these tools combined, and I thought the best way would be to build a project in sequence. I have had this idea for quite some time, so let’s get it completed with the outline below. Here is the outline of the series of posts in the near future:

    Part 2 – Modeling the Business Objects
    Part 3 – Changing XAF Default Properties
    Part 4 – Advanced Settings within Liekhus EDMX/XAF Tool
    Part 5 – Custom Business Rules
    Part 6 – Unit Testing Our Implementation
    Part 7 – Behavior Driven Development (BDD) and SpecFlow Tests
    Part 8 – Using the Windows Application
    Part 9 – Using the Web Application
    Part 10 – Exposing OData from our Project
    Part 11 – Consuming OData with Excel PowerPivot
    Part 12 – Consuming OData with iOS
    Part 13 – Consuming OData with Android
    Part 14 – What’s Next

    I hope this helps outline what to expect. I anticipate that I will have additional topics mixed in there, but I plan on getting this outline completed within the next several weeks. Thanks

    Read the article

  • Intermittent ethernet connectivity

    - by Amey Jah
    I am facing a weird problem. I connect to my DSL modem via Ethernet. After booting, sometimes Ubuntu 10.10 does not detect the eth0 card properly or does not connect to the modem. I have made the following observations:

    - My modem is fully on. All indicators are on except Ethernet, which is blinking continuously.
    - My network indicator applet tells me that I am connected to the modem, but I am not able to browse, as no connection with the modem has actually been established.
    - I need to power the modem off and on (restart it) several times, OR disconnect/connect (via the network applet) several times, before the system can establish a connection with the modem. Once it establishes the connection, the Ethernet LED (on the modem) stops blinking and glows continuously.
    - However, I do not see this problem on every reboot or start; three times out of five it connects to the modem in a single attempt.

    Friends, I am not able to understand the root cause of this problem, and I do not know what kind of logs I should attach. If you want any logs, please give me the command to run and I will paste the result for you. Note: I ran system testing when the above problem happened; follow this link to download the result.
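    Since the poster asks which commands to run, a reasonable starting set of diagnostics (generic suggestions, not from the original thread) would be:

        # kernel messages about the NIC and link negotiation
        dmesg | grep -i -e eth0 -e link
        # driver and hardware details for the Ethernet controller
        lspci -nnk | grep -iA3 ethernet
        # NetworkManager/DHCP activity around the failed attempts
        grep -i -e network -e dhcp /var/log/syslog | tail -n 100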

    Read the article

  • Custom daemon script: works, but does not run at boot / startup

    - by pearjoint
    This is Ubuntu 10.10 Maverick. I have the following shell script in init.d that I want to run as a "daemon" (a background service with start/stop/restart, really) at system startup. There is a symlink in rc3.d; I tried 4 and 5 too. (Ideally this would initialize before graphical login happens and before a user logs in.) IMPORTANT: the script works 100% as expected and required when testing it with service MetaLeapDaemon start and service MetaLeapDaemon stop. (This shell script calls a Python program which makes sure the appropriate .pid files are both created at startup and deleted at exit.) So generally it works fine; my only issue now is why it will not be run at any of the run-levels I tried. I know for sure it isn't run, because the log file it normally creates does not get created. As you can see (by the lack of any uid:gid args in the start-stop-daemon commands) this would currently run only under root; is this forbidden in a default setup? Here's the script, pretty much your run-of-the-mill daemon script really:

        #! /bin/sh
        DAEMON=/opt/metaleap/_core/daemon/MetaLeapDaemon.py
        NAME=MetaLeapDaemon
        DESC="MetaLeapDaemon"
        test -f $DAEMON || exit 0
        set -e
        case "$1" in
          start)
            start-stop-daemon --start --pidfile /var/run/$NAME.pid --exec $DAEMON
            ;;
          stop)
            start-stop-daemon --stop --pidfile /var/run/$NAME.pid
            ;;
          restart)
            start-stop-daemon --stop --pidfile /var/run/$NAME.pid
            sleep 1
            start-stop-daemon --start --pidfile /var/run/$NAME.pid --exec $DAEMON
            ;;
          *)
            N=/etc/init.d/$NAME
            echo "Usage: $N {start|stop|restart}" >&2
            exit 1
            ;;
        esac
        exit 0
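    One plausible culprit (an assumption, not confirmed in the post): Ubuntu 10.10 orders init scripts with dependency-based boot, and a script without an LSB header block may never be scheduled even when hand-made rcN.d symlinks exist. A sketch of the header, placed right after the shebang:

        ### BEGIN INIT INFO
        # Provides:          MetaLeapDaemon
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: MetaLeapDaemon background service
        ### END INIT INFO

    Then remove the hand-made symlinks and let update-rc.d create them:

        sudo update-rc.d -f MetaLeapDaemon remove
        sudo update-rc.d MetaLeapDaemon defaults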

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (the default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to return a 404 in the header of the removed URLs so Google knows to de-index them. I checked what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist - I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123; it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot get a 404 generated, because the page always loads for any parameter - and it is precisely the parameterized URLs that need to be de-indexed.
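    A hedged sketch of the 404 approach mentioned above (assuming a PHP front controller and that `l` is the only language parameter; all names are illustrative):

        <?php
        // Hypothetical guard, not from the original post: return a real 404
        // for removed translations so Google drops them from the index.
        $lang = isset($_GET['l']) ? $_GET['l'] : null;
        $kept = array(null, 'fr_FR');          // English default + French remain
        if (!in_array($lang, $kept, true)) {
            header('HTTP/1.1 404 Not Found');
            exit('Not Found');
        }

    Note that robots.txt only blocks crawling; an actual 404 (or 410) response is what gets an already-indexed URL dropped.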

    Read the article

  • Texture2D.GetData fails to return pixel colour data

    - by Chris Charabaruk
    Because I'm using sprite sheets instead of an individual texture per sprite, I need to pass in a Rectangle when calling Texture2D.GetData() in my collision detection for per-pixel tests. Unfortunately, without fail I get an ArgumentException percolated down from an internal method inside the Texture (not Texture2D) class. My code for getting the texture data looks like this:

        public override Color[] GetPixelData()
        {
            Color[] data = new Color[(int)size.Product()];
            Rectangle rect = new Rectangle(hframe * (int)size.X, vframe * (int)size.Y,
                                           (int)size.X, (int)size.Y);
        #if DEBUG
            if (sprite.Bounds.Contains(rect) && sprite.Format == SurfaceFormat.Color)
        #endif
                sprite.GetData(0, rect, data, 0, 1);
            return data;
        }

    Even with the check to ensure I'm grabbing a valid rectangle and that the texture format matches what I'm trying to get, I still get that exception, claiming "The size of the data passed in is too large or too small for this resource." Unfortunately, the debugger won't let me check the locals within the Texture.ValidateTotalSize() method where the exception originates. Has anyone else had this problem and knows how to fix it? I'm relying on AABB testing only for now, but that doesn't really work for some of my game's entities due to odd shapes, rotation and scaling.
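    For what it's worth (an observation on the snippet above, not part of the original question): the final argument of GetData is the element count to copy, and passing 1 while data holds size.Product() elements would raise exactly this exception. The call would normally be:

        // copy every pixel the rectangle covers, not just one element
        sprite.GetData(0, rect, data, 0, data.Length);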

    Read the article

  • Experiences with Ubuntu One?

    - by rsuarez
    I'm currently testing several backup/sync services: Dropbox, SpiderOak and Ubuntu One. Of them, Dropbox is the winner hands down; SpiderOak is nice too, but a bit more intrusive and unpredictable (sometimes it is slow in syncing some files, or doesn't sync them at all). Ubuntu One has promise, but I've used it much less than the other two. I'm thinking about buying a "20 pack" and using Ubuntu One as my only synchronization software. It's the cheapest of them all ($3/month vs $10 for Dropbox and SpiderOak), and 20 GB of space is enough for me. My intention is to sync most of my $HOME folder. All the computers I'll connect will have Ubuntu installed, so not being multiplatform doesn't really matter to me. If its performance is as good as Dropbox's, I'm sold, but I'd like to gauge some experiences here first. Is anyone using it seriously? I.e., to sync a lot of files that change often (like the aforementioned $HOME folder, program sources, or something alike). What have been your experiences? Thanks in advance.

    Read the article

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation into Unity. I set up a simple cylinder with 2 bones and an IK handle, and made a simple animation where the cylinder bends and goes back to a straight position over 24 frames. Following that, I selected everything and baked it all: bones, IK (selecting all the animation in the graph editor) and even the cylinder. I saved the scene and then selected all and exported as FBX with "animation" and "bake" checked. In Unity, I imported it and at the preview I am able to see the animation. When I load the model into the scene and play (after assigning the controller), I am able to see the animation too. But now when I try to script it and control the animation, nothing happens. Even as a test, I tried the following under the Update method:

        if (animation.isPlaying)
            Debug.Log("Animation Works");
        else
            Debug.Log("Animation not working");

    Neither message is ever logged. My animation is called "bend", so just as a try I did the following, and nothing happens:

        animation.Play("bend");

    Can you please advise, based on my steps, whether I am missing something? Do I need to add the controller, or is that an unnecessary step? Did I screw up on the Maya part or the Unity part? Thanks for the help.
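    A hedged guess at the usual culprits (not from the original question): with the legacy Animation component, the clip must be imported as a Legacy clip and attached to the component, and the clip name must match the Maya take exactly. A quick diagnostic, assuming the clip is named "bend":

        // Hypothetical check script; attach to the imported model.
        // Assumes the model's import Rig is set to Legacy (Animation, not Animator).
        using UnityEngine;

        public class BendTester : MonoBehaviour
        {
            void Start()
            {
                if (animation == null)
                    Debug.Log("No Animation component on this object");
                else if (animation.GetClip("bend") == null)
                    Debug.Log("Clip 'bend' not attached - check import settings and clip name");
                else
                    animation.Play("bend");
            }
        }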

    Read the article

  • What benefits can I get upgrading my ASP.NET (Webform) + DAL(EF) + Repository + BLL structure to MVC?

    - by Etienne
    I'm in the process of defining an approach that may best fit our needs for developing a big web application. For now, I'm thinking of going with an ASP.NET architecture with a DAL using Entity Framework, a Repository concept so that the BLL does not access the DAL directly, and a BLL that calls the repository and does every manipulation necessary to prepare data to push into a presentation layer (.aspx files). I don't plan to use ASP.NET controls; I prefer to keep things simple and lightweight using plain HTML, jQuery UI controls, and doing most of the server calls with jQuery Ajax. Sometimes, when needed, I plan to use handlers (.ashx) to call BLL methods that will return JSON or HTML to the client for dynamic stuff. My solution also has a test project that mocks the Repository with in-memory data, so that testing BLL methods does not depend on the database... It may be useful to add that we will build a big application over this architecture, with hundreds of tables and stored procedures and a lot of reading and writing to the database. My question is, having this architecture in mind: are there any evident advantages that I can obtain by using an MVC3 project instead of the described architecture based on WebForms? Do you see any problem in this architecture that may cause us trouble during the next steps of development? I know the MVC pattern, having used it in other projects with Django... but the Microsoft MVC implementation looks so much more complex and verbose than Django's MVC, and that's why I'm hesitating (or waiting for a little push?) right now before jumping into it... We are in a real project with deadlines and don't want to slow the development process without any real benefits.
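    As a minimal sketch of the repository seam described above (all names invented here, not from the original question), the BLL would program against an interface like this, with an EF-backed implementation in production and an in-memory one in the test project:

        // Hypothetical repository abstraction for the DAL/BLL boundary.
        using System.Linq;

        public interface IRepository<T> where T : class
        {
            IQueryable<T> Query();   // read side; the BLL composes filters on this
            void Add(T entity);
            void Remove(T entity);
            void SaveChanges();
        }

        // The test project can back this with a List<T> instead of EF,
        // so BLL methods run against in-memory data with no database.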

    Read the article

  • NetBeans 7.1 RC1 now available - JavaFX 2, Enhanced Java Editor, Improved JavaEE, WebLogic 12 support

    - by arungupta
    NetBeans 7.1 RC1 is now available! What's new in NetBeans 7.1?

    Support for JavaFX 2
    - Full compile/debug/profile development cycle
    - Many editor enhancements
    - Deployment tools
    - Customized UI controls using CSS3

    Enhanced Java editor
    - Upgrade projects completely to JDK 7
    - Import statement organizer
    - Rectangular block selection
    - Getters/Setters included in refactoring

    Java EE
    - 50+ CDI improvements
    - RichFaces4 and ICEFaces2 component libraries
    - EJB Timer creation wizard
    - Code completion for table, column, and PU names

    CSS3, GUI Builder, Git, Maven3, and several other features are listed at New and Noteworthy. Download and give us your feedback using NetBeans Community Acceptance Testing by Dec 7th. Check out the latest tutorials. To me the best part was creating a Java EE 6 application, deploying it on GlassFish, and then re-deploying the same application by changing the target to Oracle WebLogic Server 12c (internal build). And now see the same application deployed to both servers. Don't miss the Oracle WebLogic Server 12c Launch Event on Dec 1. You can provide additional feedback about NetBeans on mailing lists and forums, file reports, and contact us via Twitter. The final release of NetBeans IDE 7.1 is planned for December.

    Read the article

  • sqlplus: Running "set lines" and "set pagesize" automatically

    - by katsumii
    This is a follow-up to my previous entry, "Using the full tty real estate with sqlplus" (INOUE Katsumi @ Tokyo). 'rlwrap' is widely used to add history and command-line editing to 'sqlplus'; here's another, admittedly kludgy, implementation. First, the alias:

        alias sqlplus="rlwrap -z ~/sqlplus.filter sqlplus"

    And this is the filter file's content:

        #!/usr/bin/env perl
        use lib ($ENV{RLWRAP_FILTERDIR} or ".");
        use RlwrapFilter;
        use POSIX qw(:signal_h);
        use strict;

        my $filter = new RlwrapFilter;
        $filter -> prompt_handler(\&prompt);
        sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
        $SIG{WINCH} = 'winchHandler';
        $filter -> run;

        sub winchHandler {
            $filter -> input_handler(\&input);
            sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
            $SIG{WINCH} = 'winchHandler';
            $filter -> run;
        }

        sub input {
            $filter -> input_handler(undef);
            return `resize |sed -n "1s/COLUMNS=/set linesize /p;2s/LINES=/set pagesize /p"` . $_;
        }

        sub prompt {
            if ($_ =~ "SQL> ") {
                $filter -> input_handler(\&input);
                $filter -> prompt_handler(undef);
            }
            return $_;
        }

    I hope I can compare these two implementations after testing more and getting some feedback.

    Read the article

  • Oracle Unified Method 5 Essentials Exam (Beta)

    - by user535886
    Oracle Unified Method 5 Essentials (1Z1-568) exam: The Oracle Unified Method Certified Implementation Specialist Certification identifies professionals who are skilled in Oracle’s all-inclusive methodology. The certification covers the core features of the Oracle Unified Method suite, including but not limited to Focus Areas, Use Cases, and Requirements Gathering. The certification proves a baseline of the consultant’s knowledge and allows the implementation team to work as a cohesive team from day 1. Up-to-date training and field experience are highly recommended. Target audience: implementation consultants. We are offering beta exam vouchers to Oracle Partners & Employees to earn the Oracle Implementation Specialist credential. Exam appointments will open soon for scheduling at authorized Pearson Vue testing centers. Due to high demand, we process requests on a first-come, first-served basis. If you would like to request a voucher, please send an e-mail to [email protected] with the following information for each participant: first and last name, business email address, company name, and exam name.

    Read the article

  • Oracle OpenWorld Session: “Business Driven Development with BPM: Lessons from the Real World”

    - by Ajay Khanna
    One of the key values that BPM promises is "Business Empowerment". The people closest to the processes, who participate in the process every day, are the ones who know most about the process. These are the people who run day-to-day operations, people who triage customer issues, people who envision improvements and innovations. It is, therefore, imperative that when a company decides to use BPM technology to automate their business processes, business people take the driver’s seat. BPM is not an IT-only project. Oracle BPM Suite has been designed keeping this core tenet of BPM, Business Empowerment, in mind. The result is the business-user-centered design of Process Composer. Process Composer is designed to let business users document their processes, analyze them using simulation, create web forms, specify business rules and even run them in testing mode using the process player, to see if the designed process meets their needs. This does not mean that IT has no role in this process. In fact, Oracle BPM Suite has made it very easy for business and IT to collaborate. The same process can be shared among business and IT stakeholders, and each can collaborate to create model-driven, process-based executable applications. A process may need to integrate with multiple systems via various mechanisms, and IT leads the system and data integration effort. IT helps fine-tune the performance of process applications and ensures that the deployment of a process application meets scalability and failover standards. In this session, we saw Harish Gaur and Satya Narayanan from Oracle demonstrate the roles business and IT play in BPM projects and how Oracle BPM Suite enables business and IT collaboration to design and automate process-based applications. They also discussed real-life customer stories. Some key takeaways from this session:

    - There are no IT projects, only business initiatives requiring IT support
    - Identify high-impact processes – critical, better BPM ROI
    - Identify key metrics to measure process performance
    - Align business with the IT layer

    Read the article

  • Best solution for getting referral information in PHP

    - by absentx
    I am currently redoing some link structuring on a website. In the past we have used specific PHP files on the last step to direct the user to the proper place. Example:

        www.mysite.com/action/go-to-blue.php

    or

        www.mysite.com/action/short/go-to-red.php
        www.mysite.com/action/tall/go-to-red.php

    We are now restructuring to eliminate the /short/ or /tall/ directory. What this means is that now "go-to-blue.php" will be doing some extra processing to make sure it sends the visitor to the proper place. The static method of the past was quite effective, because, well, if they left from that page we knew we had it right. Now, since we are 301 redirecting action/short/go-to-red.php to just action/go-to-red.php, it is quite important on "go-to-red.php" that we realize a user may have been redirected from /short/ or /tall/. So right now I am using HTTP_REFERER, and of course in my testing that works fine, but after a lot of reading it is clear that this is not a solid solution, so I was starting to brainstorm other ways to check and make sure we get the proper referral information. If we could check HTTP_REFERER plus some other test, I would feel confident we have a pretty good system in place to send the visitor to the right place. Some questions/comments:

    - Could I use a session variable or a cookie to accomplish this goal? If so, would it be maintained through the 301 redirect? I don't see why it wouldn't be...
    - Passing the URL in the URL is not an option in this case.
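    A hedged sketch of the session approach asked about above (file names from the question; the glue code is invented): the session ID lives in a cookie, so it survives the 301 as long as both URLs are on the same domain.

        <?php
        // In the old location, e.g. /action/short/go-to-red.php (hypothetical):
        session_start();
        $_SESSION['referral_variant'] = 'short';   // remember where they came from
        header('Location: /action/go-to-red.php', true, 301);
        exit;

    And on the receiving end:

        <?php
        // In /action/go-to-red.php:
        session_start();
        $variant = isset($_SESSION['referral_variant'])
            ? $_SESSION['referral_variant'] : 'default';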

    Read the article

  • hdmi audio works only with aplay -D alsa test wavs; open source radeon drivers; kernel 3.5 vgaswitcheroo

    - by user108754
    I've trawled the internet trying to make HDMI audio work on my system:

    - Ubuntu 12.04 (Software Center), kernel 3.5; uname: Linux ubuntu 3.5.0-18-generic #29~precise1-Ubuntu SMP...x86_64 x86_64 x86_64 GNU/Linux
    - open source radeon drivers
    - vgaswitcheroo (hybrid Intel/Radeon GPU): I boot with the Intel GPU running, not the Radeon. (Recall that with kernel 3.5, vgaswitcheroo now reports a third item, "DIS-Audio"; it indicates pwr on my system.)

    My /etc/rc.local contains:

        chown user:user /sys/kernel/debug/   # change "user" to your user name
        echo OFF > /sys/kernel/debug/vgaswitcheroo/switch

    grub indeed now has "radeon.audio=1". For testing audio, I did aplay -l, which gave me the card and device, which made me try:

        aplay -D plughw:1,3 /usr/share/sounds/alsa/Front_Center.wav

    and lo! I get crystal clear sound on my HDTV. If I play an mp3 file as the argument to that command, I get noise as, I guess, aplay interprets the mp3 data as a wav. If I play a .wav that is not in the /usr/share/sounds/alsa/ directory, I get nothing. Internet Flash video in the browser plays no sound over HDMI. Both the system Sound control and pavucontrol have HDMI Cedar selected. Alas, I cannot get sound for any GUI test (left, right). Why would only aplay, and only when directed with "-D plughw", yield sound over HDMI? I've also tried using only one sound program at a time, in case it was a limitation of ALSA, so I ran aplay with the web browser and even the sound control GUI closed. No improvement. alsamixer only shows HDA Intel, and I think that's only the Intel audio, not the HDMI.
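    One hedged experiment (a generic ALSA suggestion, not from the original post): point ALSA's default PCM at the HDMI device that aplay proved works, so programs that bypass PulseAudio take the same path:

        # ~/.asoundrc - route the ALSA default to card 1, device 3 (from 'aplay -l')
        pcm.!default {
            type plug
            slave.pcm "hw:1,3"
        }
        ctl.!default {
            type hw
            card 1
        }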

    Read the article

  • Lubuntu: How do I arrange windows with keyboard shortcuts?

    - by iveand
    I have been testing Lubuntu and have found it really pleasant. However, I cannot find an answer for one essential (for me) feature: being able to arrange windows with keyboard shortcuts. In Ubuntu this is done with the "Grid" plugin in "compizconfig-settings-manager"; Unity uses this as well for window arranging. I am not interested in the "drag to the left to take up 1/2 the screen", but rather just the "hit some key combo" to put the active window on the left 1/2 of the screen, for example. I found this entry on Ask Ubuntu asking a very similar question: Is there a way to make Openbox behave like the Compiz Grid plugin? But it seems the "answer to this question", namely pyTile (http://sourceforge.net/projects/pytile/), doesn't do what I would like it to do. I don't want to tile all current windows; I want to resize one window at a time as I specify (left, right, top, center, bottom, 1/2, 2/3, 1/3, etc.). And I would prefer to do so with the keyboard. I think it is possible to install Compiz in Lubuntu, but this seems like overkill for this one feature (Grid keyboard shortcuts). Any help out there? It would REALLY MAKE MY DAY!!!
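    For what it's worth, a hedged sketch of an Openbox-native answer (key chord and geometry chosen arbitrarily; recent Openbox versions accept percentage sizes in MoveResizeTo): add a keybinding to the <keyboard> section of ~/.config/openbox/lubuntu-rc.xml, e.g. Super+Left for the left half:

        <keybind key="W-Left">
          <action name="Unmaximize"/>
          <action name="MoveResizeTo">
            <x>0</x>
            <y>0</y>
            <width>50%</width>
            <height>100%</height>
          </action>
        </keybind>

    Then reload with openbox --reconfigure. Similar bindings with different x/width values would cover the 1/3 and 2/3 cases.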

    Read the article

  • Payroll Customers Must Apply Mandatory Patches to Maintain Your Supportability

    - by DanaD
    The HRMS Suite of products has minimum required Rollup Patch (RUP) levels as well as additional mandatory patches that our customers must apply to ensure they are in compliance for support. Without these patches, customers risk not being able to apply any fixes for issues they encounter, as these RUPs and mandatory patches are the minimum patch level expected by Oracle Support and Oracle Development. Core Payroll and International Payroll customers must apply the yearly Rollup Patch within 12 months of its issue. Legislative Payroll customers have additional requirements for the Rollup Patch, as the RUP is generally a prerequisite for the next Year End/Fourth Quarter/Year Begin payroll processing supported by Oracle. These minimum RUP patches and other mandatory patches for your product or legislation are created with the following goals in mind:

    - Compliance: Manage the people in your organization within the requirements of a specific country.
    - Supportability: Ensure you are on a common code base so that if problems are identified, patches can be readily provided to you.
    - Reliability: Reliable code with multiple customer downloads and comprehensive testing.

    For the listing of Mandatory Rollup Patches for Oracle Payroll, please view Doc ID 295406.1: Mandatory Family Pack/Rollup Patch (RUP) Levels for Oracle Payroll. For the listing of Mandatory Patches for the HRMS Suite, please view Doc ID 1160507.1: Oracle E-Business Suite HCM Information Center – Consolidated HRMS Mandatory Patch List. For information on the latest Rollup Patches (RUPs) for the HRMS Suite, please view Doc ID 135266.1: Oracle HRMS Product Family – Release 11i & 12 Information.

    Read the article
