Search Results

Search found 9217 results on 369 pages for 'cross apply'.

Page 73 of 369

  • Writing Acceptance test cases

    - by HH_
    We are integrating a testing process into our SCRUM process. My new role is to write acceptance tests for our web applications in order to automate them later. I have read a lot about how test cases should be written, but none of it gave me practical advice for writing test cases for complex web applications; instead it offered conflicting principles that I found hard to apply: Test cases should be short: Take the example of a CMS. Short test cases are easy to maintain and make it easy to identify the inputs and outputs. But what if I want to test a long series of operations (e.g. adding a document, sending a notification to another user, the other user replies, the document changes state, the user gets a notice)? It rather seems to me that test cases should represent complete scenarios. But I can see how this will produce overly complex test documents. Tests should identify inputs and outputs: What if I have a long form with many interacting fields, each with different behaviors? Do I write one test for everything, or one for each field? Test cases should be independent: But how can I apply that if testing the upload operation requires that the connect operation is successful? And how does it apply to writing test cases? Should I write a test for each operation, with each test declaring its dependencies, or should I rewrite the whole scenario for each test? Test cases should be lightly documented: This principle is specific to Agile projects. So do you have any advice on how to implement it? Although I thought that writing acceptance test cases was going to be simple, I found myself overwhelmed by every decision I had to make (FYI: I am a developer, not a professional tester). So my main question is: what steps or advice do you have for writing maintainable acceptance test cases for complex applications? Thank you.
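    One way to reconcile "short" with "independent" is to push shared preconditions into setup helpers, so each test states its dependencies without re-scripting them, and to keep a single end-to-end scenario test for the long chain. A minimal sketch in TypeScript with a Jest-style runner; every name here (Session, loginAs, uploadDocument) is a hypothetical stand-in, not a real API:

        // Shared preconditions live in helpers, so each test is short and
        // independent, yet the scenario chain is still covered exactly once.
        interface Session { user: string; docs: string[] }

        async function loginAs(user: string): Promise<Session> {
          return { user, docs: [] };        // stand-in for a real login step
        }

        async function uploadDocument(s: Session, name: string): Promise<void> {
          s.docs.push(name);                // stand-in for the real upload
        }

        // The test declares its dependency (a logged-in session) via the
        // helper instead of re-scripting the whole login scenario.
        test("upload requires an authenticated session", async () => {
          const s = await loginAs("alice");
          await uploadDocument(s, "report.pdf");
          expect(s.docs).toContain("report.pdf");
        });

        // One scenario test covers the long operation chain end to end.
        test("document workflow: upload, notify, reply, state change", async () => {
          const author = await loginAs("alice");
          await uploadDocument(author, "spec.doc");
          // ...notification, reply, and state-change assertions would follow
          expect(author.docs.length).toBe(1);
        });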

    Read the article

  • How-To Geek is Hiring a Geeky Writer – Here Are the Details

    - by The Geek
    Think you have the perfect combination of geek knowledge and writing skills? We’re looking for an experienced writer to join our team, and here are all the details. We need a new writer to cover topics surrounding Windows 7 or 8, home networking, home routers, security, media, troubleshooting, mobile devices, and many similar topics. We are not looking for writers that focus solely on Linux, or for tech news writers. Please apply if you have the following qualities: You must be a geek at heart. You must be able to put in plenty of time, work, and dedication. If you’re too busy already, don’t apply. You must be able to write articles that are easy to understand. You must be creative. You must generate ideas for articles on your own, and take suggestions like a pro. You must be at least 18 years old. You must have solid English writing skills. You must be able to write tips, how-to articles, explainers, guides, instructional articles, etc. Again, we are not looking for tech news writers. Here are a couple of our previous articles so you can get an idea of what we’re looking for in terms of quality and content. Please make sure to look through these before you decide to apply. How-to Article: Make Your Own Windows 8 Start Button with Zero Memory Usage Explainer: HTG Explains: When Do You Need to Update Your Drivers? Explainer: HTG Explains: Why Do Hard Drives Show the Wrong Capacity in Windows? How-To Article: How to Factory Reset Your Android Phone or Tablet When It Won’t Boot

    Read the article

  • What happens when you close an Adsense account?

    - by rakibtg
    I need to change my payee name. I asked in the Google AdSense product forum, and one of the top contributors replied: "You will have to close the account & apply again with using your real payee name. That's why they specifically state that the payee name needs to match the full name on your bank account." https://support.google.com/adsense/answer/47333?hl=en This makes sense, but I have a few questions because the support page does not have enough detail to help me. My questions are: What happens when you close your AdSense account? If I apply again, what will the process be to regain my account? I mean, will I have to apply with a website again, and will the AdSense team review and approve it? Is there any chance my new application will be rejected? What about my current checks? I have two checks in hand. Will Google reissue those checks with my new payee name? Has anyone experienced this problem? I have asked on the Google forum but got no answer!

    Read the article

  • Is there any professional way to illustrate difference between 2 diagrams?

    - by lamwaiman1988
    I made a document to describe the difference between the same logical structure from different versions (e.g. Structure A from version 8 and Structure A from version 9). Luckily I've got the logical structure diagrams from the two functional specifications. I've managed to copy the image of each logical structure, paste them into MS Word, and compare the two versions side by side. I don't know if there is a standard way to illustrate the difference. I simply draw a cross over each logical member removed from the previous version and draw a rectangle around each new logical member in the next version. I know my way is kinda childish. I am wondering how to present them professionally. In addition, you won't believe this, but MS Word doesn't have a cross shape, so I am actually using a multiplication sign that looks like a giant monster. This is why I hate myself. Unlike two separate lines, this shape is easy to use, draw and resize. I am wondering why MS Word doesn't offer a normal cross.

    Read the article

  • Replication with AK8

    - by Steve Tunstall
    Hello folks, This came up today and I want to make sure it's clear. Remember the "deferred update" I spoke about in my "Upgrade to AK8" entry just a bit ago? It's important to understand that this deferred update changes the way replication works. It is necessary that systems with the deferred update applied only replicate with other systems that have also had this deferred update applied. So if you apply it, your system can NOT replicate with ANY other system that has NOT had it applied, even if that other system is running AK8!!! Got it??? Remember, we do have a new version of the 2011 code for the older systems that do not want to upgrade to AK8. This 2011.1.8 code ALSO HAS this same deferred update in it. So, if you upgrade your system to AK8, and then apply the deferred update, and you have another system running either 2011.1.8 or AK8, you can replicate with them again once they apply the deferred update for multiple initiator groups. Yes, even if you're not using LUNs. Here is what it looks like if you try. It will fail.

    Read the article

  • Problem with AquaTerm on Snow Leopard

    - by cheetah
    When I try to install AquaTerm on Snow Leopard from MacPorts I get this: ---> Computing dependencies for aquaterm ---> Building aquaterm Error: Target org.macports.build returned: shell command "cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm" && /usr/bin/xcodebuild -target "AquaTerm" -configuration Deployment build OBJROOT=build/ SYMROOT=build/ MACOSX_DEPLOYMENT_TARGET=10.6 ARCHS=x86_64 SDKROOT= USER_APPS_DIR=/Applications/MacPorts FRAMEWORKS_DIR=/opt/local/Library/Frameworks" returned error 1 Command output: [WARN]Warning: The Copy Bundle Resources build phase contains this target's Info.plist file 'AquaTerm.framework-Info.plist'. CopyPlistFile build/Deployment/AquaTerm.framework/Versions/A/Resources/AquaTerm.framework-Info.plist AquaTerm.framework-Info.plist cd /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist AquaTerm.framework-Info.plist --outdir /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.framework/Versions/A/Resources error: can't exec '/Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist' (No such file or directory) Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist failed with exit code 71 Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist failed with exit code 71 === BUILD NATIVE TARGET AquaTerm OF PROJECT AquaTerm WITH CONFIGURATION Deployment === Check dependencies Warning: Multiple build commands for output file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources/help.html [WARN]Warning: Multiple build commands for output file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources/help.html CopyTiffFile build/Deployment/AquaTerm.app/Contents/Resources/Cross.tiff English.lproj/Cross.tiff cd /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff English.lproj/Cross.tiff --outdir /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources error: can't exec '/Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff' (No such file or directory) Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff failed with exit code 71 Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff failed with exit code 71 ** BUILD FAILED ** The following build commands failed: AQTFwk: CopyPlistFile /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.framework/Versions/A/Resources/AquaTerm.framework-Info.plist AquaTerm.framework-Info.plist AquaTerm: CopyTiffFile
/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources/Cross.tiff English.lproj/Cross.tiff (2 failures) Error: Status 1 encountered during processing. Before reporting a bug, first run the command again with the -d flag to get complete output. How can I solve this problem?

    Read the article

  • Should an event-sourced aggregate root have query access to the event sourcing repository?

    - by JD Courtoy
    I'm working on an event-sourced CQRS implementation, using DDD in the application / domain layer. I have an object model that looks like this: public class Person : AggregateRootBase { private Guid? _bookingId; public Person(Identification identification) { Apply(new PersonCreatedEvent(identification)); } public Booking CreateBooking() { // Enforce Person invariants var booking = new Booking(); Apply(new PersonBookedEvent(booking.Id)); return booking; } public void Release() { // Enforce Person invariants // Should we load the booking here from the aggregate repository? // We need to ensure that booking is released as well. var booking = BookingRepository.Load(_bookingId); booking.Release(); Apply(new PersonReleasedEvent(_bookingId)); } [EventHandler] public void Handle(PersonBookedEvent @event) { _bookingId = @event.BookingId; } [EventHandler] public void Handle(PersonReleasedEvent @event) { _bookingId = null; } } public class Booking : AggregateRootBase { private DateTime _bookingDate; private DateTime? _releaseDate; public Booking() { //Enforce invariants Apply(new BookingCreatedEvent()); } public void Release() { //Enforce invariants Apply(new BookingReleasedEvent()); } [EventHandler] public void Handle(BookingCreatedEvent @event) { _bookingDate = SystemTime.Now(); } [EventHandler] public void Handle(BookingReleasedEvent @event) { _releaseDate = SystemTime.Now(); } // Some other business activities unrelated to a person } With my understanding of DDD so far, both Person and Booking are separate aggregate roots for two reasons: There are times when business components will pull Booking objects separately from the database (i.e., a person that has been released has a previous booking modified due to incorrect information). There should not be locking contention between Person and Booking whenever a Booking needs to be updated. One other business requirement is that a Booking can never occur for a Person more than once at a time. Due to this, I'm concerned about querying the query database on the read side as there could potentially be some inconsistency there (due to using CQRS and having an eventually consistent read database). Should the aggregate roots be allowed to query the event-sourced backing store for objects (lazy-loading them as needed)? Are there any other avenues of implementation that would make more sense?
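    For what it's worth, one commonly described alternative is to keep Release() local to Person (it only applies PersonReleasedEvent) and move the cross-aggregate coordination into a process manager that reacts to that event, so neither aggregate loads the other. A rough TypeScript sketch of the idea, not tied to any framework; every type and interface here is an illustrative assumption:

        // A process manager subscribes to PersonReleasedEvent and issues a
        // command to the Booking aggregate, making the eventual consistency
        // between the two aggregates explicit instead of hidden in Person.
        interface PersonReleasedEvent { kind: "PersonReleased"; bookingId: string }
        interface ReleaseBookingCommand { kind: "ReleaseBooking"; bookingId: string }

        interface CommandBus {
          send(cmd: ReleaseBookingCommand): void;
        }

        class BookingReleaseProcessManager {
          constructor(private bus: CommandBus) {}

          handle(event: PersonReleasedEvent): void {
            // The command handler loads the Booking aggregate and calls its
            // Release(), so Person never touches a repository.
            this.bus.send({ kind: "ReleaseBooking", bookingId: event.bookingId });
          }
        }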

    Read the article

  • Binary socket and policy file in Flex

    - by Daniil
    Hi, I'm trying to evaluate whether Flex can access binary sockets. Seems that there's a class called Socket (in the flash.net package). The requirement is that Flex will connect to a server serving binary data. It will then subscribe to data and receive the feed, which it will interpret and display as a chart. I've never worked with Flex; my experience lies with Java, so everything is new to me. So I'm trying to quickly set something simple up. The Java server expects the following: DataInputStream in = ..... byte cmd = in.readByte(); int size = in.readByte(); byte[] buf = new byte[size]; in.readFully(buf); ... do some stuff and send binary data in something like out.writeByte(1); out.writeInt(10000); ... etc... Flex needs to connect to localhost:6666, do the handshake, and read data. I got something like this: try { var socket:Socket = new Socket(); socket.connect('192.168.110.1', 9999); Alert.show('Connected.'); socket.writeByte(108); // 'l' socket.writeByte(115); // 's' socket.writeByte(4); socket.writeMultiByte('HHHH', 'ISO-8859-1'); socket.flush(); } catch (err:Error) { Alert.show(err.message + ": " + err.toString()); } The first thing is that Flex does a <policy-file-request/>. I've modified the server to respond with: <?xml version="1.0"?> <!DOCTYPE cross-domain-policy SYSTEM "/xml/dtds/cross-domain-policy.dtd"> <cross-domain-policy> <site-control permitted-cross-domain-policies="master-only"/> <allow-access-from domain="192.168.110.1" to-ports="*" /> </cross-domain-policy> After that, an EOFException happens on the server and that's it. So the question is, am I approaching the whole streaming data issue wrong when it comes to Flex? Am I sending the policy file wrong? Unfortunately, I can't seem to find a good solid example of how to do it. It seems to me that Flex can do binary client-server applications, but I personally lack some basic knowledge when doing it. I'm using Flex 3.5 in IntelliJ IDEA IDE. Any help is appreciated. Thank you!
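    A note that may help here: for raw Socket connections, newer Flash Players expect the policy to be served over a socket rather than HTTP, and the exchange is NUL-delimited — the player sends <policy-file-request/> followed by a zero byte, expects the policy XML back also terminated by a zero byte, then closes and reconnects (which would explain the server-side EOFException). A minimal sketch of a standalone socket policy server in TypeScript on Node's net module; the Node runtime and the port/policy contents are assumptions, not part of the original setup:

        // Answers Flash's NUL-delimited policy handshake on port 843 (Flash's
        // default policy port; binding it may require elevated privileges).
        import * as net from "net";

        const policy =
          '<?xml version="1.0"?>' +
          '<cross-domain-policy>' +
          '<allow-access-from domain="*" to-ports="9999"/>' +
          '</cross-domain-policy>';

        net.createServer((socket) => {
          socket.on("data", (buf) => {
            if (buf.toString().indexOf("<policy-file-request/>") !== -1) {
              socket.end(policy + "\0"); // policy XML, NUL terminator, close
            }
          });
        }).listen(843);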

    Read the article

  • When to call glEnable(GL_FRAMEBUFFER_SRGB)?

    - by Steven Lu
    I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to another FBO with a texture to resolve the samples, so that I can read off the texture to perform post-processing shading while drawing to the backbuffer (FBO index 0). Now I'd like to get some correct sRGB output... The problem is that the program's behavior is rather inconsistent between OS X and Windows, and it also changes depending on the machine: On Windows with the Intel HD 3000 it will not apply the sRGB nonlinearity, but on my other machine with an Nvidia GTX 670 it does. On the Intel HD 3000 in OS X it will also apply it. So this probably means that I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However, I can't seem to find any tutorials that actually tell me when I ought to enable it; they only ever mention that it's dead easy and comes at no performance cost. I am currently not loading in any textures, so I haven't had a need to deal with linearizing their colors yet. To force the program to not simply spit back out the linear color values, what I have tried is simply commenting out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively means this setting is enabled for the entire pipeline, and I actually redundantly force it back on every frame. I don't know if this is correct or not. It certainly does apply a nonlinearization to the colors, but I can't tell if this is getting applied twice (which would be bad). It could apply the gamma as I render to my first FBO. It could do it when I blit the first FBO to the second FBO. Why not? I've gone so far as to take screen shots of my final frame and compare raw pixel color values to the colors I set them to in the program: I set the input color to RGB(1,2,3) and the output is RGB(13,22,28). That seems like quite a lot of color compression at the low end and leads me to question if the gamma is getting applied multiple times. I have just now gone through the sRGB equation and I can verify that the conversion seems to be applied only once, as linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4)-0.055. Given that the expansion is so large for these low color values, it really should be obvious if the sRGB color transform is getting applied more than once. So, I still haven't determined what the right thing to do is. Does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set it during my GL init routine and forget about it hereafter?
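    For reference, the standard sRGB encoding being checked here is piecewise, and the second branch subtracts the offset:

        C_srgb = 12.92 * C_lin                    for C_lin <= 0.0031308
        C_srgb = 1.055 * C_lin^(1/2.4) - 0.055    for C_lin >  0.0031308

    Plugging in C_lin = 1/255 ≈ 0.00392 gives 1.055 * 0.00392^(1/2.4) - 0.055 ≈ 0.0498, i.e. about 13/255 — matching the measured output and confirming the transform was applied only once.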

    Read the article

  • SQL Server Configuration timeouts - and a workaround [SSIS]

    - by jamiet
    Ever since I started writing SSIS packages back in 2004 I have opted to store configurations in .dtsConfig (i.e. XML) files rather than in a SQL Server table (aka SQL Server Configurations); however, recently I inherited some packages that used SQL Server Configurations and thus had to immerse myself in their murky little world. To all the people that have ever gone onto the SSIS forum and asked questions about ambiguous behaviour of SQL Server Configurations I now say this... I feel your pain! The biggest problem I have had was in dealing with the change to the order in which configurations get applied that came about in SSIS 2008. Those changes are detailed on MSDN at SSIS Package Configurations, however the pertinent bits are: As the utility loads and runs the package, events occur in the following order: The dtexec utility loads the package. The utility applies the configurations that were specified in the package at design time and in the order that is specified in the package. (The one exception to this is the Parent Package Variables configurations. The utility applies these configurations only once and later in the process.) The utility then applies any options that you specified on the command line. The utility then reloads the configurations that were specified in the package at design time and in the order specified in the package. (Again, the exception to this rule is the Parent Package Variables configurations). The utility uses any command-line options that were specified to reload the configurations. Therefore, different values might be reloaded from a different location. The utility applies the Parent Package Variable configurations. The utility runs the package. To understand how these steps differ from SSIS 2005 I recommend reading Doug Laudenschlager’s blog post Understand how SSIS package configurations are applied. The very nature of SQL Server Configurations means that the Connection String for the database holding the configuration values needs to be supplied from the command-line. Typically then the call to execute your package resembles this: dtexec /FILE Package.dtsx /SET "\Package.Connections[SSISConfigurations].Properties[ConnectionString]";"\"Data Source=SomeServer;Initial Catalog=SomeDB;Integrated Security=SSPI;\"", The problem then is that, as per the steps above, the package will (1) attempt to apply all configurations using the Connection String stored in the package for the "SSISConfigurations" Connection Manager before then (2) applying the Connection String from the command-line and then (3) applying the same configurations all over again. In the packages that I inherited that first attempt to apply the configurations would time out (not unexpected); I had 8 SQL Server Configurations in the package and thus the package was waiting for 2 minutes until all the Configurations timed out (i.e. 15 seconds per Configuration) - in a package that only executes for ~8 seconds when it gets to do its actual work, a delay of 2 minutes was simply unacceptable. We had three options in how to deal with this: Get rid of the use of SQL Server configurations and use .dtsConfig files instead Edit the packages when they get deployed Change the timeout on the "SSISConfigurations" Connection Manager #1 was my preferred choice but, for reasons I explain below*, wasn't an option in this particular instance. #2 was discounted out of hand because it negates the point of using Configurations in the first place. This left us with #3 - change the timeout on the Connection Manager.
    This is done by going into the properties of the Connection Manager, opening the "All" tab and changing the Connect Timeout property to some suitable value (in the screenshot below I chose 2 seconds). This change meant that the attempts to apply the SQL Server configurations timed out in 16 seconds rather than two minutes; clearly this isn't an optimum solution but it's certainly better than it was. So there you have it - if you are having problems with SQL Server configuration timeouts within SSIS try changing the timeout of the Connection Manager. Better still - don't bother using SQL Server Configurations in the first place. Even better - install RC0 of SQL Server 2012 to start leveraging SSIS parameters and leave the nasty old world of configurations behind you. @Jamiet * Basically, we are leveraging an SSIS execution/logging framework in which the client had invested a lot of resources and SQL Server Configurations are an integral part of that.
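    As an aside, for an OLE DB connection manager the same setting can ride along in the connection string itself, so the command-line /SET override can carry the shorter timeout too. A sketch, reusing the placeholder server and database names from the example above:

        Data Source=SomeServer;Initial Catalog=SomeDB;Integrated Security=SSPI;Connect Timeout=2;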

    Read the article

  • JavaScript: this

    - by bdukes
    JavaScript is a language steeped in juxtaposition.  It was made to “look like Java,” yet is dynamic and classless.  From this origin, we get the new operator and the this keyword.  You are probably used to this referring to the current instance of a class, so what could it mean in a language without classes? In JavaScript, this refers to the object off of which a function is referenced when it is invoked (unless it is invoked via call or apply). What this means is that this is not bound to your function, and can change depending on how your function is invoked. It also means that this changes when declaring a function inside another function (i.e. each function has its own this), such as when writing a callback. Let's see some of this in action: var obj = { count: 0, increment: function () { this.count += 1; }, logAfterTimeout: function () { setTimeout(function () { console.log(this.count); }, 1); } }; obj.increment(); console.log(obj.count); // 1 var increment = obj.increment; window.count = 'global count value: '; increment(); console.log(obj.count); // 1 console.log(window.count); // global count value: 1 var newObj = {count:50}; increment.call(newObj); console.log(newObj.count); // 51 obj.logAfterTimeout();// global count value: 1 obj.logAfterTimeout = function () { var proxiedFunction = $.proxy(function () { console.log(this.count); }, this); setTimeout(proxiedFunction, 1); }; obj.logAfterTimeout(); // 1 obj.logAfterTimeout = function () { var that = this; setTimeout(function () { console.log(that.count); }, 1); }; obj.logAfterTimeout(); // 1 The last couple of examples here demonstrate some methods for making sure you get the values you expect.  The first time logAfterTimeout is redefined, we use jQuery.proxy to create a new function which has its this permanently set to the passed in value (in this case, the current this).  The second time logAfterTimeout is redefined, we save the value of this in a variable (named that in this case, also often named self) and use the new variable in place of this. Now, all of this is to clarify what’s going on when you use this.  However, it’s pretty easy to avoid using this altogether in your code (especially in the way I’ve demonstrated above).  Instead of using this.count all over the place, it would have been much easier if I’d made count a variable instead of a property, and then I wouldn’t have to use this to refer to it.  var obj = (function () { var count = 0; return { increment: function () { count += 1; }, logAfterTimeout: function () { setTimeout(function () { console.log(count); }, 1); }, getCount: function () { return count; } }; }()); If you’re writing your code in this way, the main place you’ll run into issues with this is when handling DOM events (where this is the element on which the event occurred).  In that case, just be careful when using a callback within that event handler, that you’re not expecting this to still refer to the element (and use proxy or that/self if you need to refer to it). Finally, as demonstrated in the example, you can use call or apply on a function to set its this value.  This isn’t often needed, but you may also want to know that you can use apply to pass in an array of arguments to a function (e.g. console.log.apply(console, [1, 2, 3, 4])).

    Read the article

  • SQL SERVER – Installing SQL Server Data Tools and SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. If you have installed SQL Server but are missing the Data Tools or Reporting Services, double-click the SQL Server 2012 installation media. Click the Installation link on the left to view the Installation options. Click the top link, New SQL Server stand-alone installation or add features to an existing installation. Follow the SQL Server Setup wizard until you get to the Installation Type screen. At that screen, select Add features to an existing instance of SQL Server 2012. Click Next to move to the Feature Selection page. Select Reporting Services – Native and SQL Server Data Tools. If the Management Tools have not been installed, go ahead and choose them as well. Continue through the wizard and reboot the computer at the end of the installation if instructed to do so. Configure Reporting Services If you installed Reporting Services during the installation of the SQL Server instance, SSRS will be configured automatically for you. If you install SSRS later, then you will have to go back and configure it as a subsequent step. Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools > Reporting Services Configuration Manager > Connect on the Reporting Services Configuration Connection dialog box. On the left-hand side of the Reporting Services Configuration Manager, click Database. Click the Change Database button on the right side of the screen. Select Create a new report server database and click Next. Click through the rest of the wizard accepting the defaults. This wizard creates two databases: ReportServer, used to store report definitions and security, and ReportServerTempDB, which is used as scratch space when preparing reports for user requests. Now click Web Service URL on the left-hand side of the Reporting Services Configuration Manager. Click the Apply button to accept the defaults. If the Apply button has been grayed out, move on to the next step. This step sets up the SSRS web service. The web service is the program that runs in the background that communicates between the web page, which you will set up next, and the databases. The final configuration step is to select the Report Manager URL link on the left. Accept the default settings and click Apply. If the Apply button was already grayed out, this means SSRS was already configured. This step sets up the Report Manager web site where you will publish reports. You may be wondering if you also must install a web server on your computer. SQL Server does not require that the Internet Information Server (IIS), the Microsoft web server, be installed to run Report Manager. Click Exit to dismiss the Reporting Services Configuration Manager dialog box. Tomorrow’s Post Tomorrow’s blog post will show how to create your first report using the Report Wizard. If you want to learn SSRS in simple words – I strongly recommend getting the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications In the EBS terminology, a customization can be an extension or a modification. Extension means that you mainly create your own code from scratch. You may utilize existing views, packages and Java classes, but your code is unique. Modifications are quite different, because here you take existing code and change or enhance certain areas to achieve a slightly different behavior. What's important is that it doesn't matter whether you place your code in the same or in another place – it is a modification. It is also not relevant whether you leave the original code enabled or not! Why? Here is the answer: if the original code piece you have taken as your base gets patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard – this causes a high risk! Here are some guidelines on how to reduce the risk: Invest a bit longer when searching for objects to select data from. Choose a view rather than a table. In case Oracle development changes the underlying tables, the view will be more stable and is therefore a better choice. Prefer public APIs over internal APIs. Same background as before: although the internal structure might change, the public API is more stable. Use personalization and substitution rather than modification. Spend more time checking whether the requirement can be covered with such techniques. Build a project code library, so that colleagues avoid creating similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction. Use the technique of “flagged files”. Flagged files are a way to mark a standard deployment file. If you run the patch analysis (within Application Manager), the analysis result will list flagged standard files in case they will be patched. If you maintain a cross reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed. Implement a code review process. This can be done by utilizing team-internal or external persons. If you implement such a team-internal process, your team members will come up with suggestions on how to improve the code quality by themselves. Review heavy customizations regularly to identify options to reduce complexity; say, every six months. You need not spend days on such a review, but a high-level cross-check of whether the customization can be reduced is suggested. Uninstall customizations which are no longer required. Define a process for this. Add a section to the technical documentation on how to uninstall and what the possible implications are. Maintain a cross reference between CEMLIs, and between CEMLIs, EBS modules and business processes. Keep this list up to date! Share this list! By following these guidelines, you are able to improve product stability. Although we might not be able to avoid modifications completely, we can give much better advice to developers and to our test team. Summary: Extensions and Modifications have to be handled differently during their lifecycle. Modifications imply a much higher risk and should therefore be reviewed more frequently. Good cross references allow you to give clear advice for the testing activities.

    Read the article

  • Security error accessing Service outside of FlexBuilder

    - by MikeHoss
    I'm very new to Flex and I have what I think is a head-scratcher. I am building a little Flash app that will consume some web services over HTTP. When I am in FlexBuilder and run my app there, it works fine. When I go to my FlexBuilder project in my OS and double-click on it, it works fine. When I zip up my bin-debug folder, I get this error: Security error accessing url faultCode:Channel.Security.Error faultString: 'Security error accessing url' faultDetail:'Destination: DefaultHTTP' So I googled that and got information about the crossdomain.xml file. Well, I can't put a crossdomain file in the service I am calling, but I can put one somewhere else. So I put the following lines in my Flex app: Security.allowDomain("vx1391"); Security.loadPolicyFile("http://vx1391:8080/job/Remote%20FIT%20Runner/ws/trunk/flash-cross-domain.xml"); My cross-domain.xml file is wide open: <cross-domain-policy> <allow-access-from domain="*"/> </cross-domain-policy> I know that is bad in a prod environment, but right now I just need to get this working locally but outside of FlexBuilder. Anyone want to help out this Flex-noob?

    Read the article

  • XSLT 1.0 help with recursion logic

    - by DashaLuna
    Hello guys, I'm having trouble with the logic and would appreciate any help/tips. I have <Deposits> elements and <Receipts> elements. However, there isn't any identification of what receipt was paid toward what deposit. I am trying to update the <Deposits> elements with the following attributes: @DueAmount - the amount that is still due to pay @Status - whether it's paid, outstanding (partly paid) or due @ReceiptDate - the latest receipt's date that was paid towards this deposit Every deposit could be paid with one or more receipts. It could also happen that one receipt covers one or more deposits. For example. If there are 3 deposits: 500 100 450 That are paid with the following receipts: 200 100 250 I want to get the following info: Deposit 1 is fully paid (status=paid, dueAmount=0, receiptNum=3). Deposit 2 is partly paid (status=outstanding, dueAmount=50, receiptNum=3). Deposit 3 is not paid (status=due, dueAmount=450, receiptNum=NAN). I've added comments in the code explaining what I'm trying to do. I have been staring at this code non-stop for three days now and can't see what I'm doing wrong. Could anyone please help me with it? :) Thanks! Set up: $deposits - All the available deposits $receiptsAsc - All the available receipts sorted by their @ActionDate Code: <!-- Accumulate all the deposits with @Status, @DueAmount and @ReceiptDate attributes Provide all deposits, receipts and start with 1st receipt --> <xsl:variable name="depositsClassified"> <xsl:call-template name="classifyDeposits"> <xsl:with-param name="depositsAll" select="$deposits"/> <xsl:with-param name="receiptsAll" select="$receiptsAsc"/> <xsl:with-param name="receiptCount" select="'1'"/> </xsl:call-template> </xsl:variable> <!-- Recursive function to associate deposits' total amounts with overall receipts paid to determine whether a deposit is due, outstanding or paid. Also determine what's the due amount and latest receipt towards the deposit for each deposit --> <xsl:template name="classifyDeposits"> <xsl:param name="depositsAll"/> <xsl:param name="receiptsAll"/> <xsl:param name="receiptCount"/> <!-- If there are deposits to proceed --> <xsl:if test="$depositsAll"> <!-- Get the 1st deposit --> <xsl:variable name="deposit" select="$depositsAll[1]"/> <!-- Calculate the sum of all receipts up to and including currently considered --> <xsl:variable name="receiptSum"> <xsl:choose> <xsl:when test="$receiptsAll"> <xsl:value-of select="sum($receiptsAll[position() &lt;= $receiptCount]/@ReceiptAmount)"/> </xsl:when> <xsl:otherwise>0</xsl:otherwise> </xsl:choose> </xsl:variable> <!-- Difference between deposit amount and sum of the receipts calculated above --> <xsl:variable name="diff" select="$deposit/@DepositTotalAmount - $receiptSum"/> <xsl:choose> <!-- Deposit isn't paid fully and more receipts/payments exist. So consider the same deposit, but take the next receipt into calculation as well --> <xsl:when test="($diff &gt; 0) and ($receiptCount &lt; count($receiptsAll))"> <xsl:call-template name="classifyDeposits"> <xsl:with-param name="depositsAll" select="$depositsAll"/> <xsl:with-param name="receiptsAll" select="$receiptsAll"/> <xsl:with-param name="receiptCount" select="$receiptCount + 1"/> </xsl:call-template> </xsl:when> <!-- Deposit is paid or we ran out of receipts --> <xsl:otherwise> <!-- process the deposit.
Determine its status and then update corresponding attributes --> <xsl:apply-templates select="$deposit" mode="defineDeposit"> <xsl:with-param name="diff" select="$diff"/> <xsl:with-param name="receiptNum" select="$receiptCount"/> </xsl:apply-templates> <!-- Recursively call the template with the rest of deposits excluding the first. Before hand update the @ReceiptsAmount. For the receipts before current it is now 0, for the current is what left in the $diff, and simply copy over receipts after current one. --> <xsl:variable name="receiptsUpdatedRTF"> <xsl:for-each select="$receiptsAll"> <xsl:choose> <!-- these receipts was fully accounted for the current deposit. Make them 0 --> <xsl:when test="position() &lt; $receiptCount"> <xsl:copy> <xsl:copy-of select="./@*"/> <xsl:attribute name="ReceiptAmount">0</xsl:attribute> </xsl:copy> </xsl:when> <!-- this receipt was partly/fully(in case $diff=0) accounted for the current deposit. Make it whatever is in $diff --> <xsl:when test="position() = $receiptCount"> <xsl:copy> <xsl:copy-of select="./@*"/> <xsl:attribute name="ReceiptAmount"> <xsl:value-of select="format-number($diff, '#.00;#.00')"/> </xsl:attribute> </xsl:copy> </xsl:when> <!-- these receipts weren't yet considered - copy them over --> <xsl:otherwise> <xsl:copy-of select="."/> </xsl:otherwise> </xsl:choose> </xsl:for-each> </xsl:variable> <xsl:variable name="receiptsUpdated" select="msxsl:node-set($receiptsUpdatedRTF)/Receipts"/> <!-- Recursive call for the next deposit. Starting counting receipts from the current one. --> <xsl:call-template name="classifyDeposits"> <xsl:with-param name="depositsAll" select="$deposits[position() != 1]"/> <xsl:with-param name="receiptsAll" select="$receiptsUpdated"/> <xsl:with-param name="receiptCount" select="$receiptCount"/> </xsl:call-template> </xsl:otherwise> </xsl:choose> </xsl:if> </xsl:template> <!-- Determine deposit's status and due amount --> <xsl:template match="MultiDeposits" mode="defineDeposit"> <xsl:param name="diff"/> <xsl:param name="receiptNum"/> <xsl:choose> <xsl:when test="$diff &lt;= 0"> <xsl:apply-templates select="." mode="addAttrs"> <xsl:with-param name="status" select="'paid'"/> <xsl:with-param name="dueAmount" select="'0'"/> <xsl:with-param name="receiptNum" select="$receiptNum"/> </xsl:apply-templates> </xsl:when> <xsl:when test="$diff = ./@DepositTotalAmount"> <xsl:apply-templates select="." mode="addAttrs"> <xsl:with-param name="status" select="'due'"/> <xsl:with-param name="dueAmount" select="$diff"/> </xsl:apply-templates> </xsl:when> <xsl:when test="$diff &lt; ./@DepositTotalAmount"> <xsl:apply-templates select="." 
mode="addAttrs"> <xsl:with-param name="status" select="'outstanding'"/> <xsl:with-param name="dueAmount" select="$diff"/> <xsl:with-param name="receiptNum" select="$receiptNum"/> </xsl:apply-templates> </xsl:when> <xsl:otherwise/> </xsl:choose> </xsl:template> <!-- Add new attributes (@Status, @DueAmount and @ReceiptDate) to the deposit element --> <xsl:template match="MultiDeposits" mode="addAttrs"> <xsl:param name="status"/> <xsl:param name="dueAmount"/> <xsl:param name="receiptNum" select="''"/> <xsl:copy> <xsl:copy-of select="./@*"/> <xsl:attribute name="Status"><xsl:value-of select="$status"/></xsl:attribute> <xsl:attribute name="DueAmount"><xsl:value-of select="$dueAmount"/></xsl:attribute> <xsl:if test="$receiptNum != ''"> <xsl:attribute name="ReceiptDate"> <xsl:value-of select="$receiptsAsc[position() = $receiptNum]/@ActionDate"/> </xsl:attribute> </xsl:if> <xsl:copy-of select="./*"/> </xsl:copy> </xsl:template>

    Read the article

  • Javascript "Member not found" error in IE8

    - by Steven
    I'm trying to debug the following block of JavaScript code to see what the issue is. I'm getting an error that says "Member not found" on the line constructor = function() { in the extend:function() method. I'm not very good with JavaScript, and I didn't write this, so I'm kind of lost on what the issue is. The error only occurs in IE8; it works fine in IE7 and Firefox. var Class = { create: function() { return function() { if(this.destroy) Class.registerForDestruction(this); if(this.initialize) this.initialize.apply(this, arguments); } }, extend: function(baseClassName) { constructor = function() { var i; this[baseClassName] = {} for(i in window[baseClassName].prototype) { if(!this[i]) this[i] = window[baseClassName].prototype[i]; if(typeof window[baseClassName].prototype[i] == 'function') { this[baseClassName][i] = window[baseClassName].prototype[i].bind(this); } } if(window[baseClassName].getInheritedStuff) { window[baseClassName].getInheritedStuff.apply(this); } if(this.destroy) Class.registerForDestruction(this); if(this.initialize) this.initialize.apply(this, arguments); } constructor.getInheritedStuff = function() { this[baseClassName] = {} for(i in window[baseClassName].prototype) { if(!this[i]) this[i] = window[baseClassName].prototype[i]; if(typeof window[baseClassName].prototype[i] == 'function') { this[baseClassName][i] = window[baseClassName].prototype[i].bind(this); } } if(window[baseClassName].getInheritedStuff) { window[baseClassName].getInheritedStuff.apply(this); } } return constructor; }, objectsToDestroy : [], registerForDestruction: function(obj) { if(!Class.addedDestructionLoader) { Event.observe(window, 'unload', Class.destroyAllObjects); Class.addedDestructionLoader = true; } Class.objectsToDestroy.push(obj); }, destroyAllObjects: function() { var i,item; for(i=0;item=Class.objectsToDestroy[i];i++) { if(item.destroy) item.destroy(); } Class.objectsToDestroy = null; } }

    Read the article

  • Automatically stashing

    - by Readonly
    The section Last links in the chain: Stashing and the reflog in http://ftp.newartisans.com/pub/git.from.bottom.up.pdf recommends stashing often to take snapshots of your work in progress. The author goes so far as to recommend using a cron job to stash your work regularly, without having to do a stash manually. The beauty of stash is that it lets you apply unobtrusive version control to your working process itself: namely, the various stages of your working tree from day to day. You can even use stash on a regular basis if you like, with something like the following snapshot script: $ cat <<EOF > /usr/local/bin/git-snapshot #!/bin/sh git stash && git stash apply EOF $ chmod +x $_ $ git snapshot There’s no reason you couldn’t run this from a cron job every hour, along with running the reflog expire command every week or month. The problems with this approach are: If there are no changes to your working copy, the "git stash apply" will cause your last stash to be applied over your working copy. There could be race conditions between when the cron job executes and the user working on the working copy. For example, "git stash" runs, then the user opens the file, then the script's "git stash apply" is executed. Does anybody have suggestions for making this automatic stashing work more reliably?

    Read the article

  • Detecting browser capabilities and selective events for mouse and touch

    - by skidding
    I have been using touch events for a while now, but I just stumbled upon quite a problem. Until now, I checked whether touch capabilities were supported, and applied selective events based on that. Like this: if(document.ontouchmove === undefined){ //apply mouse events }else{ //apply touch events } However, my scripts stopped working in Chrome 5 (which is currently beta) on my computer. I researched it a bit, and as I expected, in Chrome 5 (as opposed to older Chrome, Firefox, IE, etc.) document.ontouchmove is no longer undefined but null. At first I wanted to submit a bug report, but then I realized: there are devices that have both mouse and touch capabilities, so that might be natural; maybe Chrome now defines it because my OS might support both types of events. So the solution seems easy: apply BOTH event types. Right? Well, the problem now takes place on mobile. In order to be backward compatible and support scripts that only use mouse events, mobile browsers might try to fire them as well (on touch). So then, with both mouse and touch events set, a certain handler might be called twice every time. What is the way to approach this? Is there a better way to check and apply selective events, or must I ignore the problems that might occur if browsers fire both touch and mouse events at times?
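    A detection style that sidesteps the undefined-vs-null difference is the "in" operator, and pairing it with a timestamp that mutes the emulated mouse events fired after a touch is one common way to handle dual-capability devices. A TypeScript sketch assuming a DOM environment; the 500 ms window is an arbitrary choice:

        // Feature-detect with "in" (true whether the property is null or a
        // handler; false only when the property does not exist at all).
        const supportsTouch = "ontouchmove" in document;

        let lastTouch = 0;

        function onActivate(x: number, y: number): void {
          console.log("activate at", x, y);
        }

        if (supportsTouch) {
          document.addEventListener("touchstart", (e: TouchEvent) => {
            lastTouch = Date.now();
            const t = e.touches[0];
            onActivate(t.clientX, t.clientY);
          });
        }

        document.addEventListener("mousedown", (e: MouseEvent) => {
          // Ignore mouse events arriving just after a touch: they are likely
          // the browser's backward-compatibility emulation, not a real mouse.
          if (Date.now() - lastTouch < 500) return;
          onActivate(e.clientX, e.clientY);
        });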

    Read the article

  • Wrong getBounds() on LineScaleMode.NONE

    - by ghalex
    I have written a simple example that adds a canvas and draws a rectangle with stroke size 20 and scale mode none. The problem is that if I call getBounds() the first time, I get a correct result, but after I call scale(); the getBounds() function gives me a wrong result. It takes the stroke into consideration even though the stroke's scale mode is none: nothing changes on the screen, but in the result I get a smaller x value. Can somebody tell me how I can fix this? protected var display :Canvas; protected function addCanvas():void { display = new Canvas(); display.x = display.y = 50; display.width = 100; display.height = 100; display.graphics.clear(); display.graphics.lineStyle( 20, 0x000000, 0.5, true, LineScaleMode.NONE ); display.graphics.beginFill( 0xff0000, 1 ); display.graphics.drawRect(0, 0, 100, 100); display.graphics.endFill(); area.addChild( display ); traceBounce(); } protected function scale():void { var m :Matrix = display.transform.matrix; var apply :Matrix = new Matrix(); apply.scale( 2, 1 ); apply.concat( m ); display.transform.matrix = apply; traceBounce(); } protected function traceBounce():void { trace( display.getBounds( this ) ); }

    Read the article

  • Open source file upload with no timeout on IIS6 with ASP, ASP.NET 2.0 or PHP5

    - by Christopher Done
    I'm after a cross-platform, cross-browser way of uploading files such that there is no timeout. Uploads aren't necessarily huge -- some just take a long time to upload because of the uploader's slow connection -- but the server times out anyway. I hear that there are methods to upload files in chunks so that somehow the server decides not to time out the upload. After searching around, all I can find are proprietary upload helpers and Java and Flash (SWFUpload) widgets that aren't cross-platform, don't upload in chunks, or aren't free. I'd like a way to do it in any of these platforms (ASP, ASP.NET 2.0 or PHP5), though I am not very clued up on all this .NET class/controller/project/module/visual studio/compile/etc stuff, so some kind of runnable complete project that runs on .NET 2.0 would be helpful. PHP and ASP I can assume will be more straightforward. Unless I am completely missing something, which I suspect/hope I am, reasonable web uploads are bloody hard work in any language or platform. So my question is: how can I perform web browser uploads, cross-platform, so that they don't time out, using free software? Is it possible at all?
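    For the record, the chunking idea is straightforward on the client side: slice the file and send each piece as its own short request, so no single request lives long enough to hit the server timeout, while the server appends chunks keyed by an upload id. A browser-side sketch in TypeScript using today's File API and fetch (both postdate this question); the /upload endpoint and its parameters are assumptions:

        // Upload a file in 1 MB slices so each HTTP request stays short.
        // The server is assumed to append chunks for a given id in order.
        async function uploadInChunks(file: File, chunkSize = 1024 * 1024) {
          const id = Math.random().toString(36).slice(2); // naive upload id
          const total = Math.ceil(file.size / chunkSize);
          for (let i = 0; i < total; i++) {
            const blob = file.slice(i * chunkSize, (i + 1) * chunkSize);
            await fetch(`/upload?id=${id}&index=${i}&total=${total}`, {
              method: "POST",
              body: blob,                    // one small request per chunk
            });
          }
        }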

    Read the article

  • Rails: link_to_remote prototype helper with :with option

    - by Syed Aslam
    I am trying to grab the current value of a drop-down list with Prototype and pass it along using :with, like this: <%= link_to_remote "today", :update => "choices", :url => { :action => "check_availability" } , :with => "'practitioner='+$F('practitioner')&'clinic='+$F('clinic')&'when=today'", :loading => "spinner.show(); $('submit').disable();", :complete => "spinner.hide(); $('submit').enable();" %> However, this is not working as expected. I am unable to access the parameters in the controller, as the link_to_remote helper is sending parameters like this: Parameters: {"succ"=>"function () {\n return this + 1;\n}", "action"=>"check_availability", "round"=>"function () {\n return __method.apply(null, [this].concat($A(arguments)));\n}", "ceil"=>"function () {\n return __method.apply(null, [this].concat($A(arguments)));\n}", "floor"=>"function () {\n return __method.apply(null, [this].concat($A(arguments)));\n}", "times"=>"function (iterator, context) {\n $R(0, this, true).each(iterator, context);\n return this;\n}", "toPaddedString"=>"function (length, radix) {\n var string = this.toString(radix || 10);\n return \"0\".times(length - string.length) + string;\n}", "toColorPart"=>"function () {\n return this.toPaddedString(2, 16);\n}", "abs"=>"function () {\n return __method.apply(null, [this].concat($A(arguments)));\n}", "controller"=>"main"} Where am I going wrong? Is there a better way to do this?

    Read the article

  • How do you configure an SDL Tridion CME extension for a subset of views?

    - by Chris Summers
    I have created a new editor for SDL Tridion which adds some new functionality to the ribbon bar. This is enabled by adding the following snippet to the editor.config <!-- ItemCommenting PowerTool --> <ext:extension assignid="ItemCommenting" name="Save and&lt;br/&gt;Comment" pageid="HomePage" groupid="ManageGroup" insertbefore="SaveCloseBtn"> <ext:command>PT_ItemCommenting</ext:command> <ext:title>Save and Comment</ext:title> <ext:issmallbutton>false</ext:issmallbutton> <ext:dependencies> <cfg:dependency>PowerTools.Commands</cfg:dependency> </ext:dependencies> <ext:apply> <ext:view name="*" /> </ext:apply> </ext:extension> This is applied to all views by using a wildcard value in the <ext:view> node. This results in my new button being added to the ribbon of every view, including the main dashboard. Is there a way to add this to all views except for the dashboard? Or do I have to create something like this? <ext:apply> <ext:view name="PageView" /> <ext:view name="ComponentView" /> <ext:view name="SchemaView" /> </ext:apply> If this is the only way to achieve the result I need, is there a list of all the view names somewhere?

    Read the article

  • Convert an XML document to a comma-delimited (CSV) file using an XSLT stylesheet

    - by Brad H
    I need some assistance converting an XML document to a CSV file using an XSLT stylesheet. I am trying to use the following XSL and I can't seem to get it right. I want my comma-delimited file to include column headings, followed by the data. My biggest issues are removing the final comma after the last item and inserting a carriage return so each group of data appears on a separate line. I have been using XML Notepad. <xsl:template match="/"> <xsl:element name="table"> <xsl:apply-templates select="/*/*[1]" mode="header" /> <xsl:apply-templates select="/*/*" mode="row" /> </xsl:element> </xsl:template> <xsl:template match="*" mode="header"> <xsl:element name="tr"> <xsl:apply-templates select="./*" mode="column" /> </xsl:element> </xsl:template> <xsl:template match="*" mode="row"> <xsl:element name="tr"> <xsl:apply-templates select="./*" mode="node" /> </xsl:element> </xsl:template> <xsl:template match="*" mode="column"> <xsl:element name="th"> <xsl:value-of select="translate(name(.),'qwertyuiopasdfghjklzxcvbnm_','QWERTYUIOPASDFGHJKLZXCVBNM ')" /> </xsl:element>, </xsl:template> <xsl:template match="*" mode="node"> <xsl:element name="td"> <xsl:value-of select="." /> </xsl:element>, </xsl:template>

    Read the article

  • Is it possible to navigate to the parent node of a matched node during XSLT processing?

    - by Darin
    I'm working with an OpenXML document, processing the main document part with some XSLT. I've selected a set of nodes via <xsl:template match="w:sdt"> </xsl:template> In most cases, I simply need to replace that matched node with something else, and that works fine. BUT, in some cases, I need to replace not the w:sdt node that matched, but the closest w:p ancestor node (i.e. the first paragraph node that contains the sdt node). The trick is that the condition used to decide one or the other is based on data derived from the attributes of the sdt node, so I can't use a typical XSLT XPath filter. I'm trying to do something like this <xsl:template match="w:sdt"> <xsl:choose> <xsl:when test={first condition}> {apply whatever templating is necessary} </xsl:when> <xsl:when test={exception condition}> <!-- select the parent of the ancestor w:p nodes and apply the appropriate templates --> <xsl:apply-templates select="(ancestor::w:p)/.." mode="backout" /> </xsl:when> </xsl:choose> </xsl:template> <!-- by using "mode", only this template will be applied to those matching nodes from the apply-templates above --> <xsl:template match="node()" mode="backout"> {CUSTOM FORMAT the node appropriately} </xsl:template> This whole concept works, BUT no matter what I've tried, it always applies the formatting from the CUSTOM FORMAT template to the w:p node, NOT its parent node. It's almost as if you can't reference a parent from a matching node. And maybe you can't, but I haven't found any docs that say you can't. Any ideas?

    Read the article
