Search Results

Search found 18096 results on 724 pages for 'let me be'.

Page 77/724 | < Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >

  • Is Tracking Software Usage Illegal?

    - by Graviton
    Let's say I am developing a desktop application, and I am interested to know whether our software really gets used or not. Is it all right to insert code that tracks whether our software is used, for how long, and so on? Note that no person-identifiable information will be collected; all I am interested in knowing is how frequently and for how long the software is used. The information will be sent to our server for diagnosis. What do you think?
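    A minimal sketch of the kind of anonymous usage ping being described, written in TypeScript purely for illustration; the endpoint URL, payload fields and version string are all invented, and a real implementation should disclose the collection in its license or privacy policy:

        // Anonymous usage ping: no user-identifiable data, only an app version
        // and how long the session lasted. Endpoint and field names are hypothetical.
        interface UsagePing {
          appVersion: string;
          sessionSeconds: number;
        }

        async function sendUsagePing(ping: UsagePing): Promise<void> {
          try {
            // Hypothetical diagnostics endpoint; replace with your own server.
            await fetch("https://example.com/usage", {
              method: "POST",
              headers: { "Content-Type": "application/json" },
              body: JSON.stringify(ping),
            });
          } catch {
            // Telemetry must never break the app; ignore network failures.
          }
        }

        async function main() {
          const startedAt = Date.now();
          // ... the application runs here ...
          await sendUsagePing({
            appVersion: "1.0.0",
            sessionSeconds: (Date.now() - startedAt) / 1000,
          });
        }

        main();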

    Read the article

  • What is this juju status ERROR state?

    - by JUAN CABALLERO
    After I do a juju bootstrap I wait until cloud-init is finished, but I get no juju and the following errors:

        ERROR state/api: websocket.Dial wss://b4exj.master:17070/: dial tcp 198.105.244.240:17070: connection timed out
        ERROR state/api: websocket.Dial wss://b4exj.master:17070/: dial tcp 198.105.244.240:17070: connection timed out

    Now let me add that b4exj.master does not reside at 198.105.244.240:17070 but at 10.x.x.x. This is on Ubuntu 12.04.4, MAAS 1.4 and juju 1.18, all 64-bit, non-VM.

    Read the article

  • SQL Server v.Next (Denali) : Another SSMS bug that should be fixed

    - by AaronBertrand
    Sorry to call this out in a separate post (I talked about a bunch of SSMS Connect items the other day), but Aaron Nelson ( blog | twitter ) jogged my memory today about an issue that has gone unfixed for years: the custom coloring for Registered Servers is neither consistent nor global. For one of my servers, I've chosen a red color to show in the status bar. Let's pretend this is a production server, and I want the red to remind me to use caution. I can set this up by right-clicking a Registered...(read more)

    Read the article

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance tuning SSIS data flows is this: why isn’t the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, so why wouldn’t the output be sorted as well? Well, here’s a little secret: the Merge Join output IS sorted! There’s a caveat though – it is only under certain circumstances, and SSIS itself doesn’t do a good job of informing you of it.

    Let’s take a look at an example. Here we have a dataflow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] & [AdventureWorks2008].[Sales].[SalesOrderDetail] tables then joins them using a Merge Join component. Inside the editor of the Merge Join we are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon), and we are also putting [SalesOrderHeader].[SalesOrderId] into the output.

    Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence this is hidden away from us. There are a couple of ways to prove that this is the case; I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that – I can attach a Sort component to the output, attempting to sort on the [SalesOrderId] column. This gives us the following warning:

        Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed.

    The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output.

    So there you go, the Merge Join component can indeed produce a sorted output, and that’s very useful in order to avoid unnecessary expensive Sort operations downstream. Hope this is useful to someone out there!

    @Jamiet

    P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!

    Read the article

  • Map building - Tower Defense

    - by Dan K
    Before diving too deep into my question, let it be known that I am still learning as far as JavaScript goes, and I figured a simple Tower Defense game would be an excellent way to learn things. I have found a simple background image with a path drawn on it, and my question is how I would go about building a path so that I can animate my objects. Would I have to take the image and overlay a grid system, or can I store the path in some sort of array and have my objects move across it, as in the sketch below? (The background image with the drawn path was attached to the original post.)
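    One straightforward approach, sketched below in TypeScript (close enough to the JavaScript being learned): store the path as an ordered array of waypoints picked off the background image, and each frame move the object toward the next waypoint. The coordinates here are invented for illustration.

        // Path stored as an ordered list of waypoints traced along the image's road.
        // Coordinates are illustrative; you would pick them off your background image.
        type Point = { x: number; y: number };

        const path: Point[] = [
          { x: 0, y: 120 },
          { x: 200, y: 120 },
          { x: 200, y: 320 },
          { x: 480, y: 320 },
        ];

        class Walker {
          pos: Point = { ...path[0] };
          private target = 1; // index of the waypoint we are heading toward

          // Advance along the path; returns false once the end is reached.
          update(speed: number, dt: number): boolean {
            if (this.target >= path.length) return false;
            const goal = path[this.target];
            const dx = goal.x - this.pos.x;
            const dy = goal.y - this.pos.y;
            const dist = Math.hypot(dx, dy);
            const step = speed * dt;
            if (dist <= step) {
              this.pos = { ...goal }; // snap to the waypoint, aim at the next one
              this.target += 1;
            } else {
              this.pos.x += (dx / dist) * step;
              this.pos.y += (dy / dist) * step;
            }
            return true;
          }
        }

    Each frame you would call walker.update(enemySpeed, frameSeconds) and then draw the sprite at walker.pos; no grid overlay is needed unless you later want towers to snap to cells.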

    Read the article

  • Data structures in functional programming

    - by pwny
    I'm currently playing with LISP (particularly Scheme and Clojure) and I'm wondering how typical data structures are dealt with in functional programming languages. For example, let's say I would like to solve a problem using a graph pathfinding algorithm. How would one typically go about representing that graph in a functional programming language (primarily interested in pure functional style that can be applied to LISP)? Would I just forget about graphs altogether and solve the problem some other way?
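    One common answer: keep the graph as an immutable adjacency map and write the search as a pure function that builds new values rather than mutating state. Below is a sketch in TypeScript for illustration (the same shape, a map from node to neighbours, translates directly to a Clojure map or a Scheme association list); the graph itself is made up.

        // Graph as an immutable adjacency map: node -> neighbours.
        const graph: ReadonlyMap<string, readonly string[]> = new Map([
          ["a", ["b", "c"]],
          ["b", ["d"]],
          ["c", ["d"]],
          ["d", []],
        ]);

        // Pure breadth-first search: no mutation is visible to the caller;
        // returns the shortest path as a fresh array, or null if unreachable.
        function bfsPath(
          g: ReadonlyMap<string, readonly string[]>,
          start: string,
          goal: string
        ): string[] | null {
          let frontier: string[][] = [[start]];
          let visited: ReadonlySet<string> = new Set([start]);
          while (frontier.length > 0) {
            const next: string[][] = [];
            for (const path of frontier) {
              const node = path[path.length - 1];
              if (node === goal) return path;
              for (const n of g.get(node) ?? []) {
                if (!visited.has(n)) {
                  visited = new Set([...visited, n]); // fresh set, old one untouched
                  next.push([...path, n]);            // fresh path array
                }
              }
            }
            frontier = next;
          }
          return null;
        }

        console.log(bfsPath(graph, "a", "d")); // ["a", "b", "d"]

    Rebuilding the visited set on each insert is deliberately naive to keep the example pure; persistent data structures (as in Clojure) give you the same immutability without the copying cost.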

    Read the article

  • Error message during update from 13.04 to 13.10

    - by layonhands
    The following was reported after I attempted to report the problem back to Ubuntu:

    The problem cannot be reported: You have some obsolete package versions installed. Please upgrade the following packages and check if the problem still occurs: ubuntu-release-upgrader-gtk, apport, apport-gtk, apport-symptoms, apt, apt-utils, at-spi2-core, binutils, dbus, gcc-4.7-base, gdb, gir1.2-atk-1.0, gir1.2-gtk-3.0, glib-networking, glib-networking-common, glib-networking-services, gnupg, gpgv, ifupdown, initramfs-tools, initramfs-tools-bin, kmod, libappindicator3-1, libapt-inst1.5, libapt-pkg4.12, libasound2, libatk-bridge2.0-0, libatk1.0-0, libatk1.0-data, libatspi2.0-0, libc-bin, libc6, libcups2, libdbus-1-3, libdbusmenu-glib4, libdbusmenu-gtk3-4, libdrm-intel1, libdrm-nouveau2, libdrm-radeon1, libdrm2, libgail-3-0, libgcc1, libgcrypt11, libglib2.0-0, libglib2.0-data, libgnutls26, libgomp1, libgstreamer-plugins-base1.0-0, libgstreamer1.0-0, libgtk-3-0, libgtk-3-bin, libgtk-3-common, libgudev-1.0-0, libicu48, libindicator3-7, libkmod2, liblcms2-2, libpci3, libplymouth2, libpolkit-agent-1-0, libpolkit-backend-1-0, libpolkit-gobject-1-0, libprocps0, libpython-stdlib, libpython2.7, libpython2.7-minimal, libpython2.7-stdlib, libpython3-stdlib, libpython3.3-minimal, libpython3.3-stdlib, libssl1.0.0, libstdc++6, libtiff5, libudev1, libx11-6, libx11-data, libx11-xcb1, libxcb-dri2-0, libxcb-glx0, libxcb-render0, libxcb-shm0, libxcb1, libxcursor1, libxext6, libxfixes3, libxi6, libxinerama1, libxml2, libxrandr2, libxrender1, libxres1, libxt6, libxtst6, libxxf86vm1, lsb-base, lsb-release, module-init-tools, multiarch-support, openssl, passwd, pciutils, perl, perl-base, perl-modules, plymouth, plymouth-theme-ubuntu-text, policykit-1, procps, python, python-gi, python-minimal, python2.7, python2.7-minimal, python3, python3-apport, python3-distupgrade, python3-gi, python3-minimal, python3-problem-report, python3-software-properties, python3-update-manager, python3.3, python3.3-minimal, rsyslog, shared-mime-info, software-properties-common, software-properties-gtk, tar, tzdata, ubuntu-release-upgrader-core, ubuntu-release-upgrader-gtk, udev, update-manager, update-manager-core, update-notifier, update-notifier-common

    If this question has already been answered, I'm sorry for the repost, but I would appreciate a link to the fix. Thanks. FYI: Dell Latitude D630, Intel Centrino processor. Also, the updater is currently running what seems to be the update. I will report back when it is done going through its process to let you know if it is in fact the 13.10 update.

    Update 2: System went through an update, but it wasn't for the OS. I think it was an update for the error message mentioned above. Now the OS update is currently running the 'distribution upgrade' portion of the update. This is further than it had gone before. Again I will report back once this is done to let you know whether or not the update was successful.

    Final Update: Don't know for sure what happened, but I'm almost sure that the error mentioned above was resolved in the first update prior to the 13.10 update. All set.

    Read the article

  • Help, I can't reference my vars!

    - by SystemNetworks
    I have a sub-class (let's call it sub) that contains all the functions for one object in my game. In my main class (let's call it main), I create an instance of sub:

    Code:
        s = new sub();

    Then I call the sub's function in the update method:

    Code:
        s.myFunc();

    In my sub I have booleans, integers, floats and more. The problem is that I don't want my sub class to reach back into my main class to use main's ints, booleans and the rest; if I connect the two classes to each other, it will have a stack overflow. This is what I put in my sub:

    Code:
        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Input;
        import org.newdawn.slick.state.StateBasedGame;

        public class Store {

            public Integer wood;
            public Float probePositionX;
            public Float probePositionY;
            public Boolean StoreOn;
            public Boolean darkBought;
            public Integer money;
            public Integer darkEnergy;
            public Integer lifeLeft;
            public Integer powerLeft;

            public void darkStores(GameContainer gc, StateBasedGame sbg, GameContainer gc2) {
                Input input1 = gc.getInput();
                // The player needs 200 wood to enter; if not there will be an error.
                if (wood >= 200) {
                    // Enter the store!
                    if (input1.isKeyDown(Input.KEY_Q)) {
                        // The player must be at these coordinates!
                        if ((probePositionX > 393 && probePositionX < 555)
                                && (probePositionY < 271 && probePositionY > 171)) {
                            // The store is on
                            StoreOn = true;
                        }
                    }
                }
            }
        }

    In my main class (in the update function) I put:

    Code:
        s.darkBought = darkBought;
        s.darkEnergy = darkEnergy;
        s.lifeLeft = lifeLeft;
        s.money = money;
        s.powerLeft = powerLeft;
        s.probePositionX = probePositionX;
        s.probePositionY = probePositionY;
        s.StoreOn = StoreOn;
        s.wood = wood;
        s.darkStores(gc, sbg, gc);

    The problem is that when I go to the place and press Q, nothing shows up. It should show another image. Is there anything wrong?

    Read the article

  • How to efficiently render resizable GUI elements in DirectX?

    - by PolGraphic
    I wonder what would be the most efficient way to render GUI elements. For constant-size elements (which can still be moving), a texture atlas seems good. But what about resizable elements, say a panel with textured borders? Is there any better way than just rendering nine rectangles with textures on them (I guess one texture and different texture coordinates for the left-top corner, border, middle, etc., used in a shader)?
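    For what it's worth, the nine-rectangle approach (9-slice) is the usual answer; the common optimization is to emit all nine quads into one vertex buffer so the panel still renders in a single draw call from a single texture, with the corners fixed-size and the edges and centre stretched. Below is a sketch of the destination-rectangle math, in TypeScript with invented names:

        // 9-slice: split a panel into corners (fixed size), edges (stretched on
        // one axis) and a centre (stretched on both), all sampling one texture.
        type Rect = { x: number; y: number; w: number; h: number };

        // Returns the nine destination rectangles for a panel of size w*h whose
        // texture has a border `b` pixels thick. Pair each with the matching
        // source rectangle in the atlas and emit one quad per entry.
        function nineSlice(x: number, y: number, w: number, h: number, b: number): Rect[] {
          const xs = [x, x + b, x + w - b]; // column starts
          const ys = [y, y + b, y + h - b]; // row starts
          const ws = [b, w - 2 * b, b];     // column widths
          const hs = [b, h - 2 * b, b];     // row heights
          const out: Rect[] = [];
          for (let row = 0; row < 3; row++)
            for (let col = 0; col < 3; col++)
              out.push({ x: xs[col], y: ys[row], w: ws[col], h: hs[row] });
          return out; // 9 quads -> one vertex buffer -> one draw call
        }

    Resizing the panel then only means rebuilding (or re-uploading) these nine quads; the texture and shader never change.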

    Read the article

  • Are there still plans for a new sound theme?

    - by Ingo Gerth
    Let me quote from Mark's blog almost one year ago:

    March 5th, 2010 at 7:19 pm: Mark, will there be an update to the sound theme to match the updated visual brand?

    Mark Shuttleworth: Gack, I completely forgot about that. A very good point. Would you see if you can rally a round of community submissions for a sound theme inspired by light?

    Let's keep it short and sweet: what are the current considerations for the Ubuntu default sound theme?

    Read the article

  • When is a Seek not a Seek?

    - by Paul White
    The following script creates a single-column clustered table containing the integers from 1 to 1,000 inclusive:

        IF OBJECT_ID(N'tempdb..#Test', N'U') IS NOT NULL
            DROP TABLE #Test;
        GO
        CREATE TABLE #Test
        (
            id INTEGER PRIMARY KEY CLUSTERED
        );

        INSERT #Test (id)
        SELECT V.number
        FROM master.dbo.spt_values AS V
        WHERE V.[type] = N'P'
        AND V.number BETWEEN 1 AND 1000;

    Let’s say we need to find the rows with values from 100 to 170, excluding any values that divide exactly by 10.  One way to write that query would be:

        SELECT T.id
        FROM #Test AS T
        WHERE T.id IN
        (
            101,102,103,104,105,106,107,108,109,
            111,112,113,114,115,116,117,118,119,
            121,122,123,124,125,126,127,128,129,
            131,132,133,134,135,136,137,138,139,
            141,142,143,144,145,146,147,148,149,
            151,152,153,154,155,156,157,158,159,
            161,162,163,164,165,166,167,168,169
        );

    That query produces a pretty efficient-looking query plan.  Knowing that the source column is defined as an INTEGER, we could also express the query this way:

        SELECT T.id
        FROM #Test AS T
        WHERE T.id >= 101
        AND T.id <= 169
        AND T.id % 10 > 0;

    We get a similar-looking plan.  If you look closely, you might notice that the line connecting the two icons is a little thinner than before.  The first query is estimated to produce 61.9167 rows – very close to the 63 rows we know the query will return.  The second query presents a tougher challenge for SQL Server because it doesn’t know how to predict the selectivity of the modulo expression (T.id % 10 > 0).  Without that last line, the second query is estimated to produce 68.1667 rows – a slight overestimate.  Adding the opaque modulo expression results in SQL Server guessing at the selectivity.  As you may know, the selectivity guess for a greater-than operation is 30%, so the final estimate is 30% of 68.1667, which comes to 20.45 rows.

    The second difference is that the Clustered Index Seek is costed at 99% of the estimated total for the statement.  For some reason, the final SELECT operator is assigned a small cost of 0.0000484 units; I have absolutely no idea why this is so, or what it models.  Nevertheless, we can compare the total cost for both queries: the first one comes in at 0.0033501 units, and the second at 0.0034054.  The important point is that the second query is costed very slightly higher than the first, even though it is expected to produce many fewer rows (20.45 versus 61.9167).

    If you run the two queries, they produce exactly the same results, and both complete so quickly that it is impossible to measure CPU usage for a single execution.  We can, however, compare the I/O statistics for a single run by running the queries with STATISTICS IO ON:

        Table '#Test'. Scan count 63, logical reads 126, physical reads 0.
        Table '#Test'. Scan count 01, logical reads 002, physical reads 0.

    The query with the IN list uses 126 logical reads (and has a ‘scan count’ of 63), while the second query form completes with just 2 logical reads (and a ‘scan count’ of 1).  It is no coincidence that 126 = 63 * 2, by the way.  It is almost as if the first query is doing 63 seeks, compared to one for the second query.

    In fact, that is exactly what it is doing.  There is no indication of this in the graphical plan, or the tool-tip that appears when you hover your mouse over the Clustered Index Seek icon.  To see the 63 seek operations, you have to click on the Seek icon and look in the Properties window (press F4, or right-click and choose from the menu).  The Seek Predicates list shows a total of 63 seek operations – one for each of the values from the IN list contained in the first query.  I have expanded the first seek node to show the details; it is seeking down the clustered index to find the entry with the value 101.  Each of the other 62 nodes expands similarly, and the same information is contained (even more verbosely) in the XML form of the plan.

    Each of the 63 seek operations starts at the root of the clustered index B-tree and navigates down to the leaf page that contains the sought key value.  Our table is just large enough to need a separate root page, so each seek incurs 2 logical reads (one for the root, and one for the leaf).  We can see the index depth using the INDEXPROPERTY function, or by using a DMV:

        SELECT S.index_type_desc, S.index_depth
        FROM sys.dm_db_index_physical_stats
        (
            DB_ID(N'tempdb'),
            OBJECT_ID(N'tempdb..#Test', N'U'),
            1, 1, DEFAULT
        ) AS S;

    Let’s look now at the Properties window when the Clustered Index Seek from the second query is selected.  There is just one seek operation, which starts at the root of the index and navigates the B-tree looking for the first key that matches the Start range condition (id >= 101).  It then continues to read records at the leaf level of the index (following links between leaf-level pages if necessary) until it finds a row that does not meet the End range condition (id <= 169).  Every row that meets the seek range condition is also tested against the Residual Predicate highlighted above (id % 10 > 0), and is only returned if it matches that as well.

    You will not be surprised that the single seek (with a range scan and residual predicate) is much more efficient than 63 singleton seeks.  It is not 63 times more efficient (as the logical reads comparison would suggest), but it is around three times faster.  Let’s run both query forms 10,000 times and measure the elapsed time:

        DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE();
        SET NOCOUNT ON;
        SET STATISTICS XML OFF;

        WHILE @n > 0
        BEGIN
            SELECT @i = T.id
            FROM #Test AS T
            WHERE T.id IN
            (
                101,102,103,104,105,106,107,108,109,
                111,112,113,114,115,116,117,118,119,
                121,122,123,124,125,126,127,128,129,
                131,132,133,134,135,136,137,138,139,
                141,142,143,144,145,146,147,148,149,
                151,152,153,154,155,156,157,158,159,
                161,162,163,164,165,166,167,168,169
            );
            SET @n -= 1;
        END;

        PRINT DATEDIFF(MILLISECOND, @s, GETDATE());
        GO
        DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE();
        SET NOCOUNT ON;

        WHILE @n > 0
        BEGIN
            SELECT @i = T.id
            FROM #Test AS T
            WHERE T.id >= 101
            AND T.id <= 169
            AND T.id % 10 > 0;
            SET @n -= 1;
        END;

        PRINT DATEDIFF(MILLISECOND, @s, GETDATE());

    On my laptop, running SQL Server 2008 build 4272 (SP2 CU2), the IN form of the query takes around 830ms and the range query about 300ms.  The main point of this post is not performance, however – it is meant as an introduction to the next few parts in this mini-series that will continue to explore scans and seeks in detail.

    When is a seek not a seek?  When it is 63 seeks.

    © Paul White 2011
    email: [email protected]
    twitter: @SQL_kiwi

    Read the article

  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of Powershell (+1 for the team) asked me what initially appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?"

    Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the Username and Password in SQLPSX functions". He rather pointedly responded: "My friend, I might as well pass: Username-'Me'-password 'NowEverybodyKnowsMyPassword'"

    As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, and the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his Powershell ETL session at Tampa SQL Saturday (and you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my Authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in my local txt file, together with the names of the servers that I'll actually use the Powershell scripts on. Something like:

        Server1,UserName,Password
        Server2,UserName,Password"

    I thought about it for just a few milliseconds (Ha! Of course I'm not telling you how long it actually took me, I have to do my own marketing, after all) and the solution was finally ready.

    First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and another for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the help functions.

    Next, we have to create a txt file with your encrypted passwords:

        $ServerName = "Server1"
        $UserName = "Login1"
        $Password = "Senha1"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

        $ServerName = "Server2"
        $UserName = "Login2"
        $Password = "senha2"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

    In the c:\temp\ServersSecurePassword.txt file which we've just created, you will find your Username and Password, all neatly encrypted, with server names, usernames and passwords separated by commas.

    Decryption is actually much simpler:

        Read-EncryptedString -InputString $EncryptString -password "YourPassword"

    (Just remember that the password you use to decrypt must be exactly the same one that was used to encrypt.)

    Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a CHECKDB on a system database: it's just a case of split, decrypt and be happy!

        Get-Content c:\temp\ServersSecurePassword.txt | foreach {
            [array] $Split = ($_).split(",")
            Invoke-DBMaint -server $($Split[0]) -UserName (Read-EncryptedString -InputString $Split[1] -password "YourPassword") -Password (Read-EncryptedString -InputString $Split[2] -password "YourPassword") -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
        }

    This is why I love Powershell.

    Read the article

  • Need help: which key elements to focus on in the code review phase?

    - by Sankar Ganesh
    Hi friends, let us share our views on the code review process. If someone gave you a code snippet and asked you to review it, what are the major things you would focus on during the review? For instance, I check whether any dead code is present. Other than checking for dead code, what are the key elements to focus on in the code review process? Thanks for sharing your views. Sankar Ganesh.S

    Read the article

  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF.  But how do you test whether your business objects are actually doing what you want and verify that your business logic is correct?  Well, I was reading my monthly MSDN magazine late last year and came across an article about using SpecFlow and WatiN to build BDD tests.  So why not use these same techniques to write SpecFlow-style scripts and have them generate EasyTest scripts for use with XAF?  Let me outline and show a few things below.  I plan on releasing this code in a short while; I just wanted to preview what I was thinking.

    Before we begin… First, if you have not read the article in MSDN, here is the link to the article where I found my inspiration.  It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax, and how to use the “Steps” logic to create your own tests.  Second, if you have not heard of EasyTest from DevExpress, I strongly recommend you review it here.  It basically takes the power of XAF and the beauty of your application and allows you to create text-based files to execute automated commands within your application.  Why would we do this?  Because, as you will see below, the Cucumber syntax is easier for business analysts to interpret and digest the business rules from.  You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls, located here.  The basics of the syntax are Given X, When Y, Then Z.  For example: Given I am at the login screen, When I enter my login credentials, Then I expect to see the home screen.  Pretty easy syntax to follow.  Finally, we will need to download and install SpecFlow.  You can find it on their website here.  Once you have this installed, let’s write our first test.

    Let’s get started… So where to start?  Create a new testing project within your solution.  I typically name this with a convention similar to the one used by XAF: my project name plus .FunctionalTests (i.e.  AlbumManager.FunctionalTests).  Remove the basic test that is created for you.  We will not use the default test but rather create our own SpecFlow “Feature” files.  Add a new item to your project and select the SpecFlow Feature file under C#.  Name your feature file, as you do your class files, after the test it is performing.

    Now you can crack open your new feature file and write the actual test.  Make sure to have your Ninja Scrolls from above, as they provide valuable resources on how to write your test syntax.  In the test below you can see how I defined the documentation in the Feature section.  This is strictly for our purposes of readability and does not affect the test.  The next section is the Scenario Outline, which is considered a test template.  You can see the brackets <> around the fields that will be filled in for each test.  So in the example below: Given I am starting a new test and the application is open means I want a new EasyTest file and the windows application generated by XAF is open.  Next, When I am at the Albums screen tells XAF to navigate to the Albums list view.  And I click the New:Album button tells XAF to click the new button on the list grid.  And I enter the following information tells XAF which fields to complete with the mapped values.  And I click the Save and Close button causes the record to be saved and the detail form to be closed.  Then I verify results tests the input data against what is visible in the grid to ensure that your record was created.

    The Scenarios section gives each test a unique name and then fills in the values for each test.  This way you can use the same test to make multiple passes with different data.

    Almost there.  Now we must save the feature file, and the BDD tests will be written using standard unit test syntax.  This is all handled for you by SpecFlow, so just save the file.  What you will see in your Test List Editor is a unit test for each of the scenarios you just built.  You can now use standard unit testing frameworks to execute the test as you desire.  As you would expect, these BDD SpecFlow tests can then be automated into your build process to ensure that your business requirements are satisfied each and every time.

    How does it work?  What we have done is to intercept the testing logic at runtime and interpret the SpecFlow syntax into EasyTest syntax.  These are the basic StepDefinitions that we are working on now.  We expect to put these on CodePlex within the next few days.  You can always override and make your own rules as you see fit for your project.  Follow the MSDN magazine article above to start your own.  You can see part of our implementation below.  As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax.

    The code implementation for these rules basically saves your information from the feature file into the EasyTest file format.  It then executes the EasyTest file and parses the XML results of the test.  If the test succeeds, the test is passed.  If the test fails, the EasyTest failure message is logged and the screen shot (as captured by EasyTest) is saved for your review.

    Again, we are working on getting this code ready for mass consumption, but at this time it is not ready.  We will post another message when it is ready with all details about usage and setup.  Thanks

    Read the article

  • Secure Store Service Application not available in SharePoint 2010 Standard

    - by Haseeb Akhtar
    We have migrated from SharePoint 2010 Foundation to SharePoint 2010 Standard. Now the problem is that we are looking for the Secure Store Service on the 'Services on Server' page in Central Administration, but we don't see it. We have another server where SharePoint 2010 Standard is installed, and there we can see the Secure Store Service available. Please let me know what needs to be done. Thanks in advance

    Read the article

  • Solution for payment gateway with multiple sellers

    - by pvieira
    I'm looking for a payment gateway that can be used in a website with multiple sellers. Let's say that depending on the purchased item, a given seller/merchant should receive the money. Would that be possible using only one "master merchant" account that would act as a "distributor" of funds for several "sub-merchants"? Does any well-established provider (paypal, worldpay, auth.net, etc.) support this?
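    Whichever gateway is chosen, the splitting logic itself is simple bookkeeping. Below is a minimal sketch in TypeScript (the types, field names and rates are all invented for illustration, independent of any real gateway API) of accumulating each seller's payout from an order, minus the master account's commission:

        // Split a multi-seller order: each line item's proceeds go to its seller,
        // minus the master account's commission. All names here are illustrative.
        interface LineItem {
          sellerId: string;
          priceCents: number; // integer cents to avoid float rounding issues
        }

        function splitFunds(
          items: LineItem[],
          commissionRate: number // e.g. 0.10 for a 10% cut kept by the master account
        ): Map<string, number> {
          const payouts = new Map<string, number>();
          for (const item of items) {
            const fee = Math.round(item.priceCents * commissionRate);
            const net = item.priceCents - fee;
            payouts.set(item.sellerId, (payouts.get(item.sellerId) ?? 0) + net);
          }
          return payouts; // sellerId -> cents owed after commission
        }

        // Example: two sellers in one order, 10% commission.
        const owed = splitFunds(
          [
            { sellerId: "alice", priceCents: 2500 },
            { sellerId: "bob", priceCents: 1000 },
            { sellerId: "alice", priceCents: 500 },
          ],
          0.1
        );
        console.log(owed); // Map { "alice" => 2700, "bob" => 900 }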

    Read the article

  • ADF Desktop Integration Page Now Live on OTN

    - by juan.ruiz
    I’m happy to announce that we have launched the ADF Desktop Integration home page on OTN. This page will centralize all the resources related to desktop integration. As you will notice, we are currently providing a variety of resources to help you understand the technology as well as to improve your overall ADF desktop integration learning experience. Let us know what you think about the page and what additional resources related to ADF desktop integration you would like us to include.

    Read the article

  • New Company Website

    - by Liam McLennan
    For a long time now my company website has been showing its age. It was a dot net nuke monstrosity. Today I have deployed a new website for my company. I hope that it reflects my commitment to quality and minimalism. Please have a look and let me know what you think.

    Read the article

  • Kicking off the ODI12c Blog Series

    - by Madhu Nair
    It is always exciting to talk about a new release, especially one as significant as the newly released Oracle Data Integrator 12c (ODI12c). Why? Because it is packed with features that address many requirements from the user community. If you missed the sneak previews at this year's Oracle Open World sessions, do not despair, because over the coming weeks the ODI12c team of developers and consultants will be sharing their perspective on key features, experiences and best practices for ODI12c right here through a series of blogs. Before diving into feature details in subsequent blogs, it helps to understand the overall themes that went into developing ODI12c.

    Let the Productivity Flow: Let us face it, designing for developer user experience is always top of mind for any enterprise software. ODI12c addresses this through the introduction of declarative flow based mappings (the topic of our next ODI blog, by the way!!). Reusability has been addressed through the introduction of reusable mappings, cutting down development times for repeated logic. An enhanced debugger makes life easy for complex granular debugging scenarios. Unique repository IDs now allow you to manage multiple repositories.

    Performance is Paramount: Another major area of focus for ODI12c is performance. Increased parallelism (like the multiple target table load feature), reduced session overheads and the ability to customize load plans through physical views all empower the user to tune run times for extreme performance. [Screenshots: a mapping showing the multiple target table load, and the physical representation allowing users to choose execution options.]

    Integrating it all: This release is not just about ODI12c as a standalone product. Closer integration with Oracle GoldenGate now brings Change Data Capture (CDC) capabilities into ODI12c. Oracle Warehouse Builder (OWB) jobs can now be executed and monitored from within ODI12c. And ODI12c is fast becoming the de facto standard for Oracle Applications that need data integration in their solutions, the best example being the latest release of the Oracle BI Applications technology.

    Even as we bring you in-depth write-ups about the features, there are some great previews and resources already out there, like this super entry by beta partner Rittman Mead Consulting and this ODI12c Key Features White Paper. You can download ODI12c here (this post helps). The best, though, is the upcoming Executive Webcast featuring customers and executives who have seen and conceived the product. Don't miss it!

    Read the article

  • Saint Louis Days of .NET 2012

    - by James Michael Hare
    Hey all, just a quick note to let you know I'll be one of the speakers at the St. Louis Days of .NET this year. I'll be giving a revamped version of my Little Wonders (going to add some new ones to keep it fresh) and hopefully other presentations as well; the session selection process is ongoing.

    St. Louis Days of .NET is a wonderful conference in the Midwest and a bargain to boot (only $175 if you register before July 1st!). Hope to see you there.

    For more information, visit: http://www.stlouisdayofdotnet.com/2012

    Read the article

  • Welcome to the Oracle Cloud Blog

    - by rex.wang
    Welcome to the new Oracle Cloud blog, the home for all things related to cloud computing, including:

    • Oracle Cloud
    • Private cloud products
    • Managed cloud services

    Here you will find everything from industry perspectives, best practices, product news, customers, events and more. Let’s start with a fun video: watch as a slick SalesDay.com rep gives his best pitch to a wise CIO. Cloud will be a big theme at Oracle OpenWorld this year, so if you're going, here’s a guide to all cloud-related sessions and demos.

    Read the article

  • Fetching Latitude and Longitude Co-ordinates for Addresses using PowerShell

    - by Rob Farley
    Regular readers of my blog (at sqlblog.com – please let me know if you’re reading this elsewhere) may be aware that I’ve been doing more and more with spatial data recently. With the now-available SQL Server 2008 R2 Reporting Services including maps, it’s a topic that interests many people. Interestingly though, although many people have plenty of addresses in their various databases (whether they be CRM systems, HR systems or whatever), my experience shows that many people do not store the latitude...(read more)

    Read the article
