Search Results

Search found 19425 results on 777 pages for 'output clause'.


  • Querying Visual Studio project files using T-SQL and Powershell

    - by jamiet
    Earlier today I had a need to get some information out of a Visual Studio project file, and in this blog post I’m going to share a couple of ways of going about that because I’m pretty sure I won’t be the only person that ever wants to do this. The specific problem I was trying to solve was finding out how many objects in my database project (i.e. in my .dbproj file) had any warnings suppressed, but the techniques discussed below will work pretty well for any Visual Studio project file because every such file is simply an XML document, hence it can be queried by anything that can query XML documents. Ever heard the phrase “when all you’ve got is a hammer everything looks like a nail”? Well that’s me with querying stuff – if I can write SQL then I’m writing SQL. Here’s a little noddy database project I put together for demo purposes: two views and a stored procedure, nothing fancy. I suppressed warnings for [View1] & [Procedure1] and hence the pertinent part of my project file looks like this:

        <ItemGroup>
          <Build Include="Schema Objects\Schemas\dbo\Views\View1.view.sql">
            <SubType>Code</SubType>
            <SuppressWarnings>4151,3276</SuppressWarnings>
          </Build>
          <Build Include="Schema Objects\Schemas\dbo\Views\View2.view.sql">
            <SubType>Code</SubType>
          </Build>
          <Build Include="Schema Objects\Schemas\dbo\Programmability\Stored Procedures\Procedure1.proc.sql">
            <SubType>Code</SubType>
            <SuppressWarnings>4151</SuppressWarnings>
          </Build>
        </ItemGroup>

    Note the <SuppressWarnings> elements – those are the bits of information that I am after. With a lot of help from folks on the SQL Server XML forum I came up with the following query that nailed what I was after. It reads the contents of the .dbproj file into a variable of type XML and then shreds it using T-SQL’s XML data type methods:

        DECLARE @xml XML;
        SELECT @xml = CAST(pkgblob.BulkColumn AS XML)
        FROM   OPENROWSET(BULK 'C:\temp\QueryingProjectFileDemo\QueryingProjectFileDemo.dbproj' -- <-Change this path!
                         ,single_blob) AS pkgblob
        ;
        WITH XMLNAMESPACES('http://schemas.microsoft.com/developer/msbuild/2003' AS ns)
        SELECT  REVERSE(SUBSTRING(REVERSE(ObjectPath),0,CHARINDEX('\',REVERSE(ObjectPath)))) AS [ObjectName]
               ,[SuppressedWarnings]
        FROM   (
               SELECT  build.query('.') AS [_node]
               ,       build.value('ns:SuppressWarnings[1]','nvarchar(100)') AS [SuppressedWarnings]
               ,       build.value('@Include','nvarchar(1000)') AS [ObjectPath]
               FROM    @xml.nodes('//ns:Build[ns:SuppressWarnings]') AS R(build)
               ) q

    And here’s the output.

    And that’s it – an easy way of discovering which warnings have been suppressed, and for which objects, in your database projects. I won’t bother going over the code as it is fairly self-explanatory – peruse it at your leisure.

    Once I had the SQL above I figured I’d share it around a little in case it was ever useful to anyone else; hence I’m writing this blog post and I also posted it on the Visual Studio Database Development Tools forum at FYI: Discover which objects have had warnings suppressed. Luckily Kevin Goode saw the thread and he posted a different solution to the same problem, one that uses Powershell. The advantage of Kevin’s Powershell approach is that it is easy to analyse many .dbproj files at the same time.
    Below is Kevin’s code, which I have tweaked ever so slightly so that it produces the same results as my SQL script (I just want any object that has had a warning suppressed, whereas Kevin was querying specifically for warning 4151):

        cd 'C:\Temp\QueryingProjectFileDemo\'
        cls
        $projects = ls -r -i *.dbproj
        Foreach($project in $projects)
        {
            $xml = new-object System.Xml.XmlDocument
            $xml.set_PreserveWhiteSpace( $true )
            $xml.Load($project)
            #$xpath = @{Start="/e:Project/e:ItemGroup/e:Build[e:SuppressWarnings=4151]/@Include"}
            #$xpath = @{Start="/e:Project/e:ItemGroup/e:Build[contains(e:SuppressWarnings,'4151')]/@Include"}
            $xpath = @{Start="/e:Project/e:ItemGroup/e:Build[e:SuppressWarnings]/@Include"}
            $ns = @{ e = "http://schemas.microsoft.com/developer/msbuild/2003" }
            $xml | Select-Xml -XPath $xpath.Start -Namespace $ns | Select -Expand Node | Select -expand Value
        }

    And here’s the output.

    Nice reusable Powershell and SQL scripts – not bad for an evening’s work. Thank you to Kevin for allowing me to share his code. Don’t forget that these techniques can easily be adapted to query any Visual Studio project file; they’re only XML documents after all! Doubtless many people out there already have code for doing this, but nonetheless here is another offering to the great script library in the sky. Have fun! @Jamiet


  • Philips SAA7130 TV tuner card not working

    - by Gaurav Butola
    When I had Windows 7 installed on my computer, my TV tuner card used to work fine after installing the drivers, but on Ubuntu it is not working. I have tried several programs to get it working but none helped. Today I installed Me TV and when I open it, I get an error saying "There are no DVB devices available". What can I do to get my Philips TV tuner card working? I have a PCI card and here is the output of the lspci command:

        04:00.0 Multimedia controller: Philips Semiconductors SAA7130 Video Broadcast Decoder (rev 01)
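
    An aside (not from the original question, and worth verifying): the saa7134 kernel module is the driver that covers the SAA7130 chip, and some boards additionally need a card= option (see CARDLIST.saa7134 in the kernel documentation). Note also that the SAA7130 is an analog capture decoder, so a DVB-only viewer such as Me TV will report "There are no DVB devices available" even when the driver loads correctly; an analog-capable app (tvtime, for example) is a better test. A minimal check might be:

        sudo modprobe saa7134
        dmesg | grep -i saa713   # the card should be detected and named here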


  • How to create scripts that create other scripts

    - by sfrj
    I am writing a script that needs to generate another script that will be used to shut down an appserver... This is how my code looks:

        echo "STEP 8: CREATE STOP SCRIPT"
        stopScriptContent="echo \"STOPING GLASSFISH PLEASE WAIT...\"\n
        cd glassfish4/bin\n
        chmod +x asadmin\n
        ./asadmin stop-domain\n
        #In order to work it is required that the original folder of glassfish don't contain already any
        #project, otherwise, there will be a conflict\n"
        ${stopScriptContent} > stop.sh
        chmod +x stop.sh

    But it is not being created correctly; this is how the output stop.sh looks:

        "STOPING GLASSFISH PLEASE WAIT..."\n
        cd glassfish4/bin\n
        chmod +x asadmin\n
        ./asadmin stop-domain\n
        #In order to work it is required that the original folder of glassfish don't contain already any
        #project, otherwise, there will be a conflict\n

    As you see, lots of things are wrong:

    - there is no echo command
    - it is taking the \n literally, so there is no new line

    My doubts are:

    - What is the correct way of making an .sh script create another .sh script?
    - What do you think I am doing wrong?
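
    As an aside (a sketch of one common approach, not from the original question): the script above fails because ${stopScriptContent} > stop.sh executes the variable's first word as a command instead of printing it, and because plain echo does not interpret \n escapes. A quoted heredoc avoids both issues by writing the lines out literally:

        cat > stop.sh <<'EOF'
        #!/bin/sh
        echo "STOPPING GLASSFISH PLEASE WAIT..."
        # the original glassfish folder must not already contain any project,
        # otherwise there will be a conflict
        cd glassfish4/bin
        chmod +x asadmin
        ./asadmin stop-domain
        EOF
        chmod +x stop.sh
        # (when used verbatim, the closing EOF delimiter must start at column 0)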


  • perl scripts stdin/pipe reading problem [closed]

    - by user4541
    I have 2 scripts for a task. The 1st outputs lines of data (terminated with CR/LF) to STDOUT now and then. The 2nd keeps reading data from STDIN for further processing in the following way:

        use strict;
        my $dataline;
        while(1) {
            $dataline = "";
            $dataline = <STDIN>;
            until( $dataline ne "" ) {
                sleep(1);
                $dataline = <STDIN>;
            }
            # further processing with a non-empty data line follows
            #
        }
        print "quitting...\n";

    I redirect the output from the 1st to the 2nd using a pipe, as in: perl scpt1 | perl scpt2. But the problem I'm having with these 2 scripts is that the 2nd script seems only ever to get the initial load of lines of data from the 1st script, and after that no data any more. Wonder if anybody having similar issues can kindly help a bit? Thanks.
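
    A sketch of the usual remedy (mine, not from the original question): output to a pipe is block-buffered by default, so the reader sees the writer's output only in large, delayed chunks. Enabling autoflush in the first script makes each line arrive as soon as it is printed:

        # producer side (hypothetical payload, illustration only)
        use strict;
        use warnings;

        $| = 1;                        # autoflush: flush STDOUT after every print
        while (1) {
            print "a line of data\n";
            sleep 1;
        }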


  • Can't log in to Unity, always logs in to Unity 2D

    - by Goddard
    I select Ubuntu on login and it always loads Unity 2D. I ran /usr/lib/nux/unity_support_test -p and got this error:

        X Error of failed request:  BadWindow (invalid Window parameter)
          Major opcode of failed request:  137 (NV-GLX)
          Minor opcode of failed request:  4 ()
          Resource id in failed request:  0x21f
          Serial number of failed request:  42
          Current serial number in output stream:  42

    I'm using 12.04 with all the latest updates.

        $ nvidia-installer --version
        nvidia-installer:  version 295.53  ([email protected])  Sat May 12 00:34:26 PDT 2012
        The NVIDIA Software Installer for Unix/Linux.

        This program is used to install, upgrade and uninstall The NVIDIA Accelerated
        Graphics Driver Set for Linux-x86_64.

        Copyright (C) 2003 - 2010 NVIDIA Corporation.


  • Allowing Access to HttpContext in WCF REST Services

    - by Rick Strahl
    If you’re building WCF REST Services you may find that WCF’s OperationContext, which provides some amount of access to Http headers on inbound and outbound messages, is pretty limited in that it doesn’t provide access to everything, and sometimes not in a convenient manner. For example, accessing query string parameters explicitly is pretty painful:

        [OperationContract]
        [WebGet]
        public string HelloWorld()
        {
            var properties = OperationContext.Current.IncomingMessageProperties;
            var property = properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
            string queryString = property.QueryString;
            var name = StringUtils.GetUrlEncodedKey(queryString, "Name");
            return "Hello World " + name;
        }

    And that doesn’t account for the logic in GetUrlEncodedKey to retrieve the querystring value. It’s a heck of a lot easier to just do this:

        [OperationContract]
        [WebGet]
        public string HelloWorld()
        {
            var name = HttpContext.Current.Request.QueryString["Name"] ?? string.Empty;
            return "Hello World " + name;
        }

    Ok, so if you follow the REST guidelines for WCF REST you shouldn’t have to rely on reading query string parameters manually but instead rely on routing logic, but you know what: WCF REST is a PITA anyway and anything to make things a little easier is welcome. To enable the second scenario there are a couple of steps that you have to take on your service implementation and the configuration file.

    Add aspNetCompatibilityEnabled in web.config

    First you need to configure the hosting environment to support ASP.NET when running WCF Service requests. This ensures that the ASP.NET pipeline is fired up and configured for every incoming request.

        <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true"
                                       multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    Mark up your Service Implementation with the AspNetCompatibilityRequirements Attribute

    Next you have to mark up the Service Implementation – not the contract if you’re using a separate interface!!! – with the AspNetCompatibilityRequirements attribute:

        [ServiceContract(Namespace = "RateTestService")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class RestRateTestProxyService

    Typically you’ll want to use Allowed as the preferred option. The other options are NotAllowed and Required. Allowed will let the service run if the web.config attribute is not set; Required has to have it set. All these settings determine whether an ASP.NET host AppDomain is used for requests. Once Allowed or Required has been set on the implemented class you can make use of the ASP.NET HttpContext object. When I allow for ASP.NET compatibility in my WCF services I typically add a property that exposes the Context and Request objects a little more conveniently:

        public HttpContext Context
        {
            get { return HttpContext.Current; }
        }

        public HttpRequest Request
        {
            get { return HttpContext.Current.Request; }
        }

    While you can also access the Response object, write raw data to it and manipulate headers, THAT is probably not such a good idea as both your code and WCF will end up writing into the output stream. However it might be useful in some situations where you need to take over output generation completely and return something completely custom. Remember though that WCF REST DOES actually support that as well with Stream responses, which essentially allow you to return any kind of data to the client, so using Response should really never be necessary.

    Should you or shouldn’t you?
    WCF purists will tell you never to muck with the platform-specific features or the underlying protocol, and if you can avoid it you definitely should. Querystring management in particular can be handled largely with Url Routing, but there are exceptions of course. Try to use what WCF natively provides – if possible – as it makes the code more portable. For example, if you do enable ASP.NET Compatibility you won’t be able to self-host a WCF REST service. At the same time, realize that especially in WCF REST there are a number of big holes, and access to some features is a royal pain, so it’s not unreasonable to access the HttpContext directly, especially if it’s only for read-only access. Since everything in REST works off URLs and the HTTP protocol, more control and easier access to HTTP features is a key requirement for building flexible services. It looks like vNext of the WCF REST stuff will feature many improvements along these lines, with much deeper native HTTP support that is often so useful in REST applications, along with much more extensibility that allows for customization of the inputs and outputs as data goes through the request pipeline. I’m looking forward to this stuff, as WCF REST as it exists today is still a royal pain (in fact I’m struggling with a mysterious version conflict/crashing error on my machine that I have not been able to resolve – grrrr…).

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET  AJAX  WCF


  • SSD temperature sensor readout with hddtemp

    - by Dande Un
    It seems hddtemp cannot detect the temperature sensor of my SSD (Samsung EVO 840) properly. This is the bash output when running hddtemp:

        WARNING: Drive /dev/sda doesn't seem to have a temperature sensor.
        WARNING: This doesn't mean it hasn't got one.
        WARNING: If you are sure it has one, please contact me ([email protected]).
        WARNING: See --help, --debug and --drivebase options.
        /dev/sda: Samsung SSD 840 EVO 120GB: no sensor

    I looked in the most recent .db file posted at http://nongnu.mirrors.hostinginnederland.nl//hddtemp/hddtemp.db, but it doesn't seem to list any SSD drives at all. Was anyone able to read out the temp sensor of an SSD with hddtemp?


  • In C, what is the difference between NULL and a newline character? Guys help please [migrated]

    - by Siddhartha Gurjala
    What's the conceptual difference and similarity between NULL and a newline character, i.e. between '\0' and '\n'? Explain their relevance for both integer and character data type variables and arrays. For reference, here are example snippets of a program to read and write a 2D char array.

    PROGRAM CODE 1:

        #include<stdio.h> // added: required for printf/scanf

        int main()
        {
            char sort(),stuname(),swap(),(*p)(),(*q)();
            int n;
            p=stuname;
            q=swap;
            printf("Let the number of students in the class be \n");
            scanf("%d",&n);
            fflush(stdin);
            sort(p,q,n);
            return 0;
        }

        char sort(p1,q1,n1)
        char (*p1)(),(*q1)();
        int n1;
        {
            (*p1)(n1);
            (*q1)();
        }

        char stuname(int nos) // number of students
        {
            char name[nos][256];
            int i,j;
            printf("Reading names of %d students started--->\n\n",nos);
            name[0][0]='k'; // initialising as non-NULL character
            for(i=0;i<nos;i++) // nos = number of students
            {
                printf("Give name of student %d\n",i);
                for(j=0;j<256;j++)
                {
                    scanf("%c",&name[i][j]);
                    if(name[i][j]=='\n')
                    {
                        name[i][j]='\0';
                        j=257;
                    }
                }
            }
            printf("\n\nWriting student names:\n\n");
            for(i=0;i<nos;i++)
            {
                for(j=0;j<256&&name[i][j]!='\0';j++)
                {
                    printf("%c",name[i][j]);
                }
                printf("\n");
            }
        }

        char swap()
        {
            printf("Will swap shortly after getting clarity on scanf and %c");
        }

    The above code works well, whereas the same logic given with a slight difference does not give the appropriate output. PROGRAM CODE 2 is identical to code 1 except that the reading loop in stuname becomes:

        for(j=0;j<256&&name[i][j]!='\0';j++) // <-- changed loop condition
        {
            scanf("%c",&name[i][j]);
            /*if(name[i][j]=='\n')
            {
                name[i][j]='\0';
                j=257;
            }*/
        }

    PROGRAM CODE 3 is one more instance of the same program that does not give proper output; again only the reading loop in stuname differs, plus one extra line after it:

        for(j=0;j<256&&name[i][j]!='\n';j++) // <-- changed loop condition
        {
            scanf("%c",&name[i][j]);
            /*if(name[i][j]=='\n')
            {
                name[i][j]='\0';
                j=257;
            }*/
        }
        name[i][i]='\0'; // note: index [i][i], as posted

    Why do program code 2 and program code 3 not work as expected like code 1?
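
    For the underlying concept, a minimal sketch (illustration only, not part of the original question): '\0' is simply the character whose value is 0, and it is what marks the end of a C string; '\n' has value 10 and is an ordinary character that happens to mean "newline":

        #include <stdio.h>

        int main(void)
        {
            char s[] = "hi\n";   /* stored in memory as 'h', 'i', '\n', '\0' */
            printf("%zu %d %d\n", sizeof s, s[2], s[3]);   /* prints: 4 10 0 */
            return 0;
        }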


  • Write your Tests in RSpec with IronRuby

    - by kazimanzurrashid
    [Note: This is not a continuation of my previous post; treat it as an experiment out in the wild.]

    Let's consider the following class, a fictitious Fund Transfer Service:

        public class FundTransferService : IFundTransferService
        {
            private readonly ICurrencyConvertionService currencyConvertionService;

            public FundTransferService(ICurrencyConvertionService currencyConvertionService)
            {
                this.currencyConvertionService = currencyConvertionService;
            }

            public void Transfer(Account fromAccount, Account toAccount, decimal amount)
            {
                decimal convertionRate = currencyConvertionService.GetConvertionRate(fromAccount.Currency, toAccount.Currency);
                decimal convertedAmount = convertionRate * amount;

                fromAccount.Withdraw(amount);
                toAccount.Deposit(convertedAmount);
            }
        }

        public class Account
        {
            public Account(string currency, decimal balance)
            {
                Currency = currency;
                Balance = balance;
            }

            public string Currency { get; private set; }
            public decimal Balance { get; private set; }

            public void Deposit(decimal amount)
            {
                Balance += amount;
            }

            public void Withdraw(decimal amount)
            {
                Balance -= amount;
            }
        }

    We can write the spec with MSpec + Moq like the following:

        public class When_fund_is_transferred
        {
            const decimal ConvertionRate = 1.029m;
            const decimal TransferAmount = 10.0m;
            const decimal InitialBalance = 100.0m;

            static Account fromAccount;
            static Account toAccount;
            static FundTransferService fundTransferService;

            Establish context = () =>
            {
                fromAccount = new Account("USD", InitialBalance);
                toAccount = new Account("CAD", InitialBalance);

                var currencyConvertionService = new Moq.Mock<ICurrencyConvertionService>();
                currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate);

                fundTransferService = new FundTransferService(currencyConvertionService.Object);
            };

            Because of = () =>
            {
                fundTransferService.Transfer(fromAccount, toAccount, TransferAmount);
            };

            It should_decrease_from_account_balance = () =>
            {
                fromAccount.Balance.ShouldBeLessThan(InitialBalance);
            };

            It should_increase_to_account_balance = () =>
            {
                toAccount.Balance.ShouldBeGreaterThan(InitialBalance);
            };
        }

    and if you run the spec it will give you a nice little output like the following:

        When fund is transferred
        » should decrease from account balance
        » should increase to account balance

        2 passed, 0 failed, 0 skipped, took 1.14 seconds (MSpec).

    Now, let's see how we can write the exact same spec in RSpec.
        require File.dirname(__FILE__) + "/../FundTransfer/bin/Debug/FundTransfer"
        require "spec"
        require "caricature"

        describe "When fund is transferred" do
          Convertion_Rate = 1.029
          Transfer_Amount = 10.0
          Initial_Balance = 100.0

          before(:all) do
            @from_account = FundTransfer::Account.new("USD", Initial_Balance)
            @to_account = FundTransfer::Account.new("CAD", Initial_Balance)

            currency_convertion_service = Caricature::Isolation.for(FundTransfer::ICurrencyConvertionService)
            currency_convertion_service.when_receiving(:get_convertion_rate).with(:any, :any).return(Convertion_Rate)

            fund_transfer_service = FundTransfer::FundTransferService.new(currency_convertion_service)
            fund_transfer_service.transfer(@from_account, @to_account, Transfer_Amount)
          end

          it "should decrease from account balance" do
            @from_account.balance.should be < Initial_Balance
          end

          it "should increase to account balance" do
            @to_account.balance.should be > Initial_Balance
          end
        end

    I think the above code is self-explanatory. Treat the require statements at the top as the "add reference" step of our Visual Studio projects; we are adding all the required libraries with them. Next comes describe, which is an RSpec keyword. The before block does exactly the same as NUnit's Setup or MsTest's TestInitialize attribute, but above we are using before(:all), which acts like MsTest's ClassInitialize: it will be executed only once, before all the test methods. In before(:all) we first instantiate the from and to accounts, which is the same as creating them with the full name (including namespace), like fromAccount = new FundTransfer.Account(.., ..). Next we create a mock object of ICurrencyConvertionService. Note that for creating the mock we are not using Moq like the MSpec version. This is a somewhat interesting issue of IronRuby, or maybe the DLR: it seems that it is not possible to use the lambda expressions that most mocking tools use in the arrange phase in IronRuby, like:

        currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate);

    But the good news is, there is already an excellent mocking tool called Caricature, written completely in IronRuby, which we can use to mock the .NET classes. Maybe all the mocking tool providers should give some thought to adding support for the DLR, so that we can use the tools that we are already familiar with. I think the rest of the code is too simple to need explanation, so I am skipping it. Now, the last thing: how are we going to run it with RSpec? Let's first install the required gems. Open your command prompt and type the following:

        igem sources -a http://gems.github.com

    This will add GitHub as a gem source. Next type:

        igem install uuidtools caricature rspec

    and at last we have to create a batch file so that we can execute it from Notepad++. Create a batch file in the IronRuby bin directory like in my previous post and put the following in it:

        @echo off
        cls
        call spec %1 --format specdoc
        pause

    Next, add a run menu and shortcut in Notepad++ as in my previous post. Now when we run it, it will show the following output:

        When fund is transferred
        - should decrease from account balance
        - should increase to account balance

        Finished in 0.332042 seconds

        2 examples, 0 failures
        Press any key to continue . . .

    You will find the complete code of this post at the bottom. That's it for today. Download: RSpecIntegration.zip


  • nginx PPA does not work?

    - by Peter Smit
    I want to use the newest version of nginx, so I wanted to add the nginx/stable PPA:

        sudo add-apt-repository ppa:nginx/stable
        sudo apt-get update

    However, the upgrade command says that there are no upgrades available and nginx is still the old version. Did I do something wrong? I use Ubuntu Server 10.04 Lucid. add-apt-repository output:

        $ sudo apt-add-repository ppa:nginx/stable
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv 8B3981E7A6852F782CC4951600A6F0A3C300EE8C
        gpg: requesting key C300EE8C from hkp server keyserver.ubuntu.com
        gpg: key C300EE8C: "Launchpad Stable" not changed
        gpg: Total number processed: 1
        gpg:              unchanged: 1

    apt-cache policy output:

        $ sudo apt-cache policy nginx
        nginx:
          Installed: 0.7.65-1ubuntu2
          Candidate: 0.7.65-1ubuntu2
          Version table:
         *** 0.7.65-1ubuntu2 0
                500 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu/ lucid/universe Packages
                100 /var/lib/dpkg/status


  • what is wrong with this easy script

    - by alex
    What is wrong with this easy script? I just want to write a script which changes my directory.

    A. I put the commands below in a file whose name is pathABC, in the /home/alex directory:

        #!/bin/sh
        cd /home/alex/Documents/A/B/C
        echo HelloWorld

    B. I also did chmod +x pathABC.

    In the terminal, while in the /home/alex directory, I run ./pathABC. But the output is just HelloWorld and the current directory remains unchanged. I mean, my directory remains /home/alex and does not go to /home/alex/Documents/A/B/C. So where is it wrong?
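
    For context (an aside, not part of the original question): ./pathABC runs the script in a child shell, so its cd changes only that child's working directory; when the script exits, the parent shell is back where it started. Sourcing the script runs its commands in the current shell, so the cd sticks. A minimal sketch:

        . ./pathABC     # or equivalently: source ./pathABC
        pwd             # now prints /home/alex/Documents/A/B/C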


  • How to create list of installed packages for remove after testing?

    - by Wolf F.
    I'd like to test kmymoney. When trying to install it there are a lot of KDE packages that are needed by this program. That's ok; I'm using Unity and there are no KDE packages installed at this moment. So, when I want to remove all these packages after testing kmymoney, how can I do that?

        sudo apt-get install kmymoney >> /some/folder/kmymoney.txt

    gives me the output of apt-get, but that's not what I'm looking for. Is there a way to remove these packages properly? Thanx in advance, W.
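
    A sketch of the usual approach (mine, not from the original question): apt marks the dependencies it pulls in as "automatically installed", so removing the package itself and then running autoremove clears out the KDE libraries that nothing else still needs:

        sudo apt-get remove kmymoney
        sudo apt-get autoremove   # removes automatically installed packages no longer needed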


  • Update/Insert With ADF Web Service Data Control

    - by shay.shmeltzer
    The Web service data control (WSDC) in ADF is a powerful feature that allows you to easily build a UI on top of WS interfaces exposed by other systems. However, when you drag a WSDC to a page you usually get a set of output components where the data is shown. So how would you actually do an update operation on those values? The answer is that you need a call to another method in your WSDC that does the update - but what if you want to pass to it the actual values that you get from the get method you invoked before? Here is a demo showing how to do that. The two tricks that are shown here are:

    - Changing the properties of items in the DC to be updateable - this gives you inputText fields instead of outputText fields.
    - Passing the currentRow.dataProvider to the update method (and choosing the right iterator for this).


  • Code Trivia #6

    - by João Angelo
    It’s time for yet another code trivia and it’s business as usual. What will the following program output to the console?

        using System;
        using System.Drawing;
        using System.Threading;

        class Program
        {
            [ThreadStatic]
            static Point Mark = new Point(1, 1);

            static void Main()
            {
                Thread.CurrentThread.Name = "A";
                MoveMarkUp();

                var helperThread = new Thread(MoveMarkUp) { Name = "B" };
                helperThread.Start();
                helperThread.Join();
            }

            static void MoveMarkUp()
            {
                Mark.Y++;
                Console.WriteLine("{0}:{1}", Thread.CurrentThread.Name, Mark);
            }
        }


  • Second display running off laptop VGA not correctly positioned (offset left and up)

    - by Filthy Pazuzu
    I have black bars on the right and bottom of my display! I'm running my laptop's VGA output to an external monitor that gets recognized as a 17" display. It is 20", but the assumed 3" difference does not account for the incorrect position. Everything displays fine; it runs high-res video beautifully. But it's 3/4" offset left and an unreasonably annoying 1/4" offset up. I've tried going through the display's annoying & useless menu, but it doesn't have any way to adjust the position. I'm certainly no Linux newbie, but on this Ubuntu (Pangolin, BTW) I can't figure out how to make simple positional display changes. It's not only frustrating, it's a bit humiliating! So: does anyone know of an app that will allow me to make basic display position alterations? ("App" - annoyingly trendy, but a useful word - no matter how grating.) Thanks, & Cheers, Paz


  • Dynamic Bursting ... no really!

    - by Tim Dexter
    If any of you have seen me or my colleagues present BI Publisher to you then we have hopefully mentioned 'bursting.' You may have even seen a demo where we talk about being able to take a batch of data, say invoices; split them by some criteria, say customer id; format them with a template; generate the output and then deliver the documents to the recipients with a click. We, and especially I, always say this can be completely dynamic! By this I mean that you could store customer preferences in a database: what layout each customer would like, what output format they would like and how they would like the document delivered. We (I) talk a good talk, but typically don't do the walk in a demo. We hard code everything in the bursting query or bursting control file to get the concept across. But no more, peeps! I have finally put together a dynamic bursting demo! It's been minutes in the making but it's been tough to find those minutes! Read on ...

    It's nothing amazing in terms of making the burst dynamic. I created a CUSTOMER_PREFS table with some simple UI in an APEX application so that I can maintain customer requirements. In EBS you have descriptive flexfields that could do the same thing, or probably even 'contact' fields to store most of the info. Here's my table structure:

        Name                           Type
        ------------------------------ --------
        CUSTOMER_ID                    NUMBER(6)
        TEMPLATE_TYPE                  VARCHAR2(20)
        TEMPLATE_NAME                  VARCHAR2(120)
        OUTPUT_FORMAT                  VARCHAR2(20)
        DELIVERY_CHANNEL               VARCHAR2(50)
        EMAIL                          VARCHAR2(255)
        FAX                            VARCHAR2(20)
        ATTACH                         VARCHAR2(20)
        FILE_LOC                       VARCHAR2(255)

    Simple enough, right? We just need CUSTOMER_ID as the key for the bursting engine to join it to the customer data at burst time. I have not covered the full delivery options, just email, fax and file location. Remember, it's a demo, people :0) However the principle is exactly the same for each delivery type: they each have a set of attributes that need to be provided, and you will need to handle that in your bursting query. On a side note, in EBS you use a bursting control file; you can apply the same principles I'm laying out here, you just need to get the customer bursting info into the XML data stream so that you can refer to it in the control file using XPATH expressions. Next, we need to look up what attributes or parameters are required for each delivery method. That can be found in the documentation here.
    Now that we know the combinations of parameters and delivery methods, we can construct the query using a series of decode statements:

        select distinct cp.customer_id "KEY",
               cp.template_name TEMPLATE,
               cp.template_type TEMPLATE_FORMAT,
               'en-US' LOCALE,
               cp.output_format OUTPUT_FORMAT,
               'false' SAVE_FORMAT,
               cp.delivery_channel DEL_CHANNEL,
               decode(cp.delivery_channel,'FILE', cp.file_loc
                                         ,'EMAIL', cp.email
                                         ,'FAX', cp.fax) PARAMETER1,
               decode(cp.delivery_channel,'FILE', c.cust_last_name||'_orders.pdf'
                                         ,'EMAIL','[email protected]'
                                         ,'FAX', 'faxserver.com') PARAMETER2,
               decode(cp.delivery_channel,'FILE',NULL
                                         ,'EMAIL','[email protected]'
                                         ,'FAX', null) PARAMETER3,
               decode(cp.delivery_channel,'FILE',NULL
                                         ,'EMAIL','Your current orders'
                                         ,'FAX',NULL) PARAMETER4,
               decode(cp.delivery_channel,'FILE',NULL
                                         ,'EMAIL','Please find attached a copy of your current orders with BI Publisher, Inc'
                                         ,'FAX',NULL) PARAMETER5,
               decode(cp.delivery_channel,'FILE',NULL
                                         ,'EMAIL','false'
                                         ,'FAX',NULL) PARAMETER6,
               decode(cp.delivery_channel,'FILE',NULL
                                         ,'EMAIL','[email protected]'
                                         ,'FAX',NULL) PARAMETER7
        from cust_prefs cp, customers c, orders_view ov
        where cp.customer_id = c.customer_id
        and cp.customer_id = ov.customer_id
        order by cp.customer_id

    Pretty straightforward; you just need to test, test, test the query and ensure it's bringing back the correct data based on each customer's preferences. Notice the NULL values for parameters that are not relevant for a given delivery channel. You should end up with bursting control data that the bursting engine can use. Now your users can run the burst, and documents will be formatted, generated and delivered based on the customer prefs.

    If you're interested in the example, I have used the sample OE schema data for the base report. The report files and CUST_PREFS table are zipped up here. The zip contains the data model (.xdmz), the report and templates (.xdoz) and the sql scripts to create and load data to the CUST_PREFS table. Once you load the report into the catalog, you'll need to create the OE data connection and point the data model at it. You'll probably need to re-point the report to the data model too. Happy Bursting!


  • SQL SERVER – Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) and a previous row (for LAG) in the same result set without the use of a self-join. It would be very difficult to explain this in words, so I will attempt a small example to explain these functions. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiments. Let us run the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               LEAD(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LeadValue,
               LAG(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LagValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    When we look at the resultset it is very clear that the LEAD function gives us the value that is going to come on the next row, and the LAG function gives us the value that was encountered on the previous row. If we had to generate the same result without these functions, we would have to use a self-join. Let us explore these functions a bit more. They not only provide the previous or next row; they can also access any row before or after the current one using an offset. Let us run the following query, where the LEAD and LAG functions access rows with an offset of 2:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               LEAD(SalesOrderDetailID, 2) OVER (ORDER BY SalesOrderDetailID) LeadValue,
               LAG(SalesOrderDetailID, 2) OVER (ORDER BY SalesOrderDetailID) LagValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    You can see the LEAD and LAG functions now have an interval of two rows when they return results. Because of that interval, the last two rows for the LEAD function and the first two rows for the LAG function will return a NULL value (there is no row two positions ahead of the end, or two positions before the start). You can easily replace this NULL value with any default value by passing a third parameter to the LEAD and LAG functions. Let us run the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               LEAD(SalesOrderDetailID, 2, 0) OVER (ORDER BY SalesOrderDetailID) LeadValue,
               LAG(SalesOrderDetailID, 2, 0) OVER (ORDER BY SalesOrderDetailID) LagValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query gives the same result, but with the NULLs replaced by the value 0. Just like any other analytic function, we can easily partition these functions as well. Let us see the use of PARTITION BY with them:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               LEAD(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LeadValue,
               LAG(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LagValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    Here the data is partitioned by SalesOrderID, and the LEAD and LAG functions return the appropriate result within each window. As there are now smaller partitions in the query, you will see a higher presence of NULLs.
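
    As a quick preview of that self-join comparison (a sketch of mine, not from the original post), the first query above can be emulated on earlier versions of SQL Server by numbering the rows with ROW_NUMBER() and self-joining adjacent row numbers; the LEFT JOINs keep the boundary rows with NULLs, matching the LEAD/LAG defaults:

        USE AdventureWorks
        GO
        ;WITH numbered AS (
            SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
                   ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS rn
            FROM Sales.SalesOrderDetail s
            WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        )
        SELECT cur.SalesOrderID, cur.SalesOrderDetailID, cur.OrderQty,
               nxt.SalesOrderDetailID AS LeadValue,
               prv.SalesOrderDetailID AS LagValue
        FROM numbered cur
        LEFT JOIN numbered nxt ON nxt.rn = cur.rn + 1  -- the "lead" row
        LEFT JOIN numbered prv ON prv.rn = cur.rn - 1  -- the "lag" row
        ORDER BY cur.SalesOrderID, cur.SalesOrderDetailID, cur.OrderQty
        GO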
    In a future blog post we will look at how these functions compare with the self-join approach in more depth. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • DragonRise USB Gamepad not working

    - by Gaurav Butola
    I have a gamepad which is not working. I say "not working" because I was playing Urban Terror and the game was not responding to the gamepad button presses. How do I get the gamepad to work? I tried it in some other games (Torcs, SuperTuxKart, Enemy Territory...) but, same thing, there is no response to any of the gamepad button presses. Here is the output of lsusb:

        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 003: ID 0079:0011 DragonRise Inc.
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Device 003 on the third line is my gamepad.
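
    A common first check (an aside, not from the original question) is whether the kernel exposes the pad as a joystick device at all, independently of any game:

        sudo apt-get install joystick   # provides the jstest utility
        jstest /dev/input/js0           # axis/button readings should change as you press buttons

    If no /dev/input/js* device exists, the problem is at the driver level rather than in the games.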


  • Rails noob - How to work on data stored in models

    - by Raghav Kanwal
    I'm a beginner at Ruby and Rails, and I have made a couple of applications like a microposts clone and a todo list for starters, but I'm starting work on another project. I've got 2 models - user and tracker. You log in via the username, which is authenticated, and you can log data which is stored in the tracker table. The tracker has a column named "calories" and I would like Rails to sum all of the values entered if they are on the same date, and output that sum subtracted from, say, 3000 in a new statement after the display of the model. I know what I'm talking about is just Ruby code, I'm just not sure how to incorporate it. :( Could someone please guide me through this? And also link me to some guides/tutorials which teach working on data from models? Thank you :)
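
    A minimal sketch of the kind of model code being asked about (illustration only; the class, column and timestamp names are assumptions, adjust them to the actual schema):

        # app/models/tracker.rb
        class Tracker < ActiveRecord::Base
          # total calories logged on a given date
          def self.calories_on(date)
            where(:created_at => date.beginning_of_day..date.end_of_day).sum(:calories)
          end
        end

        # wherever the result is needed, e.g. in a controller:
        remaining = 3000 - Tracker.calories_on(Date.today)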


  • Web Site Performance and Assembly Versioning – Part 2 Versioning Combined Files Using Subversion

    - by capgpilk
    Ok, so it took a while to post this second part. Many apologies; we had a big roll-out of a new platform at work and many things had to get sidelined. So this is the second part in a short series on website performance and using versioning to help improve it:

    - Minification and Concatenation of JavaScript and CSS Files
    - Versioning Combined Files Using Subversion – this post
    - Versioning Combined Files Using Mercurial – published shortly

    In the previous post we used AjaxMin to shrink js and css files, then concatenated them into one file each, named site-script.combined.min.js and site-style.combined.min.css. These file names are fine, but you can configure IIS 7 to cache these static files and so lower the amount of data transferred between server and client. This is done by editing the response headers in IIS.

    1. In IIS7 Manager, choose the directory where these files are located and select HTTP Response Headers.
    2. Check Expire Web Content and set a time period well into the future.
    3. When refreshing the web page, the server will respond with HTTP 304, forcing the browser to retrieve the file from its cache.
    4. As can be seen in FireBug, the Cache-Control header has a max age of 31536000 seconds, which equates to 365 days.

    The server will always send this HTTP 304 message unless the file changes, which forces it to send new content. To help force this we can change the file name based on the latest build, using the SVN revision number in the filename. So we have lowered data transfer on content that hasn't changed, but forced it to be sent when you have made a change to the css or js files. Now to get the SVN revision number into the file name.

    1. Import the MSBuildCommunityTasks targets, which can be downloaded from here.

        <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

    2. Edit the BeforeBuild target to call out to svn and get the latest revision:

        <SvnVersion LocalPath="$(MSBuildProjectDirectory)"
                    ToolPath="$(ProgramFiles)\VisualSVN Server\bin">
          <Output TaskParameter="Revision" PropertyName="Revision" />
        </SvnVersion>

    3. Set it to update the project AssemblyInfo.cs file with the svn revision:

        <FileUpdate Files="Properties\AssemblyInfo.cs"
                    Regex="(\d+)\.(\d+)\.(\d+)\.(\d+)"
                    ReplacementText="$1.$2.$3.$(Revision)" />

    4. Now edit the AfterBuild target to get the full dll version. You could combine these two steps and just get the version from svn; I am working on one project that updates the AssemblyInfo file and another that allows manual editing of the file but needs that version within the file name, so I just combined the two for this post.

        <MSBuild.ExtensionPack.Framework.Assembly TaskAction="GetInfo"
                                                  NetAssembly="$(OutputPath)\mydll.dll">
          <Output TaskParameter="OutputItems" ItemName="Info" />
        </MSBuild.ExtensionPack.Framework.Assembly>
        <Message Text="Version: %(Info.AssemblyVersion)" Importance="High" />

    5. Use this Info.AssemblyVersion to write out the combined css and js files as described in the last post:

        <WriteLinesToFile File="Scripts\site-%(Info.AssemblyVersion).combined.min.js"
                          Lines="@(JSLinesSite)" Overwrite="true" />

    In the next post I will cover doing the same, but for a Mercurial repository.


  • Can't get lm-sensors to load ATI Radeon temp or fan

    - by woody
    New to Linux and having minor issues :/ . I followed this guide initially but did not receive the proper output, and it did not show my ATI Radeon HD 5000 temp or fan speed. Then I used this guide; the same problems were exhibited. No issues installing and no errors. I think it's not reading i2c for some reason. The proprietary driver is installed and functioning correctly according to fglrxinfo. I can use aticonfig commands and view both temp and fan. Any ideas on how to get them working under 'sensors'? When I run 'sudo sensors-detect' this is my output:

        # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
        # System: LENOVO IdeaPad Y560 (laptop)
        # Board: Lenovo KL3

        This program will help you determine which kernel modules you need
        to load to use lm_sensors most effectively. It is generally safe
        and recommended to accept the default answers to all questions,
        unless you know what you're doing.

        Some south bridges, CPUs or memory controllers contain embedded sensors.
        Do you want to scan for them? This is totally safe. (YES/no): y
        Silicon Integrated Systems SIS5595...                       No
        VIA VT82C686 Integrated Sensors...                          No
        VIA VT8231 Integrated Sensors...                            No
        AMD K8 thermal sensors...                                   No
        AMD Family 10h thermal sensors...                           No
        AMD Family 11h thermal sensors...                           No
        AMD Family 12h and 14h thermal sensors...                   No
        AMD Family 15h thermal sensors...                           No
        AMD Family 15h power sensors...                             No
        Intel digital thermal sensor...                             Success!
            (driver `coretemp')
        Intel AMB FB-DIMM thermal sensor...                         No
        VIA C7 thermal sensor...                                    No
        VIA Nano thermal sensor...                                  No

        Some Super I/O chips contain embedded sensors. We have to write to
        standard I/O ports to probe them. This is usually safe.
        Do you want to scan for Super I/O sensors? (YES/no): y
        Probing for Super-I/O at 0x2e/0x2f
        Trying family `National Semiconductor/ITE'...               Yes
        Found unknown chip with ID 0x8502
        Probing for Super-I/O at 0x4e/0x4f
        Trying family `National Semiconductor/ITE'...               No
        Trying family `SMSC'...                                     No
        Trying family `VIA/Winbond/Nuvoton/Fintek'...               No
        Trying family `ITE'...                                      No

        Some hardware monitoring chips are accessible through the ISA I/O ports.
        We have to write to arbitrary I/O ports to probe them. This is usually
        safe though. Yes, you do have ISA I/O ports even if you do not have any
        ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y
        Probing for `National Semiconductor LM78' at 0x290...       No
        Probing for `National Semiconductor LM79' at 0x290...       No
        Probing for `Winbond W83781D' at 0x290...                   No
        Probing for `Winbond W83782D' at 0x290...                   No

        Lastly, we can probe the I2C/SMBus adapters for connected hardware
        monitoring devices. This is the most risky part, and while it works
        reasonably well on most systems, it has been reported to cause trouble
        on some systems.
        Do you want to probe the I2C/SMBus adapters now? (YES/no): y
        Using driver `i2c-i801' for device 0000:00:1f.3: Intel 3400/5 Series (PCH)

        Now follows a summary of the probes I have just done.
        Just press ENTER to continue:

        Driver `coretemp':
          * Chip `Intel digital thermal sensor' (confidence: 9)

        To load everything that is needed, add this to /etc/modules:
        #----cut here----
        # Chip drivers
        coretemp
        #----cut here----
        If you have some drivers built into your kernel, the list above will
        contain too many modules. Skip the appropriate ones!

        Do you want to add these lines automatically to /etc/modules?
        (yes/NO)

    My output for 'sensors' is:

        acpitz-virtual-0
        Adapter: Virtual device
        temp1:        +58.0°C  (crit = +100.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:       +56.0°C  (high = +84.0°C, crit = +100.0°C)
        Core 1:       +57.0°C  (high = +84.0°C, crit = +100.0°C)
        Core 2:       +58.0°C  (high = +84.0°C, crit = +100.0°C)
        Core 3:       +57.0°C  (high = +84.0°C, crit = +100.0°C)

    and my '/etc/modules' is:

        # /etc/modules: kernel modules to load at boot time.
        #
        # This file contains the names of kernel modules that should be loaded
        # at boot time, one per line. Lines beginning with "#" are ignored.

        lp
        rtc

        # Generated by sensors-detect on Fri Nov 30 23:24:31 2012
        # Chip drivers
        coretemp


  • How to implement turn-based game engine?

    - by Dvole
    Let's imagine a game like Heroes of Might and Magic, or Master of Orion, or your turn-based game of choice. What is the game logic behind making the next turn? Are there any materials or books to read about the topic? To be specific, let's imagine a game loop:

        void eventsHandler(); //something that responds to input
        void gameLogic();     //something that decides what's going to be output on the screen
        void render();        //this function outputs stuff on screen

    All of those get called, say, 60 times a second. But how does turn-based enter here? I might imagine that in gameLogic() there is a function like endTurn() that happens when a player clicks that button, but how do I handle it all? Need insights.
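
    One common pattern (a sketch of mine, not from the question): keep the 60-times-a-second loop exactly as written and treat "whose turn it is" as ordinary game state. gameLogic() still runs every frame, but it only advances the world when the current actor has committed an action, and endTurn() just rotates the active player:

        // Hypothetical names throughout; a sketch, not a definitive engine design.
        enum class Phase { WaitingForInput, ResolvingAction, AiThinking };

        struct TurnState {
            Phase phase = Phase::WaitingForInput;
            int currentPlayer = 0;
            int numPlayers = 2;

            void endTurn() {
                currentPlayer = (currentPlayer + 1) % numPlayers;
                phase = Phase::WaitingForInput; // or AiThinking when the next player is AI
            }
        };

        void gameLogic(TurnState& ts) {
            switch (ts.phase) {
            case Phase::WaitingForInput:
                // idle every frame until eventsHandler() has recorded a complete
                // command (move unit, cast spell, end turn) for currentPlayer
                break;
            case Phase::ResolvingAction:
                // step animations/combat; when finished, return to WaitingForInput
                // or call ts.endTurn() if the committed action was "end turn"
                break;
            case Phase::AiThinking:
                // let the AI pick a move, then resolve it the same way
                break;
            }
        }

    render() stays untouched: it simply draws whatever state exists, which is what keeps animations and UI smooth while the game itself waits for the next turn.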


  • Chainload boot of Ubuntu installed on 32GB SD card from legacy Grub boot on USB

    - by Gary Darsey
    I have Ubuntu installed on a 32 GB SD card (in the storage expansion slot on an Acer Aspire One) with GRUB 2 installed in the same partition. I boot into legacy GRUB on a USB drive and would like to boot by chainloading GRUB 2 from GRUB (kernel/initrd or symlink booting would also be fine), but I haven't figured out how to do this from the legacy GRUB CLI. Output from blkid for this partition is:

        /dev/mmcblk0p1: LABEL="Ubuntu" UUID="7ceb9fa7-238c-4c5d-bb8e-2c655652ddec" TYPE="ext4"

    fdisk -lu shows the boot indicator set and ID 83 for this partition. Related entries in grub.cfg:

        search --no-floppy --fs-uuid --set-root 7ceb9fa7-238c-4c5d-bb8e-2c655652ddec
        linux /boot/vmlinuz-3.5.0-17-generic root=UUID=7ceb9fa7-238c-4c5d-bb8e-2c655652ddec...
        initrd /boot/initrd.img-3.5.0-17-generic

    I can't seem to replicate this in legacy GRUB. Is there any way to get GRUB 2 to chainload? How do I set root with a UUID in legacy GRUB? I prefer to boot from USB. Would GRUB 2 on USB (copying the grub.cfg generated during installation) be an option?
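
    For reference, a sketch (mine, not from the question, and the device name is an assumption): stock GRUB Legacy 0.97 has no UUID search, so the partition has to be addressed by BIOS drive name. If the SD slot is visible to the BIOS, something along these lines at the legacy GRUB CLI loads GRUB 2's core image and hands control to it:

        root (hd1,0)                # assumption: the SD card appears as the second BIOS drive
        kernel /boot/grub/core.img  # GRUB 2's core image, present where GRUB 2 is installed
        boot

    One caveat worth checking first: many built-in SD readers are not visible to the BIOS at all, and neither flavour of GRUB can read a disk the firmware cannot see. In that case the usual workaround is to keep the kernel and initrd on the USB drive and point root=UUID=... at the SD card, since the Linux kernel's own drivers can see /dev/mmcblk0p1 once it is booted.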


  • add-apt-repository not working UbuntuGnome 12.10

    - by nickcannariato
    When I try to add a ppa using the command sudo add-apt-repository [insert ppa], the output I get is:

        Error in sitecustomize; set PYTHONVERBOSE for traceback:
        EOFError: EOF read where not expected
        Traceback (most recent call last):
          File "/usr/bin/add-apt-repository", line 3, in <module>
            from __future__ import print_function
        EOFError: EOF read where not expected

    This is the desktop version. It's a clean install and I didn't get any log errors on install. I haven't added or removed any python versions. Can someone set me straight on how to fix this?


  • What can I use as a 3D tile map editor?

    - by alfa64
    I need to make grid-based levels with 3D models for a dungeon crawler (as a recent example, Legend of Grimrock), but I need several layers and the ability to place entities with properties such as position, angle, etc. I was considering Tiled, using layers as height for each level, but it's very hard to work with and visualize. What can I use for this purpose? The output format needs to be JSON, XML, or something I can use in my engine. Ideally I'd want something like Tiled with a 3D visualization/edit mode and support for loading models, or at least some visual representation of them.

