Search Results

Search found 23949 results on 958 pages for 'test test'.

  • C++ and C# speed compared

    - by Mack
    I was worried about C#'s speed when it deals with heavy calculations, when you need to use raw CPU power. I always thought that C++ is much faster than C# when it comes to calculations. So I did some quick tests. The first test computes prime numbers < an integer n, the second test computes some pandigital numbers. The idea for the second test comes from here: Pandigital Numbers C# prime computation: using System; using System.Diagnostics; class Program { static int primes(int n) { uint i, j; int countprimes = 0; for (i = 1; i <= n; i++) { bool isprime = true; for (j = 2; j <= Math.Sqrt(i); j++) if ((i % j) == 0) { isprime = false; break; } if (isprime) countprimes++; } return countprimes; } static void Main(string[] args) { int n = int.Parse(Console.ReadLine()); Stopwatch sw = new Stopwatch(); sw.Start(); int res = primes(n); sw.Stop(); Console.WriteLine("I found {0} prime numbers between 0 and {1} in {2} msecs.", res, n, sw.ElapsedMilliseconds); Console.ReadKey(); } } C++ variant: #include <iostream> #include <ctime> int primes(unsigned long n) { unsigned long i, j; int countprimes = 0; for(i = 1; i <= n; i++) { int isprime = 1; for(j = 2; j < (i^(1/2)); j++) if(!(i%j)) { isprime = 0; break; } countprimes+= isprime; } return countprimes; } int main() { int n, res; cin>>n; unsigned int start = clock(); res = primes(n); int tprime = clock() - start; cout<<"\nI found "<<res<<" prime numbers between 1 and "<<n<<" in "<<tprime<<" msecs."; return 0; } When I ran the test trying to find primes less than 100,000, the C# variant finished in 0.409 seconds and the C++ variant in 5.553 seconds. When I ran them for 1,000,000, C# finished in 6.039 seconds and C++ in about 337 seconds. Pandigital test in C#: using System; using System.Diagnostics; class Program { static bool IsPandigital(int n) { int digits = 0; int count = 0; int tmp; for (; n > 0; n /= 10, ++count) { if ((tmp = digits) == (digits |= 1 << (n - ((n / 10) * 10) - 1))) return false; } return digits == (1 << count) - 1; } static void Main() { int pans = 0; Stopwatch sw = new Stopwatch(); sw.Start(); for (int i = 1; i <= 123456789; i++) { if (IsPandigital(i)) { pans++; } } sw.Stop(); Console.WriteLine("{0}pcs, {1}ms", pans, sw.ElapsedMilliseconds); Console.ReadKey(); } } Pandigital test in C++: #include <iostream> #include <ctime> using namespace std; int IsPandigital(int n) { int digits = 0; int count = 0; int tmp; for (; n > 0; n /= 10, ++count) { if ((tmp = digits) == (digits |= 1 << (n - ((n / 10) * 10) - 1))) return 0; } return digits == (1 << count) - 1; } int main() { int pans = 0; unsigned int start = clock(); for (int i = 1; i <= 123456789; i++) { if (IsPandigital(i)) { pans++; } } int ptime = clock() - start; cout<<"\nPans:"<<pans<<" time:"<<ptime; return 0; } The C# variant runs in 29.906 seconds and the C++ one in about 36.298 seconds. I didn't touch any compiler switches, and both the C# and C++ programs were compiled with debug options. Before I attempted to run the test I was worried that C# would lag well behind C++, but now it seems that there is a pretty big speed difference in C#'s favor. Can anybody explain this? C# is jitted and C++ is compiled to native code, so it would be normal for the C++ variant to be faster than the C# one. Thanks for the answers!
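
    One detail worth noting about the C++ prime test: the inner-loop condition j < (i^(1/2)) does not compute a square root, because ^ is bitwise XOR in C++ and 1/2 is integer division (so it evaluates to 0); the condition is effectively j < i, which makes the C++ loop do far more work than the C# version's j <= Math.Sqrt(i). Assuming the intent was the same square-root bound, a sketch of the corrected function (only the loop bound changes) would be:

        #include <iostream>

        // Same structure as the original primes(); only the inner-loop bound differs.
        // j * j <= i is equivalent to j <= sqrt(i) but stays in integer arithmetic.
        int primes(unsigned long n) {
            int countprimes = 0;
            for (unsigned long i = 1; i <= n; i++) {
                int isprime = 1;
                for (unsigned long j = 2; j * j <= i; j++)
                    if (i % j == 0) { isprime = 0; break; }
                countprimes += isprime;
            }
            return countprimes;
        }

        int main() {
            unsigned long n;
            std::cin >> n;
            std::cout << primes(n) << " primes found up to " << n << std::endl;
            return 0;
        }

    Compiler settings matter at least as much here: both tests above were debug builds, so optimized (release / -O2) builds would be the fairer comparison.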

    Read the article

  • Android FTP seek bar issue

    - by Androi Developer
    I am trying to upload and download a file to a server using FTP, and to download a file using HTTP; I am able to do this. My problem is that when I try to show a seek bar with the upload status of the file over FTP, it is not showing (the attached image shows how, over HTTP, the seek bar and network speed are displayed; I need to display the seek bar and network speed for FTP in the same way). Below is the code I wrote to upload the file to the server over FTP: // Upload System.out.println("upload test is called"); //Toast.makeText(con, "upload FTP test is called", Toast.LENGTH_SHORT).show(); //ContextWrapper context = null; //assetManager= context.getAssets(); assetManager = getResources().getAssets(); input1 = assetManager.open("hello.txt"); final long started = System.currentTimeMillis(); int size = input1.available(); //byte[] buffer = new byte[size]; byte dataByte[] = new byte[1024]; //input1.read(buffer); //String data = "ZK DATA TESTER TEST DATA1sdfsdf"; String data = input1.toString(); System.out.println("dat value is........"+data); final int lenghtOfFile = data.getBytes().length; //final int lenghtOfFile = input1.getBytes().length; System.out.println("length of file....."+lenghtOfFile); ByteArrayInputStream in = new ByteArrayInputStream( data.getBytes()); //toast("Uploading /test.txt"); //Toast.makeText(con,"File Size : " +data.getBytes().length + " bytes",Toast.LENGTH_SHORT).show(); //byte b[] = new byte[1024]; long total = 0; long sleepingTime= 0; System.out.println("started time --"+started); updateUI(status, "Uploading"); while ((count = in.read(dataByte)) != -1) { System.out.println("read value is...."+in.read(dataByte)); while (sleep1) { Thread.sleep(1000); System.out.println("ftp upload is in sleeping mode"); sleepingTime +=1000; } System.out.println("Total count --"+count); total += count; System.out.println("Only Total --"+total); final int progress = (int) ((total * 100) / lenghtOfFile); final long speed = total; //duration = ((System.currentTimeMillis() - started)-sleepingTime) / 1000; boolean result = ObjFtpCon.storeFile("/test.txt", input1); //boolean result = ObjFtpCon.storeFile(map.get("file_address").toString()+"/test.txt", input1); duration = ((System.currentTimeMillis() - started)-sleepingTime) / 1000; /* runOnUiThread(new Runnable() { public void run() { bar.setProgress(progress); // trans.setText("" + progress); //duration = ((System.currentTimeMillis() - started)-sleepingTime) / 1000; //duration = ((System.currentTimeMillis() - started)-sleepingTime) / 1000; //real_time.setText(duration + " secs"); if (duration != 0) { test_avg.setText((((speed / duration)*1000)*0.0078125) + " kbps"); if (pk <= (speed / duration) / 1024) { pk = (speed / duration) / 1024; } if (pk <= ((speed / duration)*1000)*0.0078125) { pk = (long)(((speed / duration)*1000)*0.0078125); } //peak.setText(pk + " kbps"); } } });*/ //in.close(); if (result) { updateUI(status, "Uploaded"); // toast("Uploading succeeded"); // toast("Uploaded at /test.txt"); //duration = ((System.currentTimeMillis() - started)-sleepingTime) / 1000; System.out.println("curreent time..... "+System.currentTimeMillis()); System.out.println("started time --"+started); System.out.println("sleep tome...."+sleepingTime); System.out.println("duration is....."+duration); runOnUiThread(new Runnable() { public void run() { bar.setProgress(progress); // trans.setText("" + progress); //duration = ((System.currentTimeMillis() - started)-sleepingTime) / 1000; real_time.setText(duration + " secs"); if (duration != 0) { test_avg.setText((speed / duration) / 1024 + " kbps"); if (pk <= (speed / duration) / 1024) { pk = (speed / duration) / 1024; } peak.setText(pk + " kbps"); } } }); } /*while(!result){Thread.sleep(1000);}*/ } in.close();

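    For reference, one way to drive the progress bar during an FTP upload (a sketch, not the code from the post) is to wrap the InputStream handed to storeFile() in a counting FilterInputStream, so progress is reported as the FTP client actually reads the bytes, rather than calling storeFile() inside the read loop. The names ObjFtpCon, bar and lenghtOfFile below are taken from the post; the wrapper class itself is illustrative.

        import java.io.FilterInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        // Counts bytes as they are read (e.g. by FTPClient.storeFile) and reports them.
        class ProgressInputStream extends FilterInputStream {
            interface Listener { void onProgress(long totalBytesRead); }
            private final Listener listener;
            private long total = 0;

            ProgressInputStream(InputStream in, Listener listener) {
                super(in);
                this.listener = listener;
            }

            @Override
            public int read(byte[] b, int off, int len) throws IOException {
                int n = super.read(b, off, len);
                if (n > 0) { total += n; listener.onProgress(total); }
                return n;
            }
        }

        // Illustrative usage inside the upload code from the post:
        // InputStream counted = new ProgressInputStream(input1, new ProgressInputStream.Listener() {
        //     public void onProgress(final long read) {
        //         runOnUiThread(new Runnable() {
        //             public void run() { bar.setProgress((int) (read * 100 / lenghtOfFile)); }
        //         });
        //     }
        // });
        // boolean result = ObjFtpCon.storeFile("/test.txt", counted);
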
    Read the article

  • Best way for an external (remote) graphics designer to style ASP.NET MVC 4 app?

    - by Tom K
    My customer has his own graphics designer whom he wants to use to style the web application we're building in ASP.NET MVC 4. Our solution is in Bitbucket, but if he can't run it, what choices do we have? I doubt he uses Visual Studio 2012. One idea is for us to publish our solution to the file system, send it to him, and have him create a local IIS website on his machine (assuming he isn't using a Mac). Mocking data or pointing to a test SQL database in Azure isn't a problem. Then he can make changes to .css and .cshtml files. Will this even work? The point is that he needs to be able to test his changes. I know he can modify the views and just check in. But he needs to deliver a working design, so it seems inefficient. The graphics designer will have access to our test site so he can see how it works and what data and fields we have. Another idea is for him to build a static mock site using just HTML/CSS. Later I'd integrate his styles into the customer's solution, split his HTML into partial views which we use, and add Razor syntax. Again, we'd like to leverage the graphics designer for all of this. Is there a best practice documented around this subject? How do other teams deal with this situation?

    Read the article

  • sybase bcp import fails

    - by chromeplatedbanana
    We're trying to export some tables from our production database to our test database using bcp. The bcp export seems to work fine, but the import always fails with a data type error (see below). We tested on our test database by exporting the table content to a file, then importing it again immediately, but that failed too. e.g., bcp TABLENAME out ~/tempfile -S servername -U username generates a file as expected. If we use the -c option then the number of lines is as expected. However, bcp TABLENAME in ~/tempfile -S servername -U username fails with CTLIB Message: - L0/0D/S0/N0/0/0: blk_int(): blk_layer: CT library error: Cannot find an equivalent CS_TYPE for this TDS data type 49 blk_init failed. We get this whenever we try to copy into TABLENAME, whether from the production or test table dump file. I don't understand why exporting and importing the same TABLENAME generates a data type error. What am I doing wrong here? Thanks
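
    One workaround often tried for this kind of CS_TYPE/TDS mismatch (an assumption here, not verified against this particular server pair) is to run both directions of the copy in character mode with -c, so the load does not depend on native datatype encoding at all:

        # export and import in character format; -c avoids native TDS datatype mapping
        bcp TABLENAME out ~/tempfile -c -S servername -U username
        bcp TABLENAME in ~/tempfile -c -S servername -U username

    If the character-mode import also fails, comparing the table definitions (column types and server character sets) between production and test would be the next thing to check.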

    Read the article

  • Using AuthzSVNAccessFile for controlling SVN Access produces HTTP 400 Bad Request

    - by meeper
    I have a new repository on an existing subversion server that requires us to perform path based authorization within the repository. I found that the AuthzSVNAccessFile directive in apache is directly responsible for allowing this functionality. After fixing several other problems such as AuthzSVNAccessFile preventing SVNListParentPath from operating properly, I am left with one single problem. I can checkout, I can update, I can commit, BUT I cannot execute an SVN COPY for performing branch/tagging operations. The moment I comment out the AuthzSVNAccessFile line in the Apache config everything works as expected except the obvious path authorizations. Versions: The server OS is Debian 6.0.7 (Squeeze) Apache 2.2.16-6+squeeze11 Server Subversion 1.6.12dfsg-7 Clients are running windows Clients tried are: TortoiseSVN 1.8.2 Build 24708 64bit SVN CLI Client 1.8.3 (r1516576) Authentication is performed via AD to a Windows 2003 domain and appears to be operating normally. I have stripped out all other configurations and repository setups to produce this single configuration that reproduces the problem. Apache Configuration: <VirtualHost *:443> ServerName svn-test.company.com ServerAlias /svn-test ServerAdmin [email protected] SSLEngine On SSLCertificateFile /etc/apache2/apache.pem ErrorLog /var/log/apache2/svn-test_error.log LogLevel warn CustomLog /var/log/apache2/svn-test_access.log combined ServerSignature On # Repository Access to all Repositories <Location "/"> DAV svn SVNParentPath /var/svn SVNListParentPath on AuthBasicProvider ldap AuthType Basic AuthzLDAPAuthoritative Off AuthName "Subversion Test Repository System" AuthLDAPURL "ldap://adserver.company.com:389/DC=corp,DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" NONE AuthLDAPBindDN "CN=service_account,OU=ServiceIDs,OU=corp,OU=Delegated,DC=na,DC=corp,DC=company,DC=com" AuthLDAPBindPassword service_account_password Require valid-user SSLRequireSSL </Location> # <LocationMatch /.+> is a really dirty trick to make listing of repositories work # http://d.hatena.ne.jp/shimonoakio/20080130/1201686016 <LocationMatch /.+> AuthzSVNAccessFile /etc/apache2/svn_path_auth </LocationMatch> </VirtualHost> SVN Access File: [/] * = rw The repository used (AuthTestBasic) consists of the following directory structure and contains no externals (this is a literal listing, not an example): / /branches/ /tags/ /trunk/ /trunk/somefile.txt Tortoise produces the following error during a tag operation in it's tag result window: Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request) The svn.exe CLI client produces the following error: C:\Users\e20epkt>svn copy https://servername/authtestbasic/trunk https://servername/authtestbasic/tags/tag1 -m "svn cli client" svn: E175002: Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request) The Apache error log has nothing in it, however the apache access log has the following in it (IP addresses and usernames changed obviously): 10.1.2.100 - - [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 401 2595 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 996 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 884 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - 
myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/trunk HTTP/1.1" 207 692 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/0/trunk HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 200 674 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 207 548 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags/tag1 HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "MKACTIVITY /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags HTTP/1.1" 207 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/vcc/default HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPPATCH /authtestbasic/!svn/wbl/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e/2 HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/ver/1/tags HTTP/1.1" 201 724 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "COPY /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 400 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" 10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "DELETE /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 204 1956 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" You'll see that the second to last line contains the COPY command with the HTTP 400 response, however, there doesn't appear to be any indication as to why. Please note that, while yes this is a test repository on a test server, I am experiencing this same issue in this test setup where I have eliminated all other possible causes (mixed repository configurations, externals, etc). I have also confirmed that all files for the repository (/var/svn/authtestbasic) are owned by the Apache user www-data.
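
    For reference, and separate from the COPY failure itself, the path-based rules the repository needs do not have to be applied globally via [/]; they are normally scoped per repository in the access file. A sketch of what a finer-grained /etc/apache2/svn_path_auth could look like (group and account names are placeholders):

        [groups]
        dev = myuseraccount

        # default for every repository under SVNParentPath
        [/]
        * = r

        # rules scoped to the authtestbasic repository
        [authtestbasic:/]
        @dev = rw

        [authtestbasic:/tags]
        * = r
        @dev = rw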

    Read the article

  • monit configuration for php-fpm

    - by Adam Jimenez
    I'm struggling to find a monit config for php-fpm that works. This is what I've tried: ### Monitoring php-fpm: the parent process. check process php-fpm with pidfile /var/run/php-fpm/php-fpm.pid group phpcgi # phpcgi group start program = "/etc/init.d/php-fpm start" stop program = "/etc/init.d/php-fpm stop" ## Test the UNIX socket. Restart if down. if failed unixsocket /var/run/php-fpm.sock then restart ## If the restarts attempts fail then alert. if 3 restarts within 5 cycles then timeout depends on php-fpm_bin depends on php-fpm_init ## Test the php-fpm binary. check file php-fpm_bin with path /usr/sbin/php-fpm group phpcgi if failed checksum then unmonitor if failed permission 755 then unmonitor if failed uid root then unmonitor if failed gid root then unmonitor ## Test the init scripts. check file php-fpm_init with path /etc/init.d/php-fpm group phpcgi if failed checksum then unmonitor if failed permission 755 then unmonitor if failed uid root then unmonitor if failed gid root then unmonitor But it fails because there is no php-fpm.sock (Centos 6)
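
    On CentOS 6 the stock php-fpm package listens on a TCP socket (127.0.0.1:9000 in /etc/php-fpm.d/www.conf) rather than a UNIX socket, so one option (a sketch, assuming the default listen setting is unchanged) is to have monit check the TCP port instead; the alternative is to set listen = /var/run/php-fpm.sock in www.conf so the original configuration matches reality.

        check process php-fpm with pidfile /var/run/php-fpm/php-fpm.pid
          group phpcgi
          start program = "/etc/init.d/php-fpm start"
          stop program = "/etc/init.d/php-fpm stop"
          # php-fpm listens on TCP 127.0.0.1:9000 by default on CentOS 6
          if failed host 127.0.0.1 port 9000 then restart
          if 3 restarts within 5 cycles then timeout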

    Read the article

  • Getting Started with Employee Info Starter Kit (v4.0.0)

    - by joycsharp
    The new release of Employee Info Starter Kit contains lots of exciting features available in Visual Studio 2010 and .NET 4.0. To get started with the new version, you will need less than 5 minutes. Minimum System Requirements: Before getting started, please make sure you have Visual Studio 2010 RC (or higher) and SQL Server 2005 Express edition (or higher) installed on your machine. Running the Starter Kit for the First Time: 1. Download the starter kit 4.0.0 version from here and extract it. 2. Go to <extraction folder>\Source\Eisk.Solution and click the solution file. 3. From the solution explorer, right click the “Eisk.Web” web site project node, select “Set as Startup Project” and hit Ctrl + F5. 4. You will be prompted to install the database; just follow the instructions. That’s it! You are ready to use this starter kit. Running the Tests: Employee Info Starter Kit contains an infrastructure for Integration and Unit Testing, by utilizing cool test tools in Visual Studio 2010. Once you complete the steps mentioned above, take a minute to run the test cases on the fly. 1. From the solution explorer, go to “Solution Items\e-i-s-k-2010.vsmdi” and click it. You will see the available Tests in the Visual Studio Test Lists. Select all, except the “Load Tests” node (since Load Tests take a bit of time). 2. Click the “Run Checked Tests” control from the upper left corner. You will see the tests running and finally the status of the tests, which indicates the current health of your application from different scenarios. Technorati Tags: asp.net,architecture,starter kit,employee info starter kit,visual studio 2010,.net 4.0,entity framework

    Read the article

  • What could be causing my Ghostscript to fail

    - by simplr
    When I run ps2pdf I get the following error messages: norman@host:~$ ps2pdf test.ps test.pdf While reading gs_dbt_e.ps: ERROR: /syntaxerror in -file- Operand stack: (gs_cidfm.ps) 1 --nostringval-- Execution stack: %interp_exit --nostringval-- --nostringval-- --nostringval-- %array_continue --nostringval-- --nostringval-- false 1 %stopped_push --nostringval-- --nostringval-- --nostringval-- Dictionary stack: --dict:928/1123(G)-- --dict:0/20(G)-- --dict:74/200(L)-- --dict:928/1123(G)-- --dict:8/8(G)-- --dict:1/1(G)-- Current allocation mode is global Current file position is 4623 norman@host:~$ I have tried re-installing gs and gs-esp without effect. The files test.ps, gs_dbt_e.ps and gs_cidfm.ps were all checked against a working system and are good. Regardless of what PostScript file I try to convert, the "Current file position is 4623" remains exactly the same. The host is running Ubuntu 7.04. Any suggestions as to what I should re-install will be much appreciated.

    Read the article

  • CodePlex Daily Summary for Tuesday, March 30, 2010

    CodePlex Daily Summary for Tuesday, March 30, 2010New ProjectsCloudMail: Want to send email from Azure? Cloud Mail is designed to provide a small, effective and reliable solution for sending email from the Azure platfor...CommunityServer Extensions: Here you can find some CommunityServer extensions and bug fixes. The main goal is to provide you with the ability to correct some common problems...ContactSync: ContactSync is a set of .NET libraries, UI controls and applications for managing and synchronizing contact information. It includes managed wrapp...Dng portal: DNG Portal base on asp.net MVCDotNetNuke Referral Tracker: The Referral Tracker module allows you to save URL variables, the referring page, and the previous page into a session variable or cookie. Then, th...Foursquare for Windows Phone 7: Foursquare for Windows Phone 7.GEGetDocConfig: SharePoint utility to list information concerning document libraries in one or more sites. Displays Size, Validity, Folder, Parent, Author, Minor a...Google Maps API for .NET: Fast and lightweight client libraries for Google Maps API.kbcchina: kbc chinaLoad Test User Mock Toolkits: 用途 This project is a framework mocking the user actvities with VSTS Load Test tool to faster the test script development. 此项目包括一套模拟用户行为的通用框架,可以简化...Resonance: Resonance is a system to train neural networks, it allows to automate the train of a neural network, distributing the calculation on multiple machi...SharePoint Company Directory / Active Directory Self Service System: This is a very simple system which was designed for a Bank to allow users to update their contact information within SharePoint . Then this info ca...SmartShelf: Manage files and folders on Windows and Windows Live.sysFix: sysFix is a tool for system administrators to easily manage and fix common system errors.xnaWebcam: Webcam usage in XNA GameStudio 3.1New ReleasesAll-In-One Code Framework: All-In-One Code Framework 2010-03-29: Improved and Newly Added Examples:For an up-to-date list, please refer to All-In-One Code Framework Sample Catalog. Samples for Azure Name Des...ARSoft.Tools.Net - C# DNS and SPF Library: 1.3.0: Added support for creating own dns server implementations. Added full IPv6 support to dns client Some performance optimizations to dns clientArtefact Animator: Artefact Animator - Silverlight 3 and WPF .Net 3.5: Artefact Animator Version 2.0.4.1 Silverlight 3 ArtefactSL.dll for both Debug and Release. WPF 3.5 Artefact.dll for both Debug and Release.BatterySaver: Version 0.4: Added support for a system tray icon for controlling the application and switching profiles (Issue)BizTalk Server 2006 Orchestration Profiler: Profiler v1.2: This is a point release of the profiler and has been updatd to work on 64 bit systems. No other new functionality is available. To use this ensure...CloudMail: CloudMail_0.5_beta: Initial public release. 
For documentation see http://cloudmail.codeplex.com/documentation.CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 0.08.44: See Source Code tab for recent change history.Dawf: Dual Audio Workflow: Beta 2: Fix little bugs and improve usablity by changing the way it finds the good audio.DotNetNuke Referral Tracker: 2.0.1: First releaseFoursquare for Windows Phone 7: Foursquare 2010.03.29.02: Foursquare 2010.03.29.02GEGetDocConfig: GEGETDOCCONFIG.ZIP: Installation: Simply download the zip file and extract the executable into its own directory on the SharePoint front end server Note: There will b...GKO Libraries: GKO Libraries 0.2 Beta: Added: Binary search Unmanaged wrappers, interop and pinvoke functions and structures Windows service wrapper Video mode helpers and more.....Google Maps API for .NET: GoogleMapsForNET v0.9 alpha release: First version, contains Core library featuring: Geocoding API Elevation API Static Maps APIGoogle Maps API for .NET: GoogleMapsForNET v0.9.1 alpha release: Fixed dependencies issues; added NUnit binaries and updated Newtonsoft Json library.Google Maps API for .NET: GoogleMapsForNET v0.9.2a alpha release: Recommended update.Code clean-up; did refactoring and major interface changes in Static Maps because it wasn't aligned to the 'simplest and least r...Home Access Plus+: v3.2.0.0: v3.2.0.0 Release Change Log: More AJAX to reduce page refreshes (Deleting, New Folder, Rename moved from browser popups) Only 3 Browser Popups (1...Html to OpenXml: HtmlToOpenXml 1.1: The dll library to include in your project. The dll is signed for GAC support. Compiled with .Net 3.5, Dependencies on System.Drawing.dll and Docu...Latent Semantic Analysis: Latest sources: Just the latest sources. Just click the changeset. Please note that in order to compile this code you need to download some additional code. You ...Load Test User Mock Toolkits: Load Test User Mock Toolkits Help Doc: Samples and The framework introduction. 包括框架介绍和典型示例Load Test User Mock Toolkits: Open.LoadTest.User.Mock.Toolkits 1.0 alpha: 此版本为非正式版本,未对性能方面进行优化。而且框架正在重构调整中。Mobile Broadband Logging Monitor: Mobile Broadband Logging Monitor 1.2.4: This edition supports: Newer and older editions of Birdstep Technology's EasyConnect HUAWEI Mobile Partner MWConn User defined location for s...Nito.KitchenSink: Version 3: Added Encoding.GetString(Stream, bool) for converting an entire stream into a string. Changed Stream.CopyTo to allow the stream to be closed/abor...Numina Application Framework: Numina.Framework Core 49088: Fixed Bug with Headers introduced in rev. 48249 with change to HttpUtil class. admin/User_Pending.aspx page users weren't able to be deleted Do...OAuthLib: OAuthLib (1.6.4.0): Fix for 6390 Make it possible to configure time out value.Quack Quack Says the Duck: Quack Quack Says The Duck 1.1.0.0: This new release pushes some work onto a background thread clearing issues with multiple screen clicks while the UI was blocking.Rapidshare Episode Downloader: RED v0.8.4: - Added Edit feature - Moved season & episode int to string into a separate function - Fixed some more minor issues - Added 'Previous' feature - F...RoTwee: RoTwee (8.1.3.0): Update OAuthLib to 1.6.4.0SharePoint Company Directory / Active Directory Self Service System: SharePoint Company Directory with AD Import: This is a very simple system which was designed for a Bank to allow users to update their contact information within SharePoint . 
Then this info ca...Simply Classified: v1.00.12: Comsite Simply Classified v1.00.12 - STABLE - Tested against DotNetNuke v4.9.5 and v5.2.x Bug Fixes/Enhancements: BUGFIX: Resolved issues with 1...sPATCH: sPatch v0.9: Completely Recoded with wxWidgetsFollowing Content is different to .NET Patcher no requirement for .NET Framework Manual patch was removed to av...SSAS Profiler Trace Scheduler: SSAS Profiler Trace Scheduler: AS Profiler Scheduler is a tool that will enable Scheduling of SQL AS Tracing using predefined Profiler Templates. For tracking different issues th...sysFix: sysfix build v5: A stable beta release, please refer to home page for further details.VOB2MKV: vob2mkv-1.0.4: This is a feature update of the VOB2MKV utility. The command-line parsing in the VOB2MKV application has been greatly improved. You can now get f...xnaWebcam: xnaWebcam 0.1: xnaWebcam 0.1 Program Version 0.1: -Show Webcam Device -Draw.String WebcamTexture.VideoDevice.Name.ToString() Instructions: 1. Plug-in your Webca...xnaWebcam: xnaWebcam 0.2: xnaWebcam 0.2 Version 0.2: -setResolution -Keys.Escape: this.Exit() << Exit the Game/Application. --- Version 0.1: -Show Webcam Device -Draw.Strin...xnaWebcam: xnaWebcam 0.21: xnaWebcam 0.2 Version 0.21: -Fix: Don't quit game/application after closing mainGameWindow -Fix: Text Position; Window.X, Window.Y --- Version 0.2...Xploit Game Engine: Xploit_1_1 Release: Added Features Multiple Mesh instancing.Xploit Game Engine: Xploit_1_1 Source Code: Updates Create multiple instances of the same Meshe using XModelMesh and XSkinnedMesh.Yakiimo3D: DX11 DirectCompute Buddhabrot Source and Binary: DX11 DirectCompute Buddhabrot/Nebulabrot source and binary.Most Popular ProjectsRawrWBFS ManagerASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)LiveUpload to FacebookASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsRawrjQuery Library for SharePoint Web ServicesBlogEngine.NETLINQ to TwitterManaged Extensibility FrameworkMicrosoft Biology FoundationFarseer Physics EngineN2 CMSNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices – Enterprise Library

    Read the article

  • NullPointerException on jetty 7 startup with jetty deployment descriptor

    - by Draemon
    I'm getting the following error when I start Jetty: 2010-03-01 12:30:19.328:WARN::Failed startup of context WebAppContext@15ddf5@15ddf5/webapp,null,/path/to/jetty-distribution-7.0.1.v20091125/webapps-plus/webapp.war With this commandline: java -jar start.jar OPTIONS=All lib=/path/to/jetty-distribution-7.0.1.v20091125/lib/ext etc/jetty.xml etc/jetty-plus.xml /path/to/webapp/src/configuration/test.xml And test.xml contains: <?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://www.eclipse.org/jetty/configure.dtd"> <Configure class="org.eclipse.jetty.webapp.WebAppContext"> <Set name="contextPath">/webapp</Set> <Set name="war"><SystemProperty name="jetty.home" default="."/>/webapps-plus/webapp.war</Set> </Configure> If I don't include test.xml on the commandline it works fine.

    Read the article

  • Why is gcc failing with "unrecognized command line option "-L/lusr/opt/mpfr-2.4.2/lib" "?

    - by Mike
    My sysadmin recently installed a new version of GCC, in /lusr/opt/gcc-4.4.3. I tested it as follows: mike@canon:~$ cat test.c int main(){ return 0; } mike@canon:~$ gcc test.c /lusr/opt/gcc-4.4.3/libexec/gcc/i686-pc-linux-gnu/4.4.3/cc1: error while loading shared libraries: libmpfr.so.1: cannot open shared object file: No such file or directory After informing my sysadmin about this, he said to add /lusr/opt/mpfr-2.4.2/lib:/lusr/opt/gmp-4.3.2/lib to my LD_LIBRARY_PATH. After doing this, I get the following error: mike@canon:~$ gcc test.c cc1: error: unrecognized command line option "-L/lusr/opt/mpfr-2.4.2/lib" First, my sysadmin wasn't entirely sure this was the best workaround(though he did say it worked for him...), so is there a better solution? Second, why am I getting a linker error from cc, and how can I fix it? Some information which may be helpful: mike@canon:~$ env | grep mpfr OLDPWD=/lusr/opt/mpfr-2.4.2/lib LD_LIBRARY_PATH=/lusr/opt/mpfr-2.4.2/lib:/lusr/opt/gmp-4.3.2/lib: mike@canon:~$ echo $LDFLAGS (the above is a blank line)
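
    For what it's worth, LD_LIBRARY_PATH is only consulted by the runtime loader, so its entries should be plain directories with no -L prefix (the -L form is a link-time flag, which is a different mechanism). The export matching the sysadmin's suggestion (assuming a bash-style shell) looks like:

        # runtime search path so cc1 can find libmpfr.so.1 and libgmp at startup
        export LD_LIBRARY_PATH=/lusr/opt/mpfr-2.4.2/lib:/lusr/opt/gmp-4.3.2/lib:$LD_LIBRARY_PATH

    Where the stray "-L/lusr/opt/mpfr-2.4.2/lib" option being handed to cc1 comes from is a separate question; checking shell aliases, wrapper scripts and make variables (CC, CFLAGS) for that string would be a reasonable first step.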

    Read the article

  • Simple Branching and Merging with SVN

    It's a good idea not to do too much work without checking something into source control.  By too much work I mean typically on the order of a couple of hours at most, and certainly it's a good practice to check in anything you have before you leave the office for the day.  But what if your changes break the build (on the build server; you do have a build server, don't you?) or would cause problems for others on your team if they get the latest code?  The solution with Subversion is branching and merging (incidentally, if you're using Microsoft Visual Studio Team System, you can shelve your changes and share shelvesets with others, which accomplishes many of the same things as branching and merging, but is a bit simpler to do). Getting Started: I'm going to assume you have Subversion installed along with the nearly ubiquitous client, TortoiseSVN.  See my previous post on installing SVN server if you want to get it set up real quick (you can put it on your workstation/laptop just to learn how it works easily enough). Overview: When you know you are going to be working on something that you won't be able to check in quickly, it's a good idea to start a branch.  It's also perfectly fine to create the branch after the fact (have you ever started something thinking it would be an hour and 4 hours later realized you were nowhere near done?).  In any event, the first thing you need to do is create a branch.  A branch is simply a copy of the current trunk (a typical Subversion setup has root directories called trunk, tags, and branches; it's a good idea to keep this convention and to put your branches in the branches folder).  Once you have a new branch, you need to switch your working copy so that it is bound to your branch.  As you work, you may want to merge in changes that are happening in the trunk to your branch, and ultimately when you are done you'll want to merge your branch back into the trunk.  When done, you can delete your branch (or not, but it may add clutter).  To sum up: create a new branch; switch your local working copy to the new branch; develop in the branch (commit changes, etc.); merge changes from trunk into your branch; merge changes from branch into trunk; delete the branch. Create a new branch: From the root of your repository, right-click and select TortoiseSVN > Branch/tag, as shown at right.  This will bring up the Copy (Branch / Tag) interface.  By default the From WC at URL: should be pointing at the trunk of your repository.  I recommend (after ensuring that you have the latest version) that you choose to make the copy from the HEAD revision in the repository (the first radio button).  In the To URL: textbox, you should change the URL from /trunk to /branches/NAME_OF_BRANCH.  You can name the branch anything you like, but it's often useful to give it your name (if it's just for your use) or some useful information (such as a datestamp, a bug/issue ID that it relates to, or perhaps just the name of the feature you are adding). When you're done with that, enter in a log message for your new branch.  If you want to immediately switch your local working copy to the new branch/tag, check the box at the bottom of the dialog (Switch working copy to new branch/tag).  You can see an example at right. Assuming everything works, you should very quickly see a window telling you the Copy finished, like the one shown below: Switch Local Working Copy to New Branch: If you followed the instructions above and checked the box when you created your branch, you don't need to do this step.
However, if you have a branch that already exists and you would like to switch over to working on it, you can do so by using the Switch command.  You'll find it in the explorer context menu under TortoiseSVN > Switch: This brings up a dialog that shows you your current binding, and lets you enter in a new URL to switch to: In the screenshot above, you can see that I'm currently bound to a branch, and so I could switch back to the trunk or to another branch.  If you're not sure what to enter here, you can click the [...] next to the URL textbox to explore your repository and find the appropriate root URL to use.  Also, the dropdown will show you URLs that might be a good fit (such as the trunk of the current repository). Develop in the Branch: Once you have created a branch and switched your working copy to use it, you can make changes and Commit them as usual.  Your commits are now going into the branch, so they won't impact other users or the build server that are working off of the trunk (or their own branches).  In theory you can keep on doing this forever, but practically it's a good idea to periodically merge the trunk into your branch, and/or keep your branches short-lived and merge them back into the trunk before they get too far out of sync. Merge Changes from Trunk into your Branch: Once you have been working in a branch for a little while, changes to the trunk will have occurred that you'll want to merge into your branch.  It's much safer and easier to integrate changes in small increments than to wait for weeks or months and then try to merge in two very different codebases.  To perform the merge, simply go to the root of your branch working copy and right click, select TortoiseSVN->Merge.  You'll be presented with this dialog: In this case you want to leave the default setting, Merge a range of revisions.  Click Next.  Now choose the URL to merge from.  You should select the trunk of your current repository (which should be in the dropdown list, or you can click the [...] to browse your repository for the correct URL).  You can leave everything else blank since you want to merge everything: Click Next.  Again you can leave the default settings.  If you want to do something more granular than everything in the trunk, you can select a different Merge depth, to include merging just one item in the tree.  You can also perform a Test merge to see what changes will take place before you click Merge (which is often a good idea).  Here's what the dialog should look like before you click Merge: After clicking Merge (or Test merge) you should see a confirmation like this (it will say Test Only in the title if you click Test merge): Now you should build your solution, run all of your tests, and verify that your branch still works the way it should, given the updates that you've just integrated from the trunk.  Once everything works, Commit your changes, and then continue with your work on the branch.  Note that until you commit, nothing has actually changed in your branch on the server.  Other team members who may also be working in this branch won't be impacted, etc.  The Merge is purely a client-side operation until you perform a Commit. In a more real-world scenario, you may have conflicts.  When you do, you'll be presented with a dialog like this one: It's up to you which option you want to go with.  The more frequently you Merge, the fewer of these you'll have to deal with.  Also, be very sure that you're merging the right folders together.
If you try and merge your trunk with some subfolder in your branch's structure, you'll end up with all kinds of conflicts and problems.  Fortunately, they're only on your working copy (unless you commit them!) but if you see something like that, be sure to double-check your URL and your local file location. Merge Your Branch Back Into Trunk: When you're done working in your branch, it's time to pull it back into the trunk.  The first thing you should do is follow the previous step's instructions for merging the latest from the trunk into your branch.  This lets you ensure that what you have in your branch works correctly with the current trunk.  Once you've done that and committed your changes to your branch, you're ready to proceed with this step. Once you're confident your branch is good to go, you should go to its root folder and select TortoiseSVN->Merge (as above) from the explorer right-click menu.  This time, select Reintegrate a branch as shown below: Click Next.  You'll want it to merge with the trunk, which should be the default: Click Next. Leave the default settings: Click Test merge to see a test, and then if all looks good, click Merge.  Note that if you haven't checked in your working copy changes, you'll see something like this: If on the other hand things are successful: After this step, it's likely you are finished working in your branch.  Don't forget to use the TortoiseSVN->Switch command to change your working copy back to the trunk. Delete the Branch: You don't have to delete the branch, but over time the branches area of your repository will get cluttered, and in any event if they're not actively being worked on the branches are just taking up space and adding to later confusion.  Keeping your branches limited to things you're actively working on is simply a good habit to get into, just like making sure your codebase itself remains tidy and not filled with old commented-out bits of code. To delete the branch after you're finished with it, the simplest thing to do is choose TortoiseSVN->Repo Browser.  From there, assuming you did this from your branch, it should already be highlighted.  In any event, navigate to your branch in the treeview on the left, and then right-click and select Delete.  Enter a log message if you'd like: Click OK, and it's gone.  Don't be too afraid of this, though.  You can still get to the files by viewing the log for branches, and selecting a previous revision (anything before the delete action): If for some reason you needed something that was previously in this branch, you could easily get back to any changeset you checked in, so you should have absolutely no fear when it comes to deleting branches you're done with. Resources: If you're using Eclipse, there's a nice write-up of the steps required by Zach Cox that I found helpful here.
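
    The same workflow maps directly onto the Subversion command-line client, which can be handy for scripting it; a sketch (the repository URL is a placeholder):

        # create the branch as a cheap server-side copy of trunk
        svn copy http://server/svn/repo/trunk http://server/svn/repo/branches/my-feature -m "Create feature branch"

        # point the local working copy at the branch
        svn switch http://server/svn/repo/branches/my-feature

        # periodically pull trunk changes into the branch (run from the branch working copy)
        svn merge http://server/svn/repo/trunk
        svn commit -m "Merge latest trunk into my-feature"

        # when finished, reintegrate the branch into a trunk working copy
        svn switch http://server/svn/repo/trunk
        svn merge --reintegrate http://server/svn/repo/branches/my-feature
        svn commit -m "Reintegrate my-feature into trunk"

        # tidy up
        svn delete http://server/svn/repo/branches/my-feature -m "Remove merged branch"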

    Read the article

  • Network Transfer Rate on SMB/FTP

    - by Jack Sleight
    Hi, We're trying to optimise our network transfer rate from the client PCs to the file server, but having no luck. If I run an iperf test between a client PC (Windows XP) and our server (Linux) with a large TCP window size (the default is 8kb) I can get 60 Mbps. But when I run an SMB transfer speed test all I get is around 35 Mbps. When I run an FTP transfer speed test all I get is around 32 Mbps. I thought this was to do with the TCP window size, but I have now increased that to 256960 and it made no difference at all. Any ideas what I'm doing wrong? Or is 35 Mbps all I should expect? Cheers, Jack
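
    One thing that can skew the comparison: the window size has to be raised on both ends of the iperf test, otherwise the advertised receive window stays small. For example (iperf 2 syntax):

        # on the file server
        iperf -s -w 256K

        # on a client PC, small vs. enlarged window for comparison
        iperf -c fileserver -w 8K
        iperf -c fileserver -w 256K

    If SMB and FTP still sit well below the raw TCP figure, per-protocol overhead (SMB1 on Windows XP in particular) and disk throughput on either end are the usual next suspects.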

    Read the article

  • Using Fiddler with BizTalk's HTTP Adapter

    - by Christopher House
    I'm working on an orchestration that's retrieving some data from a Java servlet.  The servlet takes a parameter string via HTTP POST and returns POX (plain old XML, no SOAP here).  I was having trouble getting a valid response from the servlet when I was sending some test messages and wanted to see what my messages were looking like as they went across the wire.  Normally, if I were using WCF, I'd set up message logging, but since that's obviously not an option with the HTTP adapter, my thoughts turned to Fiddler.  A quick Google search turned up some promising results.  The posts I read all referred to using Fiddler with the SOAP adapter, but I thought I could apply the same ideas to the HTTP adapter.  This led me to try setting the following context properties: HttpRequestMessage(HTTP.UseProxy) = true; HttpRequestMessage(HTTP.ProxyName) = "127.0.0.1"; HttpRequestMessage(HTTP.ProxyPort) = 8888; I rebuilt my orch, gac'd it, bounced my host and tried submitting a test message.  Fiddler was running but I didn't see any traffic show up.  I tried fully undeploying/redeploying my application and still, no traffic in Fiddler.  I was starting to think that BizTalk was ignoring the proxy settings.  To confirm this, I closed Fiddler and submitted a test message.  Sure enough, the orch ran to completion, proving that BizTalk was ignoring the proxy settings. I went back to my orch to see if there could be any other context properties I needed to set.  I saw one that looked promising: HTTP.UseHandlerProxySettings.  I set this to false, rebuilt my orch and this time when I submitted, I got an error message, which made sense, as I didn't have Fiddler running.  I started up Fiddler, submitted another message and there it was, my HTTP traffic, just as I hoped.  And I was quickly able to figure out what the problem was...I had forgotten to set HTTP.ContentType to application/x-www-form-urlencoded.
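
    Pulled together, the Message Assignment shape ends up setting all of the context properties mentioned above (values exactly as in the post):

        // route the HTTP send through Fiddler and set the content type
        HttpRequestMessage(HTTP.UseHandlerProxySettings) = false;
        HttpRequestMessage(HTTP.UseProxy) = true;
        HttpRequestMessage(HTTP.ProxyName) = "127.0.0.1";
        HttpRequestMessage(HTTP.ProxyPort) = 8888;
        HttpRequestMessage(HTTP.ContentType) = "application/x-www-form-urlencoded";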

    Read the article

  • stress testing opencl/Ati GPU on 12.04

    - by lurscher
    What do people normally use to stress test their GPU on Ubuntu 12.04? I tried installing the Phoronix Test Suite Ubuntu .deb, but it tries to install freeglut3-dev and at the same time complains about: $ phoronix-test-suite benchmark pts/opencl The following dependencies are needed and will be installed: freeglut3-dev This process may take several minutes. Reading package lists... Building dependency tree... Reading state information... freeglut3-dev is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 90 not upgraded. There are dependencies still missing from the system: - OpenGL Utility Kit / GLUT 1: Ignore missing dependencies and proceed with installation. 2: Skip installing the tests with missing dependencies. 3: Re-attempt to install the missing dependencies. 4: Quit the current Phoronix Test Suite process. Missing dependencies action: 4 So even though it is installing freeglut3-dev, it still complains about missing GLUT. You can't win against GLUT, it seems. So, what do people use for this? I mean, really, because I have tried googling for an hour and it is not paying off. Thanks!
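
    The complaint is about the GLUT runtime rather than the headers, so one thing worth trying (an assumption, not verified on a 12.04 box) is installing the library package alongside the -dev one and re-running the benchmark:

        sudo apt-get install freeglut3 freeglut3-dev
        phoronix-test-suite benchmark pts/opencl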

    Read the article

  • ListenBacklog directive not working in Apache 2.2

    - by andrés
    I was trying to make Apache 2.2 reject connections when MaxClients is reached; to do this I found the ListenBacklog directive. To test it, I configured Apache in the following way: <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 10 ListenBacklog 1 MaxRequestsPerChild 0 </IfModule> I've made a little script in JMeter to test this. The test launches 50 users in 1 second (it requests a phpinfo page), but none is rejected; they all wait! I don't understand how this directive works... my operating system is Ubuntu.
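
    Part of the explanation may be that ListenBacklog is only passed to the listen(2) system call, and the operating system is free to adjust that value (on Linux it is capped by net.core.somaxconn), so excess connections tend to be queued by the kernel and then wait for a free worker instead of being refused. The kernel-side limit can be inspected like this (illustrative; lowering it still does not guarantee outright rejections):

        # current kernel cap on the accept backlog
        sysctl net.core.somaxconn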

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this. I will pass in the userid and the BPEL process will call our the AS-User Web Service we created in Part 1. This is just a basic test and illustrate how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product and Oracle JDeveloper (to build the process). First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process as simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I have no have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call as I will use the basic single field used by BPEL as default. The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke), you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used the name RetrieveUser as a name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL. Once you specify the URL you can press the Find existing WSDL's button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check copy wsdl and it's dependent artifacts into the project if you intending to work on the BPEL process offline. If you do not check this your target application must be accessible when you work on the BPEL process (that is not always convenient). Note: For the perceptive of you will notice that the URL specified in this example is different to the URL in the last post. The reason is for the demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service. 
To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that. Now to use the Web Service. To call the Web Service (as it is just imported not connected to the BPEL process yet), you must add an Invoke action to your BPEL Process. To do this, select Invoke action from the BPEL Constructs zone on the Component Palette and drop it on the edit nodes between the receiveInput and replyOutput nodes This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service an Edit Invoke edit dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway and name them appropriately for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create Variables to hold the input and output for the call. To do this clock on Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service. You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the service. You need to now add an Assign node to assign the input to the Web Service. To do this select Assign activity from BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process Double clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target). Almost there. You now need to process the output from the Web Service call to the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform as it is not just a matter of an Assign action. It is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components you drag and drop the Transform component to the appropriate place in the BPEL process. 
In this case we want to transform the output from the Web Service call so we want it after the InvokeUser action and the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification. Double clicking the Transform node will allow you to name the node.  In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example this is the InvokeUser output variable. I also named the mapper file to TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen is shows the source and target schemas for me to map across. As with the assign I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I now attach the last name to the function to indicate the concatenation of the field. By default the names will be concatenated with no space. To make the name legible I add a space between the field by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project as my BPEL process is now complete. As you can see the following happens: We accept input from the client (the userid for the call) in the receiveInput step. We assign that value to the input parameters for the Web Service call in the AssignUser step. We invoke the Web Service call to retrieve the data from the product in the InvokeUser step. We take the output from the InvokeUser step and concatenate the names in the TransformName step. We pass back the data in the replyOutput step. At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been setup as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data on the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls. As you can see this is a basic BPEL but you get the idea of importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I just showed you the basic principals.
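
    Under the covers the Transform step just generates an XSLT map, and the concatenation described above corresponds to an XPath concat() call along these lines (the element names are assumptions based on the fields mentioned, not the actual generated file):

        <!-- illustrative fragment of the TransformName mapper output -->
        <xsl:value-of select="concat(firstName, ' ', lastName)"/>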

    Read the article

  • How would I down-sample a .wav file then reconstruct it using Nyquist? - in MATLAB [closed]

    - by martin
    This is all done in MATLAB 2010. My objective is to show the results of undersampling, sampling at the Nyquist rate, and oversampling. First I need to downsample the .wav file to get an incomplete or partial data stream that I can then reconstruct. Here's the flow chart of what I'm going to be doing; the flow is:

    analog signal -> sampling analog filter -> ADC -> resample down -> resample up -> DAC -> reconstruction analog filter

    What needs to be achieved: F = frequency, in Hz = 1/s, e.g. 100 Hz = 100 cycles/sec. The sampling period is T = 1/(2F). Example problem: for a highest frequency of 1000 Hz, 1/(2 x 1000 Hz) = 1/2000 = 5x10^(-4) sec/cycle, i.e. a sampling period of 0.5 ms. This is my first signal processing project using MATLAB. What I have so far:

    % fs = sampling frequency (44100 Hz, the sampling frequency of a CD)
    [test,fs] = wavread('test.wav');   % loads the .wav file
    left = test(:,1);

    % Plot of the .wav signal, time vs. strength
    time = (1/44100)*length(left);
    t = linspace(0, time, length(left));
    plot(t, left)
    xlabel('time (sec)');
    ylabel('relative signal strength')

    % This is where I would need to sample it at the different frequencies
    % (below, at and above the Nyquist frequency), I think.

    soundsc(left, fs)   % plays the resulting audio, which sounds the same as the
                        % original (only at or above the Nyquist frequency, however)

    Can anyone tell me how to make it better, and how to do the various sampling at different frequencies?

    Read the article

  • DDD Melbourne - lessons learnt

    - by Michael Freidgeim
    I've attended DDD Melbourne and want to list the interesting points that I've learnt and want to follow. To read more:

    * Moles - mocking/isolation framework for .NET. Documentation is here. (See also the mocking frameworks comparison created October 4, 2009.)
    * WebFormsMVP
    * PluralSight: http://www.pluralsight-training.net/offers/default.aspx?cc=trial
    * ELMAH: Error Logging Modules and Handlers
    * Rhino.Mocks
    * VS UI Test Recorder - see the post Visual Studio 2010 Coded UI Test User Guide. Note that the Microsoft Test Manager (MTM) tool is a separate application that can be started from the Program Files / VS 2010 menu. It is not a menu inside Visual Studio.
    * CodeContract - seems great in Debug. It would be good if runtime configuration were possible in production, e.g. the ability to log instead of throwing an exception. The current recommendation for customizing Debug.Assert is not trivial: the programmer is free to use the customization provided by Debug.Assert, using assert listeners to obtain whatever runtime behavior they desire (e.g., ignoring the error, logging it, or throwing an exception).

    // Clears the existing list of assert listeners (the default pop-up box)
    System.Diagnostics.Debug.Listeners.Clear();
    // Install your own listener
    System.Diagnostics.Debug.Listeners.Add(MyTraceListener);

    Note that you can't catch the specific ContractException, but you can catch the generic Exception (see "How come you cannot catch Code Contract exceptions?").

    Books recommended: "Working Effectively with Legacy Code" by Michael Feathers (corresponding article); Fowler, Martin, Refactoring: Improving the Design of Existing Code, slides: http://jaoo.dk/jaoo1999/schedule/MartinFowlerRefractoring.pdf
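    A minimal sketch of what such a listener might look like (MyTraceListener is the placeholder name used above; logging to the console is just an illustrative assumption, not code from the post):

    // Illustrative only: a TraceListener that logs assert/contract failures
    // instead of showing the default dialog or throwing.
    public class MyTraceListener : System.Diagnostics.TraceListener
    {
        public override void Write(string message)
        {
            System.Console.Write(message);
        }

        public override void WriteLine(string message)
        {
            System.Console.WriteLine(message);
        }

        // Debug.Assert failures (which is how contract failures surface when the
        // assert-on-failure option is used) end up here.
        public override void Fail(string message, string detailMessage)
        {
            System.Console.WriteLine("Assert/contract failure: {0} {1}", message, detailMessage);
        }
    }

    // Usage: System.Diagnostics.Debug.Listeners.Add(new MyTraceListener());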

    Read the article

  • connecting to redis from wsgi application

    - by W_P
    I am running redis and httpd with mod_wsgi on the same server. I have a wsgi application using a virtualenv and the following virtualhost conf:

    WSGIPythonPath /var/www/wsgi
    <VirtualHost *:80>
        ServerName test.example.com
        DocumentRoot /var/www/html
        WSGIScriptAlias / /var/www/wsgi/myapp/app.wsgi
        CustomLog logs/test.example.com-access-log common
        ErrorLog logs/test.example.com-error-log
        <Directory /var/www/wsgi/myapp>
            Order allow,deny
            Allow from all
            WSGIScriptReloading On
        </Directory>
    </VirtualHost>

    I have a sites.pth in my /var/www/wsgi directory, adding myapp to the PYTHONPATH, and this is my /var/www/wsgi/myapp/app.wsgi:

    activate_this = '/var/www/envs/myapp/bin/activate_this.py'
    execfile( activate_this, dict(__file__=activate_this) )
    from myapp.application import app as application

    Sorry, I am trying to edit the question to have the rest of my info but it keeps timing out before the edits go through.

    Read the article

  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us had even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions. In case you can't watch the video, I've transcribed the talk below, although I'd recommend watching the video if you can — I didn't have much time to do the transcribing!

    So, what is a coderetreat? It's not just something in Red Gate, there's a website and everything, although it's not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you've got one day, and you split it into one-hour sections. You spend three quarters of that coding, and do a little retrospective at the end. You're supposed to start fresh each time; we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway's Game of Life is the only task mentioned anywhere that I can find for coderetreat. So I don't know what we'll do next time, or if we're meant to do the same thing again. There are some guiding principles which felt to us like restrictions, that you have to code in crazy ways to encourage better code. The final thing is that it's supposed to be free for outsiders to join. It's meant to be a kind of networking thing, where you link up with people from other companies.

    We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate. We didn't chat to anybody else for the initial one. The task was Conway's Game of Life, which most of you have probably heard of; all but one of us knew about it when we did the coderetreat. I won't go into the details of what it is, but it felt like the right size of task: basically one or two groups actually produced something working by the end of the day, and of course that doesn't mean it's necessarily a day's work to produce that, because we were starting again every hour. The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it's really good to stop and try again, because there are so many what-ifs when you've finished writing something: "what if I'd done it this way?". You can answer all those questions at a coderetreat because it's not about getting a product out the door, it's about learning and playing with ideas.

    So we had all these different practices we were trying. I'll try and go through most of these. Single responsibility is the idea that everything should do just one thing. It was the very first session, and we were still trying to figure out how you go about the Game of Life. So by the end of forty-five minutes we hadn't produced very much for that first session. We were still thinking, "Do we start with a board, how do we represent all these squares? It can be infinitely big, help, this is getting really difficult!". So, most of us didn't really get anywhere on the first one.
    Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that.

    Another thing we tried was TDD (test-driven development). I'm sure most of you know what TDD is:

    Write a test, watch it fail for the right reason
    Write code to pass the test, watch it pass
    Refactor, check the test still passes
    Repeat!

    It basically worked, we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you've even written your first assertion, which is supposed to be the very first thing you write, because you write your tests backwards from the assertions back to the initial conditions, you've already constrained the logic of the code in some way by the time you've done that. You then get to this situation of, "Well, we actually want to go in a slightly different direction. Can we do this?". Can we write tests that don't constrain the architecture?

    Wrapping up all primitives: it's kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You've got pages of code before you've even done anything.

    No getters and setters (use "tell, don't ask" instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can't just check the internal state of the code. And people found that really challenging, and it made them think in a different way, which I think is really good.

    Not having mutable state: that was kind of confusing, because we weren't quite sure what fitted within that rule and what didn't, and I think we were trying too hard to follow the rule rather than the guideline.

    No if-statements: you're supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this:

    public T If<T>(bool condition, Func<T> left, Func<T> right)
    {
        var dict = new Dictionary<bool, Func<T>> {{true, left}, {false, right}};
        return dict[condition].Invoke();
    }

    That is not really polymorphism, is it?

    For-loops: you can always replace a for-loop with recursion, but it doesn't tend to make it any more readable unless it's the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn't make things easier unless it's the kind of tree-structure algorithm where that would help.

    Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn't actually a challenge because you just extract methods. That's quite a useful thing, because you can apply it to real code and say, "Okay, should this method really be going crazy like this?"

    No talking: we hated that. It's like there's two of you at a computer, and one of you is doing the typing; what does the other person do if they're not allowed to talk? The answer is TDD ping-pong – one person writes the tests, and then the other person writes the code to pass the test. And that creates communication without actually having to have a discussion about things, which is kind of cool.

    No code comments: just makes no difference to anything. It's a forty-five minute exercise, so what are you going to put comments in code for?

    Finally, this is my fault.
    I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so I didn't even manage to work out how to do that convolution in C#. It's trivial in some high-level languages, but you need something matrix-orientated for it to really work.

    That's most of it, really. The thoughts that people went away with: we put down our answers to questions like "What have you learnt?", "What surprised you?" and "How are you going to do things differently?", and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can't change, so being able to attack something three different ways in an environment where the end product isn't important: that's something people really enjoyed. Pair-programming: people also said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good, and that they promote functional programming. And TDD, people found really hard. "Tell, don't ask" people found really, really hard, and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.
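    For anyone who hasn't come across the exercise, the rules a FateDecider has to capture are tiny. The C# below is purely an illustrative sketch (not the code any of the pairs wrote, and the names are made up):

    // Standard Game of Life rules: a live cell survives with 2 or 3 live
    // neighbours, and a dead cell comes to life with exactly 3. Written as a
    // single expression, in the spirit of the "no if-statements" constraint.
    public class FateDecider
    {
        public bool IsAliveInNextGeneration(bool isAlive, int liveNeighbours)
        {
            return liveNeighbours == 3 || (isAlive && liveNeighbours == 2);
        }
    }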

    Read the article

  • Sucky MSTest and the "WaitAll for multiple handles on a STA thread is not supported" Error

    - by Anne Bougie
    If you are doing any multi-threading and are using MSTest, you will probably run across this error. For some reason, MSTest by default runs in STA threading mode. WTF, Microsoft! Why so stuck in the old COM world?  When I run the same test using NUnit, I don't have this problem. Unfortunately, my company has chosen MSTest, so I have a lot of testing problems. NUnit is so much better, IMO. After determining that I wasn't referencing any unmanaged code that would flip the thread into STA, which can also cause this error, the only thing left was the testing suite I was using. I dug around a little and found this obscure setting for the Test Run Config settings file that you can't set using its interface. You have to open it up as a text file and add the following setting:

    <ExecutionThread apartmentState="MTA" />

    This didn't break any other tests, so I'm not sure why it's not the default, or why there is nothing in the test run configuration app to change this setting. Here is the code I was testing:

    public void ProcessTest(ProcessInfo[] infos)
    {
        WaitHandle[] waits = new WaitHandle[infos.Length];
        int i = 0;
        foreach (ProcessInfo info in infos)
        {
            AutoResetEvent are = new AutoResetEvent(false);
            info.Are = are;
            waits[i++] = are;

            Processor pr = new Processor();
            WaitCallback callback = pr.ProcessTest;
            ThreadPool.QueueUserWorkItem(callback, info);
        }

        WaitHandle.WaitAll(waits);
    }
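    If changing the apartment state is not an option, a common workaround (a sketch of the general idea, not something from the original post) is to avoid WaitHandle.WaitAll and wait on each handle in turn, which is supported on an STA thread:

    // Blocks until every handle has been signalled; the order does not matter,
    // because all of them have to complete before the method returns anyway.
    foreach (WaitHandle handle in waits)
    {
        handle.WaitOne();
    }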

    Read the article

  • Need to add request headers to every request in Apache

    - by 115748146017471869327
    I'm trying to add a header value to every request via Apache (ver 2.2). I've edited my VirtualHost to include the following variations (I've tried both RequestHeader and Header, add and set, in all of these cases):

    RequestHeader set X-test_url "Test"

    or

    <Directory />
        RequestHeader set X-test_url "Test"
    </Directory>

    or

    <Location ~ "/*" >
        RequestHeader set X-test_url "Test"
    </Location>

    It's hard to explain how I've gotten to this point, but I have to get this done in Apache. Again, I'm trying to add the header value to every request. Thanks.

    Read the article

  • Cannot create file in directory even though it's writable by a group I belong to

    - by Alan Berndt
    I have a directory structure owned by a certain group, and I am a member of the group that owns these directories. I am able to create files in one directory, but not in another, even though the permissions are the same.

    alan@bricky:/mnt/storage/media$ stat Music Music\ \(Lossy\)/
      File: `Music'
      Size: 34          Blocks: 8          IO Block: 4096   directory
    Device: fb00h/64256d    Inode: 4215424     Links: 3
    Access: (2775/drwxrwsr-x)  Uid: ( 1001/   media)   Gid: ( 1001/   media)
    Access: 2011-08-19 11:45:03.182586898 -0700
    Modify: 2011-08-19 11:44:01.412840027 -0700
    Change: 2011-08-19 11:45:02.734603240 -0700
      File: `Music (Lossy)/'
      Size: 6           Blocks: 8          IO Block: 4096   directory
    Device: fb00h/64256d    Inode: 1512056832  Links: 2
    Access: (2775/drwxrwsr-x)  Uid: ( 1001/   media)   Gid: ( 1001/   media)
    Access: 2011-08-19 11:45:03.190586606 -0700
    Modify: 2011-08-19 10:34:46.526530313 -0700
    Change: 2011-08-19 11:45:02.738603094 -0700

    alan@bricky:/mnt/storage/media$ touch Music/test
    alan@bricky:/mnt/storage/media$ touch Music\ \(Lossy\)/test
    touch: cannot touch `Music (Lossy)/test': Permission denied

    Read the article
