Search Results

  • questions about name mangling in C++

    - by Tim
    I am trying to learn and understand name mangling in C++. Here are some questions:

    (1) From devx: "When a global function is overloaded, the generated mangled name for each overloaded version is unique. Name mangling is also applied to variables. Thus, a local variable and a global variable with the same user-given name still get distinct mangled names." Are there other examples that use name mangling, besides overloaded functions and same-name global and local variables?

    (2) From Wikipedia: "The need arises where the language allows different entities to be named with the same identifier as long as they occupy a different namespace (where a namespace is typically defined by a module, class, or explicit namespace directive)." I don't quite understand why name mangling is only applied to cases where the identifiers belong to different namespaces, since overloaded functions can be in the same namespace, and same-name global and local variables can also be in the same namespace. How should I understand this? Do variables with the same name but in different scopes also use name mangling?

    (3) Does C have name mangling? If it does not, how can it deal with the case where some global and local variables have the same name? C does not have function overloading, right?

    Thanks and regards!

  • Fabric "TypeError: not all arguments converted during string formatting"

    - by Brian Carpio
    I have the following Fabric task:

        @task
        def deploy_west_ec2_ami(name, puppetClass, size='m1.small', region='us-west-1', basedn='joe', ldap='arch-ldap-01', secret='secret', subnet='subnet-d43b8abd', sgroup='sg-926578fe'):
            execute(deploy_ec2_ami, name='%s', puppetClass='%s', size='%s', region='%s', basedn='%s', ldap='%s', secret='%s', subnet='%s', sgroup='%s' % (name, puppetClass, size, region, basedn, ldap, secret, subnet, sgroup))

    However, when I run the command:

        fab deploy_west_ec2_ami:test,java

    I get the following traceback:

        Traceback (most recent call last):
          File "/usr/local/lib/python2.6/dist-packages/fabric/main.py", line 710, in main
            *args, **kwargs
          File "/usr/local/lib/python2.6/dist-packages/fabric/tasks.py", line 321, in execute
            results['<local-only>'] = task.run(*args, **new_kwargs)
          File "/usr/local/lib/python2.6/dist-packages/fabric/tasks.py", line 113, in run
            return self.wrapped(*args, **kwargs)
          File "/home/bcarpio/Projects/githubenterprise/awsdeploy/fabfile.py", line 35, in deploy_west_ec2_ami
            execute(deploy_ec2_ami, name='%s', puppetClass='%s', size='%s', region='%s', basedn='%s', ldap='%s', secret='%s', subnet='%s', sgroup='%s' % (name, puppetClass, size, region, basedn, ldap, secret, subnet, sgroup))
        TypeError: not all arguments converted during string formatting

    I am not sure I understand why; I am pretty sure I have all the values defined here just fine. Also, when I run the execute task deploy_ec2_ami directly, like so:

        deploy_ec2_ami:test,java,m1.small,us-west-1,'dc\=test\,dc\=net',ldap-01,secret,subnet-d43b8abd,sg-926578fe

    it works just fine.
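
    The error itself is reproducible without Fabric: the % operator binds only to the string literal immediately to its left, so eight of the '%s' kwargs stay literal strings and the last one is asked to format a nine-item tuple against a single placeholder. A minimal sketch (the fix shown is an assumption about the intent, namely that no formatting was needed at all):

        # The '%' applies only to the final '%s', not to the whole kwarg list.
        name, puppetClass = "test", "java"
        try:
            bad = '%s' % (name, puppetClass)  # one placeholder, two values
        except TypeError as e:
            print(e)  # not all arguments converted during string formatting

        # The formatting is unnecessary anyway; passing the variables
        # straight through avoids the problem entirely, e.g.:
        #   execute(deploy_ec2_ami, name=name, puppetClass=puppetClass, ...)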

  • Diagnosing IIS Shutdowns

    - by Tom Ritter
    Symptoms: I attach a debugger, I wait a little while, and it automatically detaches. I watch the event log during normal operation: after a single request comes in, it waits a little bit, then shuts down.

    Diagnosing: I've followed the following steps for logging shutdowns in IIS:

        http://weblogs.asp.net/scottgu/archive/2005/12/14/433194.aspx
        http://blogs.msdn.com/tess/archive/2006/08/02/asp-net-case-study-lost-session-variables-and-appdomain-recycles.aspx

    I know these are working because of what I see in the Event Logs when I change the web.config:

        The description for Event ID 0 from source ASP.NET 2.0.50727.0 cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event:

        _shutdownMessage=IIS configuration change
        HostingEnvironment initiated shutdown
        CONFIG change
        CONFIG change
        HostingEnvironment caused shutdown

        _shutdownStack=
          at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
          at System.Environment.get_StackTrace()
          at System.Web.Hosting.HostingEnvironment.InitiateShutdownInternal()
          at System.Web.Hosting.HostingEnvironment.InitiateShutdown()
          at System.Web.Hosting.PipelineRuntime.StopProcessing()

        the message resource is present but the message is not found in the string/message table

    But it doesn't help, because the mystery error doesn't tell me anything. I see the same thing as before I added this extra logging:

        The description for Event ID 0 from source ASP.NET 2.0.50727.0 cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event:

        _shutdownMessage=HostingEnvironment initiated shutdown
        HostingEnvironment caused shutdown

        _shutdownStack=
          at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
          at System.Environment.get_StackTrace()
          at System.Web.Hosting.HostingEnvironment.InitiateShutdownInternal()
          at System.Web.Hosting.HostingEnvironment.InitiateShutdown()
          at System.Web.Hosting.PipelineRuntime.StopProcessing()

        the message resource is present but the message is not found in the string/message table

    Anyone have any ideas for more debugging?

  • Orbited exception "Data must not be unicode"

    - by Sid
    I am working with Orbited, and once I switch on Orbited in production mode it throws the following error on my screen:

        --- <exception caught here> ---
          File "/usr/lib/python2.6/dist-packages/twisted/web/server.py", line 150, in process
            self.render(resrc)
          File "/usr/lib/python2.6/dist-packages/twisted/web/server.py", line 157, in render
            body = resrc.render(self)
          File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/base.py", line 21, in render
            self.conn.transportOpened(self)
          File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/cometsession.py", line 322, in transportOpened
            self.cometTransport.flush()
          File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/base.py", line 45, in flush
            self.write(self.packets)
          File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/htmlfile.py", line 42, in write
            self.request.write(payload);
          File "/usr/lib/python2.6/dist-packages/twisted/web/http.py", line 862, in write
            self.transport.write(data)
          File "/usr/lib/python2.6/dist-packages/twisted/internet/tcp.py", line 420, in write
            abstract.FileDescriptor.write(self, bytes)
          File "/usr/lib/python2.6/dist-packages/twisted/internet/abstract.py", line 170, in write
            raise TypeError("Data must not be unicode")
        exceptions.TypeError: Data must not be unicode

    I have absolutely no clue as to what the problem could be. Could anyone point me in the right direction?
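
    As the last frame shows, Twisted transports accept byte strings only; handing transport.write() a unicode object raises exactly this TypeError. A common cause is unicode coming out of a JSON or database layer and being published through Orbited unencoded. A minimal sketch of the usual remedy, encoding before the data reaches the transport (where exactly to do this in your app is an assumption):

        # -*- coding: utf-8 -*-
        # Python 2: Twisted's transport.write() requires str (bytes), not unicode.
        payload = u"caf\u00e9"                    # e.g. a value from JSON or a DB layer

        if isinstance(payload, unicode):
            payload = payload.encode('utf-8')     # now a plain byte string

        # transport.write(payload)  # safe: payload is bytes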

  • How to implement message queuing and handling in AWS with NServiceBus

    - by Pete Lunenfeld
    I am creating a new ASP MVC order application in the Amazon (AWS) cloud, with the persistence layer at my local datacenter. I will be using the CQRS pattern. The goal of the project is high availability, using queue(s) to store and forward writes (commands/events) that can be picked up and handled asynchronously at my local datacenter. Then, if the WAN or my local datacenter fails, my cloud MVC app can still take orders and just queue them up until processing can resume.

    My first thought was to use AWS SQS for the queuing and create my own queue consumer/dispatcher/handler in my own C# application to process the incoming messages/events:

        MVC (@ Amazon) -- Event/POCO -- SQS -- QueueReader (@ my datacenter) -- DB

    Then I found NServiceBus. NSB seems to handle lots of details very nicely: message handling, retries, error handling, etc. I hate to reinvent the wheel, and NServiceBus seems like a full-featured and mature product that would be perfect for me. But on further research, it does NOT look like NServiceBus is really meant to be used over the WAN in physically separated environments (cloud to my datacenter). Google and SO don't really paint a good picture of using NServiceBus across the WAN like I need.

    How can I use NServiceBus across the WAN? Or is there a better solution to handle queuing and message handling between Amazon and my local datacenter?

  • threading in c#

    - by I__
    I am using this code:

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        private void serialPort1_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
        {
            string response = serialPort1.ReadLine();
            this.BeginInvoke(new MethodInvoker(
                () => textBox1.AppendText(response + "\r\n")
            ));
        }

        ThreadStart myThreadDelegate = new ThreadStart(ThreadWork.DoWork);
        Thread myThread = new Thread(myThreadDelegate);
        myThread.Start();

    but I am getting lots of errors:

        Error 2  The type or namespace name 'ThreadStart' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\alexluvsdanielle\AppData\Local\Temporary Projects\WindowsFormsApplication1\Form1.cs  31  44  WindowsFormsApplication1
        Error 3  The name 'ThreadWork' does not exist in the current context  C:\Users\alexluvsdanielle\AppData\Local\Temporary Projects\WindowsFormsApplication1\Form1.cs  31  56  WindowsFormsApplication1
        Error 4  The type or namespace name 'Thread' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\alexluvsdanielle\AppData\Local\Temporary Projects\WindowsFormsApplication1\Form1.cs  32  31  WindowsFormsApplication1
        Error 5  A field initializer cannot reference the non-static field, method, or property 'WindowsFormsApplication1.Form1.myThreadDelegate'  C:\Users\alexluvsdanielle\AppData\Local\Temporary Projects\WindowsFormsApplication1\Form1.cs  32  38  WindowsFormsApplication1

    What am I doing wrong?

  • #include - brackets vs quotes in XCode?

    - by Chris Becke
    In MSVC++, #include files are searched for differently depending on whether the file is enclosed in "" or <>. The quoted form searches first in the local folder, then in /I-specified locations; the angle-bracket form skips the local folder. This means that in MSVC++ it's possible to have header files with the same name as runtime and SDK headers.

    So, for example, I need to wrap up the Windows SDK windows.h file to undefine some macros that cause trouble. With MSVS I can just add an (optional) windows.h file to my project, as long as I include it using the quoted form:

        // some .cpp file
        #include "windows.h" // will include my local windows.h file

    And in my windows.h, I can pull in the real one using the angle-bracket form:

        // my windows.h
        #include <windows.h> // will load the real one
        #undef ConflictingSymbol

    Trying this trick with GCC in XCode didn't work: angle-bracket #includes in system header files are in fact finding my header files with similar names in my local folder structure. The MSVC scheme means it's quite safe to have a "String.h" header file in my own folder structure; in XCode this seems to be a major no-no.

    Is there some way to control this search-path behaviour in XCode to be more like MSVC's? Or do I just have to avoid naming any of my headers anything that might possibly conflict with a system header? Writing cross-platform code and using lots of frameworks means the possibility of incidental conflicts seems large.

  • How do I enforce the order of qmake library dependencies?

    - by James Oltmans
    I'm getting a lot of errors because qmake is improperly ordering the boost libraries I'm using. Here's what the .pro file looks like:

        QT += core gui
        TARGET = MyTarget
        TEMPLATE = app
        CONFIG += no_keywords \
            link_pkgconfig
        SOURCES += file1.cpp \
            file2.cpp \
            file3.cpp
        PKGCONFIG += my_package \
            sqlite3
        LIBS += -lsqlite3 \
            -lboost_signals \
            -lboost_date_time
        HEADERS += file1.h \
            file2.h \
            file3.h
        FORMS += mainwindow.ui
        RESOURCES += Resources/resources.qrc

    This produces the following command:

        g++ -Wl,-O1 -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/lib/x86_64-linux-gnu -lboost_signals -lboost_date_time -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lQtGui -lQtCore

    Note: mylib1 and mylib2 are statically compiled by another project, placed in /usr/local/lib with an appropriate pkg-config .pc file pointing there. The .pro file references them via my_package in PKGCONFIG. The problem is not with pkg-config's output but with Qt's ordering. Here's the .pc file:

        prefix=/usr/local
        exec_prefix=${prefix}
        libdir=${exec_prefix}/lib
        includedir=${prefix}/include

        Name: my_package
        Description: My component package
        Version: 0.1
        URL: http://example.com
        Libs: -L${libdir} -lmylib1 -lmylib2
        Cflags: -I${includedir}/my_package/

    The linking stage fails spectacularly, as mylib1 and mylib2 come up with a lot of undefined references to boost libraries that both the app and mylib1 and mylib2 are using. We have another build method using scons, and it properly orders things for the linker. Its build command is below:

        g++ -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lboost_signals -lboost_date_time -lQtGui -lQtCore

    Note that the principal difference is the order of the boost libs: scons puts them at the end, just before QtGui and QtCore, while qmake puts them first. The other differences in the compile commands are unimportant, as I have hand-modified the qmake-produced makefile and the simple reordering fixed the problem. So my question is: how do I enforce the right order in my .pro file, despite what qmake thinks it should be?

  • Problem with derived ControlTemplates in WPF

    - by Frank Fella
    The following XAML code works:

        <Window x:Class="DerivedTemplateBug.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:local="clr-namespace:DerivedTemplateBug"
                Title="Window1" Height="300" Width="300">
            <Button>
                <Button.Template>
                    <ControlTemplate>
                        <Border BorderBrush="Black" BorderThickness="2">
                            <TextBlock>Testing!</TextBlock>
                        </Border>
                    </ControlTemplate>
                </Button.Template>
            </Button>
        </Window>

    Now, if you define the following derived template class:

        using System.Windows.Controls;

        namespace DerivedTemplateBug
        {
            public class DerivedTemplate : ControlTemplate
            {
            }
        }

    and then swap the ControlTemplate for the derived class:

        <Window x:Class="DerivedTemplateBug.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:local="clr-namespace:DerivedTemplateBug"
                Title="Window1" Height="300" Width="300">
            <Button>
                <Button.Template>
                    <local:DerivedTemplate>
                        <Border BorderBrush="Black" BorderThickness="2">
                            <TextBlock>Testing!</TextBlock>
                        </Border>
                    </local:DerivedTemplate>
                </Button.Template>
            </Button>
        </Window>

    you get the following error:

        Invalid ContentPropertyAttribute on type 'DerivedTemplateBug.DerivedTemplate', property 'Content' not found.

    Can anyone tell me why this is the case?

  • symfony + doctrine + inheritance, how to make them work?

    - by imac
    I am beginning to work with Symfony, and I've found some documentation about inheritance. But I also found this discouraging article, which makes me doubt whether Doctrine handles inheritance well at all... Has anyone found a smart solution for inheritance in Symfony + Doctrine?

    As an example, I have already structured the database something like this:

        CREATE TABLE `poster` (
          `poster_id` int(11) NOT NULL AUTO_INCREMENT,
          `user_name` varchar(50) NOT NULL,
          PRIMARY KEY (`poster_id`),
          UNIQUE KEY `id` (`poster_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;

        CREATE TABLE `user` (
          `user_id` int(11) NOT NULL,
          `real_name` varchar(50) DEFAULT NULL,
          PRIMARY KEY (`user_id`),
          UNIQUE KEY `user_id` (`user_id`),
          CONSTRAINT `user_fk` FOREIGN KEY (`user_id`) REFERENCES `poster` (`poster_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    From that, Doctrine generated this "schema.yml":

        Poster:
          connection: doctrine
          tableName: poster
          columns:
            poster_id:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: true
            user_name:
              type: string(50)
              fixed: false
              unsigned: false
              primary: false
              notnull: true
              autoincrement: false
          relations:
            Post:
              local: poster_id
              foreign: poster_id
              type: many
            User:
              local: poster_id
              foreign: user_id
              type: many
            Version:
              local: poster_id
              foreign: poster_id
              type: many

        User:
          connection: doctrine
          tableName: user
          columns:
            user_id:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: false
            real_name:
              type: string(50)
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
          relations:
            Poster:
              local: user_id
              foreign: poster_id
              type: one

    User creation for this structure with Doctrine's auto-generated forms does not work. Any clue will be appreciated.

  • Syncing Data to Remote Services, Best Practices for Caching?

    - by viatropos
    I want to be able to publish events to Eventbrite, Eventful, and Google Calendar for my Google Apps. Each service has slightly different properties for events... I will be syncing many other things too, such as users with Google Contacts and MailChimp, documents with Google Docs and some other services, etc.

    So I'm wondering: what is the recommended way of retrieving the data for the end user so that it's reasonably maintainable and optimized? Here are the approaches I'm having trouble deciding between:

    1. My app keeps a central database of all the models (Event, Document, User, Form, etc.), and whenever Admin creates an object (e.g. through Eventbrite or through our Admin panel), we sync it and store a copy in our local database. When User goes to the site /events, the app retrieves the events from the database.

    2. Read events from a target feed, such as the Eventbrite or Eventful feed, and scrap the local database.

    Basically, I'm wondering: if we're storing all of the data on a remote service, do we really need to have a local database copy of the data? When would we need to have a local database, and when wouldn't we?

  • Cache problem running two consecutive HTTP GET requests from APP1 to APP2

    - by user502052
    I use Ruby on Rails 3, and I have two applications (APP1 and APP2) working on two subdomains:

        app1.domain.local
        app2.domain.local

    I am trying to run two consecutive HTTP GET requests from APP1 to APP2, like this:

    Code in APP1 (request):

        response1 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=first&id=1") )
        response2 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=second&id=1") )

    Code in APP2 (response):

        respond_to do |format|
          if <model_name>.find(params[:id]).<field_name> == "first"
            <model_name>.find(params[:id]).update_attribute( <field_name>, <field_value> )
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          elsif <model_name>.find(params[:id]).<field_name> == "second"
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          end
        end

    After the first request I get the correct XML (response1 is what I expect), but on the second I don't (response2 isn't what I expect). Doing some tests, I found that the second time <model_name>.find(params[:id]).<field_name> runs (for the elsif statement) it always returns a blank value, so the code in the elsif branch is never run. Is it possible that the problem is related to caching <model_name>.find(params[:id]).<field_name>?

    P.S.: I read about ETag and Conditional GET, but I am not sure that I must use that approach. I would like to keep everything simple.

  • Disabling Remote-Debugging Connection on Windows 2000

    - by Kim Leeper
    I have two machines, one running Win 2000 and one running Win XP, both with VSC++ 6. I created an application on the Win XP machine (local) and successfully used the Win 2000 machine (remote) as the target for debugging. The code was on a shared drive on the Win 2000 machine. This setup worked well, just like in the movies!

    However, I now wish to use my Win 2000 machine as a development machine again, and I find I cannot. When attempting to execute a natively compiled application on this machine, I get a dialog with the title "Remote Execution Path And Filename" asking me for same. When the dialog is cancelled (as the program attempting to execute is not remote), the program terminates without error.

    Extra info: on the Win XP machine, in VSC++ under the Build menu -> Debugger Remote Connection -> Remote Connection Dialog, the "Connection:" list box has two entries, 'Local' and 'Network'; on the Win 2000 machine the 'Local' entry is not present, only the entry for 'Network'.

    How do I get back my 'Local' entry on what used to be my target machine?

  • Speed/expense of SQLite query vs. List.contains() for "in-set" icon on list rows

    - by kpdvx
    An application I'm developing requires that the app maintain a local list of things, let's say books, in a local "library." Users can access their local library of books and search for books using a remote web service. The app is aware of other users of the app through this web service, and users can browse other users' lists of books in their libraries. Each book is identified by a unique bookId (represented as an int).

    When viewing books returned through a search result, or when viewing another user's book library, the individual list row cells need to visually indicate whether the book is in the user's local library or not. A user can have at most 5,000 books in the library, stored in SQLite on the device (and synchronized with the remote web service).

    My question is: to determine if the book shown in the list row is in the user's library, would it be better to ask SQLite directly (via SELECT COUNT(*)...) or to maintain, in memory, a List or int[] array containing the unique bookIds? So, on each row display, do I query SQLite or check whether the List or int[] array contains the unique bookId?

    Because the user can have at most 5,000 books, and each bookId occupies 4 bytes, at most this would use ~20 kB. In thinking about this, and in typing this out, it seems obvious that it would be far better for performance to maintain a list or int[] array of in-library bookIds vs. querying SQLite. (The only caveat to maintaining an int[] array is that if books are added or removed I'll need to grow or shrink the array by hand, so with this option I'll most likely use an ArrayList or Vector, though I'm not sure of the additional memory overhead of using Integer objects as opposed to primitives.)

    Opinions, thoughts, suggestions?
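
    For scale, the tradeoff can be sketched in a few lines; Python is used here purely as illustration (the schema and the 5,000-row figure come from the question, everything else is hypothetical):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE library (book_id INTEGER PRIMARY KEY)")
        conn.executemany("INSERT INTO library VALUES (?)",
                         ((i,) for i in range(5000)))

        # Option 1: one query per rendered row.
        def in_library_sql(book_id):
            return conn.execute("SELECT 1 FROM library WHERE book_id = ?",
                                (book_id,)).fetchone() is not None

        # Option 2: load the ids once into an in-memory set; O(1) membership
        # per row, refreshed only when the library changes.
        library_ids = set(r[0] for r in conn.execute("SELECT book_id FROM library"))

        def in_library_set(book_id):
            return book_id in library_ids

        print(in_library_sql(42), in_library_set(42))      # True True
        print(in_library_sql(9999), in_library_set(9999))  # False False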

  • Sorting in Lua, counting number of items

    - by Josh
    Two quick questions (I hope...) about the following code. The script below checks whether a number is prime and, if not, returns all the factors of that number; otherwise it just reports that the number is prime. Pay no attention to the zs. stuff in the script, as that is client-specific and has no bearing on script functionality.

    The script itself works almost wonderfully, except for two minor details. The first is that the factor list isn't returned sorted; that is, for 24 it returns 1, 2, 12, 3, 8, 4, 6, and 24 instead of 1, 2, 3, 4, 6, 8, 12, and 24. I can't print it as a table, so it does need to be returned as a list. If it has to be sorted as a table first and THEN turned into a list, I can deal with that; all that matters is the end result being the list.

    The other detail is that I need to check whether there are only two numbers in the list or more. If there are only two numbers, it's a prime (1 and the number). The current way I have it does not work. Is there a way to accomplish this? I appreciate all the help!

        function get_all_factors(number)
          local factors = 1
          for possible_factor=2, math.sqrt(number), 1 do
            local remainder = number%possible_factor
            if remainder == 0 then
              local factor, factor_pair = possible_factor, number/possible_factor
              factors = factors .. ", " .. factor
              if factor ~= factor_pair then
                factors = factors .. ", " .. factor_pair
              end
            end
          end
          factors = factors .. ", and " .. number
          return factors
        end

        local allfactors = get_all_factors(zs.param(1))
        if zs.func.numitems(allfactors)==2 then
          return zs.param(1) .. " is prime."
        else
          return zs.param(1) .. " is not prime, and its factors are: " .. allfactors
        end
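
    Both issues stem from building the result as a string as you go: once the factors are concatenated, they can no longer be sorted or counted. Collecting them in a collection first, then sorting, counting, and only at the end joining fixes both at once. A sketch of that shape in Python (in Lua the same pattern would use table.insert, table.sort, the # length operator, and table.concat):

        import math

        def get_all_factors(number):
            # Collect first; sort and count before joining into a string.
            factors = {1, number}
            for cand in range(2, int(math.sqrt(number)) + 1):
                if number % cand == 0:
                    factors.add(cand)
                    factors.add(number // cand)
            return sorted(factors)

        factors = get_all_factors(24)
        print(factors)            # [1, 2, 3, 4, 6, 8, 12, 24]
        print(len(factors) == 2)  # a prime has exactly two factors: 1 and itself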

  • C# SQL Data Adapter Fill on existing typed Dataset

    - by René
    I have an option to choose between local data storage (an XML file) and SQL Server. A long time ago I created a typed dataset for my application to save data locally in the XML file. Now I have a bool that switches between the server-based version and the local version; if true, my application gets the data from SQL Server.

    I'm not sure, but it seems that SqlDataAdapter's Fill method can't fill the data into my existing schema:

        SqlCommand cmd = new SqlCommand("Select * FROM dbo.Categories WHERE CatUserId = 1", _connection);
        cmd.CommandType = CommandType.Text;
        _sqlAdapter = new SqlDataAdapter(cmd);
        _sqlAdapter.TableMappings.Add("Categories", "dbo.Categories");
        _sqlAdapter.Fill(Program.Dataset);

    This should fill the data from dbo.Categories into Categories (in my local typed dataset), but it doesn't. It creates a new table with the name "Table". It looks like it can't handle the existing schema. I can't figure it out. Where is the problem?

    By the way, of course the database request as written isn't very useful; it's just a simplified version for testing...

  • Git Svn dcommit error - restart the commit

    - by Rob Wilkerson
    Last week, I made a number of changes to my local branch before leaving town for the weekend. This morning I wanted to dcommit all of those changes to the company's Svn repository, but I get a merge conflict in one file:

        Merge conflict during commit: Your file or directory 'build.properties.sample' is probably out-of-date: The version resource does not correspond to the resource within the transaction. Either the requested version resource is out of date (needs to be updated), or the requested version resource is newer than the transaction root (restart the commit).

    I'm not sure exactly why I'm getting this, but before attempting to dcommit, I did a git svn rebase. That "overwrote" my commits. To recover from that, I did a git reset --hard HEAD@{1}. Now my working copy seems to be where I expect it to be, but I have no idea how to get past the merge conflict; there's not actually any conflict to resolve that I can find. Any thoughts would be appreciated.

    EDIT: Just wanted to specify that I am working locally. I have a local branch for the trunk that references svn/trunk (the remote branch). All of my work was done on the local trunk:

        $ git branch
          maint-1.0.x
          master
        * trunk

        $ git branch -r
          svn/maintenance/my-project-1.0.0
          svn/trunk

    Similarly, git log currently shows 10 commits on my local trunk since the last commit with a Svn ID. Hopefully that answers a few questions. Thanks again.

  • Convert IIS / Tomcat Web Application to a multi-server environment.

    - by bill_the_loser
    I have an existing web application built in .NET, running on IIS, that leverages a Java servlet we have running on Tomcat 5.5. We need to scale the application, and I'm confused about what relates to our situation and what we need to do to get the servlet running on multiple servers. Right now I have 4 servers that can each individually process results. It almost seems like all I should have to do is add the ajp13 worker processes from the three additional machines to the machine hosting the load-balancer worker, but I can't imagine it should be that easy. What do I need to do to distribute the Tomcat load to the extra three machines? Thanks.

    Update: The current configuration uses a workers2.properties configuration file. From all of the documentation online, I have not been able to determine the distinction between workers.properties and workers2.properties. Most of the examples I have found configure workers.properties and revolve around adding workers and registering them in the worker.list element. The workers2.properties file does not appear to have a worker.list element, and the syntax is different enough between the two that I'm doubtful I can add that element. If I just add my multiple AJP workers to the workers2.properties file, do I need to worry about the apparent lack of a worker.list element?

        [ajp13:localhost:8009]
        channel=channel.socket:localhost:8009
        group=lb

        [ajp13:host2.mydomain.local:8009]
        channel=channel.socket:host2.mydomain.local:8009
        group=lb

        [ajp13:host3.mydomain.local:8009]
        channel=channel.socket:host3.mydomain.local:8009
        group=lb

    A couple of side notes: I've noticed that sometimes Tomcat doesn't seem to reload my changes, and I don't know why. Also, I have no idea why this configuration has a workers2.properties and not a workers.properties; I've been assuming that it's based on version, but I haven't seen anything to back up that assumption.

  • Change cookies when doing jQuery.ajax requests in Chrome Extensions

    - by haskellguy
    I have written a plugin for Facebook that sends data to testing-fb.local. The request goes through if the user is logged in. Here is the workflow:

    1. User logs in on testing-fb.local
    2. Cookies are stored
    3. $.ajax() calls are fired from the Chrome extension
    4. The Chrome extension listens with chrome.webRequest.onBeforeSendHeaders
    5. The Chrome extension checks for cookies with chrome.cookies.get
    6. Chrome changes the Set-Cookies header to be sent
    7. And the request goes through.

    I wrote this part of the code, which should do that:

        function getCookies (callback) {
            chrome.cookies.get({url: "https://testing-fb.local", name: "connect.sid"}, function(a) {
                return callback(a)
            })
        }

        chrome.webRequest.onBeforeSendHeaders.addListener(
            function(details) {
                getCookies(function(a) {
                    // Here something happens
                })
            },
            {urls: ["https://testing-fb.local/*"]},
            ['blocking']);

    Here is my manifest.json:

        {
          "name": "test-fb",
          "version": "1.0",
          "manifest_version": 1,
          "description": "testing",
          "permissions": [
            "cookies",
            "webRequest",
            "tabs",
            "http://*/*",
            "https://*/*"
          ],
          "background": {
            "scripts": ["background.js"]
          },
          "content_scripts": [
            {
              "matches": ["http://*.facebook.com/*", "https://*.facebook.com/*"],
              "exclude_matches": [
                "*://*.facebook.com/ajax/*",
                "*://*.channel.facebook.tld/*",
                "*://*.facebook.tld/pagelet/generic.php/pagelet/home/morestories.php*",
                "*://*.facebook.tld/ai.php*"
              ],
              "js": ["jquery-1.8.3.min.js", "allthefunctions.js"]
            }
          ]
        }

    In allthefunctions.js I have the $.ajax calls, and background.js is where I put the code above, which, however, does not seem to run. In summary, I am not clear on:

    - What should happen where "Here something happens" is
    - Whether this strategy is going to work
    - Where I should put this code

  • Windows service: Listening on socket while running as LocalSystem

    - by Socob
    I'm writing a small server-like program in C for Windows (using MinGW/GCC, testing on Windows 7) which is eventually supposed to run as a service with the LocalSystem account. I am creating a socket and using Windows Sockets bind(), listen() and accept() to listen for incoming connections.

    If I run the application from the command line (i.e. not as a service, but as a normal user), I have no problems connecting to it from external IPs. However, if I run the program as a service with the LocalSystem account, I can only connect to the service from my own PC, either with 127.0.0.1 or my local address, 192.168.1.80 (I'm behind a router in a small local network). Neither external IPs nor other PCs in the same local network, using my local address, can connect now, even though there were no problems without running as a service.

    Now, I've heard that networking is handled differently or even not accessible (?) when running as LocalSystem or LocalService, or that services cannot access both the desktop and the network at the same time due to security considerations (note: my service is not interactive). Essentially, I need to find out what's going wrong and how to listen for connections in a service. Is running as NetworkService the same as running as LocalSystem, but with network access? Surely there must be servers that can run as background services, so how do they do it?

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written.

    In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, then all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and to get the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:

        [architecture diagram]

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header, then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header.

    If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file, and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key                                                 Value
        /WcfSampleService/PersonService.svc/rest/fetch/3    "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        "28cd4796-76b8-451b-adfd-75cb50a50fa6"              /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key, and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache, which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second, average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second, mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request, where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6"), which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
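
    The decision flow described above reduces to a handful of branches. Sketched here in Python for neutrality (the real sample is Node.js; every name below is an illustrative stand-in, not the sample's API):

        # In-memory stand-ins for memcached and the file cache in the sample.
        tag_cache = {"/fetch/3": '"28cd4796-76b8-451b-adfd-75cb50a50fa6"'}
        file_cache = {}

        def call_backend(path):
            return 200, "response body for %s" % path   # stands in for the WCF call

        def handle(path, client_etag):
            current = tag_cache.get(path)
            if client_etag is not None and client_etag == current:
                return 304, None                   # client's copy is fresh
            if current in file_cache:
                return 200, file_cache[current]    # served without touching WCF
            status, body = call_backend(path)      # the only branch that hits WCF
            file_cache[current] = body
            return status, body

        print(handle("/fetch/3", '"28cd4796-76b8-451b-adfd-75cb50a50fa6"'))  # (304, None)
        print(handle("/fetch/3", None))       # (200, ...) via the backend, then cached
        print(handle("/fetch/3", '"stale"'))  # (200, ...) from the proxy cache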

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model, and in the process introduced tiled_index and tiled_grid. This is part 2.

    tile_static memory

    You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP: the same rules apply, and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view.

    In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, that threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array:

        // each tile of threads has its own copy of locA,
        // shared among the threads of the tile
        tile_static float locA[16][16];

    Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors.

    tile_barrier

    In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait.

        15: // Given a tiled_index object named t_idx
        16: t_idx.barrier.wait();
        17: // more code

    ...in the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code in such a way which meant that there is a chance that not all threads could reach line 16, then the code above would be illegal.

    Tiled Matrix Multiplication Example - part 2

    So now that we have added to our understanding the concepts of tile_static and tile_barrier, let me rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code.

        01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
        02: {
        03:   static const int TS = 16;
        04:   array_view<const float,2> a(M, W, vA);
        05:   array_view<const float,2> b(W, N, vB);
        06:   array_view<writeonly<float>,2> c(M,N,vC);
        07:   parallel_for_each(c.grid.tile< TS, TS >(),
        08:     [=] (tiled_index< TS, TS> t_idx) restrict(direct3d)
        09:   {
        10:     int row = t_idx.local[0]; int col = t_idx.local[1];
        11:     float sum = 0.0f;
        12:     for (int i = 0; i < W; i += TS) {
        13:       tile_static float locA[TS][TS], locB[TS][TS];
        14:       locA[row][col] = a(t_idx.global[0], col + i);
        15:       locB[row][col] = b(row + i, t_idx.global[1]);
        16:       t_idx.barrier.wait();
        17:       for (int k = 0; k < TS; k++)
        18:         sum += locA[row][k] * locB[k][col];
        19:       t_idx.barrier.wait();
        20:     }
        21:     c[t_idx.global] = sum;
        22:   });
        23: }

    Notice that all the code up to line 9 is the same as per the changes we made in part 1 of the tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21, the difference being that those lines use the new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b) to the tile_static vanilla arrays (locA and locB); we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool.

    Why would I introduce this tiling complexity into my code?

    Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to deal with finding a good tile size, ensured the number of threads we schedule are correctly divisible by the tile size, had to use a tiled_index instead of a normal index, had to understand tile_barrier and figure out where we need to use it, and doubled the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory.

    Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts.

    Now, some algorithms will not have performance benefits from tiling (and may in fact deteriorate), e.g. algorithms that require you to go only once to global memory will not benefit from tiling, since with tiling you already have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest to make it 250 times faster. Also, algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Note too that in future releases we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today.

    Comments about this post by Daniel Moth welcome at the original blog.

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is a lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:

        Random generator = new Random();
        Func<int> func = () => generator.Next(10);

    In this case, the compiler generates a new class called <>c__DisplayClass1 which is marked with the CompilerGenerated attribute.

        [CompilerGenerated]
        private sealed class <>c__DisplayClass1
        {
            // Fields
            public Random generator;

            // Methods
            public int b__0()
            {
                return this.generator.Next(10);
            }
        }

    Two quick comments on this:

    (i) A display was the means by which compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name.

    (ii) It is a shame that the same attribute is used to mark all compiler generated classes, as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high level concepts.

    We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func<int> object simply constructs an instance of this class and initialises the fields to their initial values.

        <>c__DisplayClass1 class2 = new <>c__DisplayClass1();
        class2.generator = new Random();
        Func<int> func = new Func<int>(class2.b__0);

    Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this example is correctly decompiled.

    The use of compiler generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.

        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerable interface, with its Current property for fetching the current value and the MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerator-returning method using the yield keyword causes the compiler to produce the implementation of an iterator. Take the following piece of code.

        IEnumerable<int> GetItems()
        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread. This latter value is used for optimising the creation of iterator instances.

        [CompilerGenerated]
        private sealed class <GetItems>d__0 : IEnumerable<int>, IEnumerable, IEnumerator<int>, IEnumerator, IDisposable
        {
            // Fields
            private int <>1__state;
            private int <>2__current;
            public Program <>4__this;
            private int <>l__initialThreadId;

    The body gets converted into the code to construct and initialize this new class.

        private IEnumerable<int> GetItems()
        {
            <GetItems>d__0 d__ = new <GetItems>d__0(-2);
            d__.<>4__this = this;
            return d__;
        }

    When the class is constructed, we set the state, which was passed through as -2, and the current thread.

        public <GetItems>d__0(int <>1__state)
        {
            this.<>1__state = <>1__state;
            this.<>l__initialThreadId = Thread.CurrentThread.ManagedThreadId;
        }

    The state needs to be set to 0 to represent a valid enumerator, and this is done in the GetEnumerator method, which optimises for the usual case where the returned enumerator is only used once.

        IEnumerator<int> IEnumerable<int>.GetEnumerator()
        {
            if ((Thread.CurrentThread.ManagedThreadId == this.<>l__initialThreadId)
                  && (this.<>1__state == -2))
            {
                this.<>1__state = 0;
                return this;
            }

    The state machine itself is implemented inside the MoveNext method.

        private bool MoveNext()
        {
            switch (this.<>1__state)
            {
                case 0:
                    this.<>1__state = -1;
                    this.<>2__current = 1;
                    this.<>1__state = 1;
                    return true;
                case 1:
                    this.<>1__state = -1;
                    this.<>2__current = 2;
                    this.<>1__state = 2;
                    return true;
                case 2:
                    this.<>1__state = -1;
                    this.<>2__current = 3;
                    this.<>1__state = 3;
                    return true;
                case 3:
                    this.<>1__state = -1;
                    break;
            }
            return false;
        }

    At each stage, the current value of the state is used to determine how far we got, and then we generate the next value, which we return after recording the next state. Finally, we return false from MoveNext to signify the end of the sequence.

    Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext, and so they need to be transformed into fields in much the same way that we did in the case of the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value.

    Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous versions, where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount of things we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that, Reflector can carry on optimising using its usual optimisations.

    The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler generated fields. In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to be dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.

  • Oracle B2B - Synchronous Request Reply

    - by cdwright
    Introduction So first off, let me say I didn't create this demo (although I did modify it some). I got it from a member of the B2B development technical staff. Since it came with only a simple readme file, I thought I would take some time and write a more detailed explanation about how it works. Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request reply over http using the b2b/syncreceiver servlet. I’m attaching the demo to this blog which includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test xml file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool (I'm using Firefox Poster here). You can download the zip file containing the demo here. The demo works by sending the sample xml request file (req.xml) to http://<b2bhost>:8001/b2b/syncreceiver using the SOAP test tool.  The syncreceiver servlet keeps the socket connection open between itself and the test tool so that it can synchronously send the reply message back. When B2B receives the inbound request message, it is passed to the SOA composite through the default B2B Fabric binding. A simple reply is created in BPEL and returned to B2B which then sends the message back to the test tool using that same socket connection. I’ll show you the B2B configuration first, then we’ll look at the soa composite. Configuring B2B No additional configuration necessary in order to use the syncreceiver servlet. It is already running when you start SOA. After importing the GC_SyncReqRep.zip repository file into B2B, you’ll have the typical GlobalChips host trading partner and the Acme remote trading partner. Document Management The repository contains two very simple custom XML document definitions called Orders and OrdersResponse. In order to determine the trading partner agreement needed to process the inbound Orders document, you need to know two things about it; what is it and where it came from. So let’s look at how B2B identifies the appropriate document definition for the message. The XSD’s for these two document definitions themselves are not particularly interesting. Whenever you're dealing with custom XML documents, B2B identifies the appropriate document definition for each XML message using an XPath Identification Expression. The expression is entered for each of these document definitions under the document administration tab in the B2B console. The full XPATH expression for the Orders document is  //*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text(). You can see this path in the XSD diagram below and how it uniquely identifies this message. The OrdersReponse document is identified in the same way. The XPath expression for it is //*[local-name()='Response']/*[local-name()='Status']/text(). You can see how it’s path differs uniquely identifying the reply from the request. Trading Partner Profile The trading partner profiles are very simple too. For GlobalChips, a generic identifier is being used to identify the sender of the response document using the host trading partner name. For Acme, a generic identifier is also being used to identify the sender of the inbound request using the remote trading partner name. The document types are added for the remote trading partner as usual. So the remote trading partner Acme is the sender of the Orders document, and it is the receiver of the OrdersResponse document. 
Trading Partner Agreement

The agreements are equally simple. There is no validation, and translation is not an option for a custom XML document type. For the InboundAgreement (request), the document definition is set to OrdersDef. In the Agreement Parameters section the generic identifiers have been added for the host and remote trading partners. That's all that is needed for the inbound transaction. For the OutboundAgreement (response), the document definition is set to OrdersResponseDef and the generic identifiers for the two trading partners are added. The remote trading partner's dummy delivery channel is also added to the agreement.

SOA Composite

Import the SOA composite archive into JDeveloper as an EJB JAR file. Open the composite and you should have a project that looks like this. In the composite, open the b2bInboundSyncSvc exposed service and advance through the setup wizard. Select your Application Server Connection and advance to the Operations window. Notice here that the B2B binding is set to Receive; it is not set for Synchronous Request Reply. Continue advancing through the wizard as you normally would and select Finish at the end.

Now open BPELProcess1 in the composite. The BPEL process is set up as a synchronous request reply, as you can see below. The while loop is there just to give the process something to do. The actual reply message is prepared in the assignResponseValues assignment, followed by an Invoke of the B2B binding. Open the replyResponse Invoke and go to the Properties tab. You'll see that the fromTradingPartnerId, toTradingPartner, documentTypeName, and documentProtocolRevision properties have been set.

Testing the Configuration

To test the configuration, I used Firefox Poster. Enter the URL for the b2b/syncreceiver servlet and browse for the req.xml file that contains the test request message. In the Headers tab, add the property 'from' and give it the value 'Acme'. This is how B2B will know where the message is coming from; it will use that information, along with the document type name, to find the right trading partner agreement. Now post the message. You should get back a response with a status of '200 OK'. That's all there is to it.
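If you would rather test from the command line than from a browser plugin, the same request can be posted with curl. This is only a sketch: the hostname is a placeholder, and the Content-Type header is my assumption rather than something the demo prescribes.

curl -X POST \
     -H "from: Acme" \
     -H "Content-Type: application/xml" \
     --data-binary @req.xml \
     http://b2bhost:8001/b2b/syncreceiver

The from header plays the same role as the header property set in Poster, and because syncreceiver holds the connection open, curl simply prints the synchronous reply to stdout when B2B returns it.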

    Read the article

  • WPF animation: binding to the "To" attribute of storyboard animation.

    - by bozalina
Hi, I'm trying to create a button that behaves similarly to the "slide" button on the iPhone. I have an animation that adjusts the position and width of the button, but I want these values to be based on the text used in the control. Currently, they're hardcoded. Here's my working XAML, so far:

<CheckBox x:Class="Smt.Controls.SlideCheckBox"
          xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
          xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
          xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
          xmlns:local="clr-namespace:Smt.Controls"
          xmlns:System.Windows="clr-namespace:System.Windows;assembly=PresentationCore"
          Name="SliderCheckBox"
          mc:Ignorable="d">
  <CheckBox.Resources>
    <System.Windows:Duration x:Key="AnimationTime">0:0:0.2</System.Windows:Duration>
    <Storyboard x:Key="OnChecking">
      <DoubleAnimation Storyboard.TargetName="CheckButton"
                       Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(TranslateTransform.X)"
                       Duration="{StaticResource AnimationTime}"
                       To="40" />
      <DoubleAnimation Storyboard.TargetName="CheckButton"
                       Storyboard.TargetProperty="(Button.Width)"
                       Duration="{StaticResource AnimationTime}"
                       To="41" />
    </Storyboard>
    <Storyboard x:Key="OnUnchecking">
      <DoubleAnimation Storyboard.TargetName="CheckButton"
                       Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(TranslateTransform.X)"
                       Duration="{StaticResource AnimationTime}"
                       To="0" />
      <DoubleAnimation Storyboard.TargetName="CheckButton"
                       Storyboard.TargetProperty="(Button.Width)"
                       Duration="{StaticResource AnimationTime}"
                       To="40" />
    </Storyboard>
    <Style x:Key="SlideCheckBoxStyle" TargetType="{x:Type local:SlideCheckBox}">
      <Setter Property="Template">
        <Setter.Value>
          <ControlTemplate TargetType="{x:Type local:SlideCheckBox}">
            <Canvas>
              <ContentPresenter SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}"
                                Content="{TemplateBinding Content}"
                                ContentTemplate="{TemplateBinding ContentTemplate}"
                                RecognizesAccessKey="True"
                                VerticalAlignment="Center"
                                HorizontalAlignment="Center" />
              <Canvas>
                <!-- Background -->
                <Rectangle Width="{Binding ElementName=ButtonText, Path=ActualWidth}"
                           Height="{Binding ElementName=ButtonText, Path=ActualHeight}"
                           Fill="LightBlue" />
              </Canvas>
              <Canvas>
                <!-- Button -->
                <Button Width="{Binding ElementName=CheckedText, Path=ActualWidth}"
                        Height="{Binding ElementName=ButtonText, Path=ActualHeight}"
                        Name="CheckButton"
                        Command="{x:Static local:SlideCheckBox.SlideCheckBoxClicked}">
                  <Button.RenderTransform>
                    <TransformGroup>
                      <TranslateTransform />
                    </TransformGroup>
                  </Button.RenderTransform>
                </Button>
              </Canvas>
              <Canvas>
                <!-- Text -->
                <StackPanel Name="ButtonText" Orientation="Horizontal" IsHitTestVisible="False">
                  <Grid Name="CheckedText">
                    <Label Margin="7 0"
                           Content="{Binding RelativeSource={RelativeSource AncestorType={x:Type local:SlideCheckBox}}, Path=CheckedText}" />
                  </Grid>
                  <Grid Name="UncheckedText" HorizontalAlignment="Right">
                    <Label Margin="7 0"
                           Content="{Binding RelativeSource={RelativeSource AncestorType={x:Type local:SlideCheckBox}}, Path=UncheckedText}" />
                  </Grid>
                </StackPanel>
              </Canvas>
            </Canvas>
            <ControlTemplate.Triggers>
              <Trigger Property="IsChecked" Value="True">
                <Trigger.EnterActions>
                  <BeginStoryboard Storyboard="{StaticResource OnChecking}" />
                </Trigger.EnterActions>
                <Trigger.ExitActions>
                  <BeginStoryboard Storyboard="{StaticResource OnUnchecking}" />
                </Trigger.ExitActions>
              </Trigger>
            </ControlTemplate.Triggers>
          </ControlTemplate>
        </Setter.Value>
      </Setter>
    </Style>
  </CheckBox.Resources>
  <CheckBox.CommandBindings>
    <CommandBinding Command="{x:Static local:SlideCheckBox.SlideCheckBoxClicked}"
                    Executed="OnSlideCheckBoxClicked" />
  </CheckBox.CommandBindings>
</CheckBox>

And the code behind:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

namespace Smt.Controls
{
    public partial class SlideCheckBox : CheckBox
    {
        public SlideCheckBox()
        {
            InitializeComponent();
            Loaded += OnLoaded;
        }

        public static readonly DependencyProperty CheckedTextProperty =
            DependencyProperty.Register("CheckedText", typeof(string), typeof(SlideCheckBox),
                                        new PropertyMetadata("Checked Text"));

        public string CheckedText
        {
            get { return (string)GetValue(CheckedTextProperty); }
            set { SetValue(CheckedTextProperty, value); }
        }

        public static readonly DependencyProperty UncheckedTextProperty =
            DependencyProperty.Register("UncheckedText", typeof(string), typeof(SlideCheckBox),
                                        new PropertyMetadata("Unchecked Text"));

        public string UncheckedText
        {
            get { return (string)GetValue(UncheckedTextProperty); }
            set { SetValue(UncheckedTextProperty, value); }
        }

        public static readonly RoutedCommand SlideCheckBoxClicked = new RoutedCommand();

        void OnLoaded(object sender, RoutedEventArgs e)
        {
            Style style = TryFindResource("SlideCheckBoxStyle") as Style;
            if (!ReferenceEquals(style, null))
            {
                Style = style;
            }
        }

        void OnSlideCheckBoxClicked(object sender, ExecutedRoutedEventArgs e)
        {
            IsChecked = !IsChecked;
        }
    }
}

The problem comes when I try to bind the "To" attribute in the DoubleAnimations to the actual width of the text, the same as I'm doing in the ControlTemplate. If I bind the values to an ActualWidth of an element in the ControlTemplate, the control comes up as a blank checkbox (my base class). However, I'm binding to the same ActualWidths in the ControlTemplate itself without any problems. Just seems to be the CheckBox.Resources that have a problem with it. For instance, the following will break it:

<DoubleAnimation Storyboard.TargetName="CheckButton"
                 Storyboard.TargetProperty="(Button.Width)"
                 Duration="{StaticResource AnimationTime}"
                 To="{Binding ElementName=CheckedText, Path=ActualWidth}" />

I don't know whether this is because it's trying to bind to a value that doesn't exist until a render pass is done, or if it's something else. Anyone have any experience with this sort of animation binding?
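In case it's relevant, this is roughly how the control gets consumed from a parent view. The window name, assembly name, and text values here are placeholders, not my real project:

<Window x:Class="Demo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:controls="clr-namespace:Smt.Controls;assembly=Smt.Controls">
  <!-- CheckedText and UncheckedText are the dependency properties declared in the code behind above -->
  <controls:SlideCheckBox CheckedText="Unlock" UncheckedText="Slide to unlock" />
</Window>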

    Read the article
