Search Results

Search found 19746 results on 790 pages for 'local dependency'.

  • How do I enforce the order of qmake library dependencies?

    - by James Oltmans
    I'm getting a lot of errors because qmake is improperly ordering the Boost libraries I'm using. Here's what the .pro file looks like:

        QT += core gui
        TARGET = MyTarget
        TEMPLATE = app
        CONFIG += no_keywords \
            link_pkgconfig
        SOURCES += file1.cpp \
            file2.cpp \
            file3.cpp
        PKGCONFIG += my_package \
            sqlite3
        LIBS += -lsqlite3 \
            -lboost_signals \
            -lboost_date_time
        HEADERS += file1.h \
            file2.h \
            file3.h
        FORMS += mainwindow.ui
        RESOURCES += Resources/resources.qrc

    This produces the following command:

        g++ -Wl,-O1 -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/lib/x86_64-linux-gnu -lboost_signals -lboost_date_time -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lQtGui -lQtCore

    Note: mylib1 and mylib2 are statically compiled by another project and placed in /usr/local/lib, with an appropriate pkg-config .pc file pointing there. The .pro file references them via my_package in PKGCONFIG. The problem is not with pkg-config's output but with qmake's ordering. Here's the .pc file:

        prefix=/usr/local
        exec_prefix=${prefix}
        libdir=${exec_prefix}/lib
        includedir=${prefix}/include

        Name: my_package
        Description: My component package
        Version: 0.1
        URL: http://example.com
        Libs: -L${libdir} -lmylib1 -lmylib2
        Cflags: -I${includedir}/my_package/

    The linking stage fails spectacularly, as mylib1 and mylib2 come up with a lot of undefined references to the Boost libraries that both the app and the two static libraries use. We have another build method using SCons, and it orders things properly for the linker. Its build command is below:

        g++ -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lboost_signals -lboost_date_time -lQtGui -lQtCore

    Note that the principal difference is the order of the Boost libs: SCons puts them at the end, just before QtGui and QtCore, while qmake puts them first. The other differences in the compile commands are unimportant; I have hand-modified the qmake-produced makefile, and the simple reordering fixed the problem. So my question is: how do I enforce the right order in my .pro file, despite what qmake thinks it should be?
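
    A commonly suggested workaround (a sketch, untested here; my_package is the package name from the question, and this relies on qmake preserving the order of plain LIBS entries) is to bypass link_pkgconfig for the static package and splice pkg-config's output into LIBS by hand:

        # Hedged sketch of a .pro workaround: drop PKGCONFIG for my_package so
        # the static libraries are listed before the Boost libraries they need.
        QMAKE_CXXFLAGS += $$system(pkg-config --cflags my_package)
        LIBS += $$system(pkg-config --libs my_package)
        LIBS += -lsqlite3 \
            -lboost_signals \
            -lboost_date_time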

  • Problem with derived ControlTemplates in WPF

    - by Frank Fella
    The following XAML code works:

        <Window x:Class="DerivedTemplateBug.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:local="clr-namespace:DerivedTemplateBug"
                Title="Window1" Height="300" Width="300">
            <Button>
                <Button.Template>
                    <ControlTemplate>
                        <Border BorderBrush="Black" BorderThickness="2">
                            <TextBlock>Testing!</TextBlock>
                        </Border>
                    </ControlTemplate>
                </Button.Template>
            </Button>
        </Window>

    Now, if you define the following derived template class:

        using System.Windows.Controls;

        namespace DerivedTemplateBug
        {
            public class DerivedTemplate : ControlTemplate
            {
            }
        }

    and then swap the ControlTemplate for the derived class:

        <Window x:Class="DerivedTemplateBug.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:local="clr-namespace:DerivedTemplateBug"
                Title="Window1" Height="300" Width="300">
            <Button>
                <Button.Template>
                    <local:DerivedTemplate>
                        <Border BorderBrush="Black" BorderThickness="2">
                            <TextBlock>Testing!</TextBlock>
                        </Border>
                    </local:DerivedTemplate>
                </Button.Template>
            </Button>
        </Window>

    you get the following error:

        Invalid ContentPropertyAttribute on type 'DerivedTemplateBug.DerivedTemplate', property 'Content' not found.

    Can anyone tell me why this is the case?
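
    For what it's worth, one hedged guess (not verified against this exact scenario): the XAML compiler resolves the content property from attributes on the type, and a derived template may need the attribute restated. ContentProperty and the "VisualTree" property name below come from FrameworkTemplate's public surface, not from the original question:

        using System.Windows.Controls;
        using System.Windows.Markup;

        namespace DerivedTemplateBug
        {
            // Restating the content property on the derived class is one
            // thing worth trying; remove it again if it makes no difference.
            [ContentProperty("VisualTree")]
            public class DerivedTemplate : ControlTemplate
            {
            }
        }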

  • symfony + doctrine + inheritance, how to make them work?

    - by imac
    I am beginning to work with Symfony and have found some documentation about inheritance, but I also found this discouraging article, which makes me doubt whether Doctrine handles inheritance well at all... Has anyone found a smart solution for inheritance in Symfony + Doctrine? As an example, I have already structured the database something like this:

        CREATE TABLE `poster` (
          `poster_id` int(11) NOT NULL AUTO_INCREMENT,
          `user_name` varchar(50) NOT NULL,
          PRIMARY KEY (`poster_id`),
          UNIQUE KEY `id` (`poster_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;

        CREATE TABLE `user` (
          `user_id` int(11) NOT NULL,
          `real_name` varchar(50) DEFAULT NULL,
          PRIMARY KEY (`user_id`),
          UNIQUE KEY `user_id` (`user_id`),
          CONSTRAINT `user_fk` FOREIGN KEY (`user_id`) REFERENCES `poster` (`poster_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    From that, Doctrine generated this schema.yml:

        Poster:
          connection: doctrine
          tableName: poster
          columns:
            poster_id:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: true
            user_name:
              type: string(50)
              fixed: false
              unsigned: false
              primary: false
              notnull: true
              autoincrement: false
          relations:
            Post:
              local: poster_id
              foreign: poster_id
              type: many
            User:
              local: poster_id
              foreign: user_id
              type: many
            Version:
              local: poster_id
              foreign: poster_id
              type: many

        User:
          connection: doctrine
          tableName: user
          columns:
            user_id:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: false
            real_name:
              type: string(50)
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
          relations:
            Poster:
              local: user_id
              foreign: poster_id
              type: one

    User creation for this structure with Doctrine's auto-generated forms does not work. Any clue will be appreciated.
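
    For reference, Doctrine 1.x does support declaring inheritance directly in schema.yml instead of modelling it with a foreign key; a hedged sketch (the table and field names reuse the question's, but the discriminator column named type is an assumption, not part of the questioner's schema):

        # Sketch of column_aggregation inheritance: User rows live in the
        # poster table and are distinguished by the value of the type column.
        Poster:
          tableName: poster
          columns:
            user_name:
              type: string(50)
              notnull: true
            type: string(10)
        User:
          inheritance:
            extends: Poster
            type: column_aggregation
            keyField: type
            keyValue: user
          columns:
            real_name: string(50)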

  • maven: multi module HAR dependencies

    - by QuanNH
    I have a multi-module project. The first module contains my Hibernate XML files and data Java beans, and is packaged as a har. The second module defines my DAO classes. This module has a dependency on the data Java beans inside the har, which is declared in its pom.xml:

        <dependency>
            <groupId>myproject</groupId>
            <artifactId>myhar</artifactId>
            <version>1.0</version>
            <type>har</type>
        </dependency>

    The first module compiles, packages and installs fine into my local repository. The problem is that when the second module compiles, it cannot find the packages and classes defined in my har module. Running mvn install, I get the following output and the build fails:

        package myproject.myhar does not exist
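
    If Maven is not putting the har artifact on the compile classpath, one commonly suggested arrangement (hedged; the 'classes' classifier is an assumption, and it requires the first module to also attach its classes as a plain jar, e.g. via the maven-jar-plugin's jar goal) is to compile against a jar of the same classes:

        <!-- Sketch: depend on a plain jar of the beans for compilation,
             alongside the har used for packaging. -->
        <dependency>
            <groupId>myproject</groupId>
            <artifactId>myhar</artifactId>
            <version>1.0</version>
            <type>jar</type>
            <classifier>classes</classifier>
        </dependency>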

  • Always require a plugin to be loaded

    - by axon
    I am writing an application with RequireJS and Knockout JS. The application includes an extension to Knockout that adds ko.protectedObservable to the main knockout object. I would like this to always be available on the require'd knockout object, and not only when I specify it in the dependencies. I could concatenate the files together, but it seems that should be unnecessary. Alternatively, I can put knockout-protectedObservable as a dependency for knockout in the RequireJS shim configuration, but that just leads to a circular dependency and everything fails to load.

    Edit: I've solved my issue as shown below, but it seems hacky. Is there a better way?

        -- Main.html
        <script type="text/javascript" src="require.js"></script>
        <script type="text/javascript">
            require(['knockout'], function(ko) {
                require(['knockout-protectedObservable']);
            });
        </script>

        -- knockout-protectedObservable.js
        define(['knockout'], function(ko) {
            ko.protectedObservable = { ... };
        });
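
    A sketch of the RequireJS map-config pattern that is usually suggested for this (module names follow the question; the wrapper returns the extended object, so no shim cycle is involved):

        // Sketch: remap 'knockout' everywhere to a wrapper module, while the
        // wrapper itself still sees the real library.
        require.config({
            map: {
                '*': { 'knockout': 'knockout-extended' },
                'knockout-extended': { 'knockout': 'knockout' }
            }
        });

        // knockout-extended.js
        define(['knockout'], function (ko) {
            ko.protectedObservable = function (initialValue) { /* ... */ };
            return ko;   // every require of 'knockout' now gets this object
        });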

  • Disabling Remote-Debugging Connection on Windows 2000

    - by Kim Leeper
    I have two machines, one running Windows 2000 and one running Windows XP, both with Visual C++ 6. I created an application on the Win XP machine (local) and successfully used the Win 2000 machine (remote) as the target for debugging. The code was on a shared drive on the Win 2000 machine. This setup worked well, just like in the movies! However, I now wish to use my Win 2000 machine as a development machine again, and I find I cannot. When attempting to execute a natively compiled application on this machine, I get a dialog with the title "Remote Execution Path And Filename" asking me for same. When the dialog is cancelled, since the program attempting to execute is not remote, the program terminates without error. Extra info: on the Win XP machine, in VC++ under Build menu > Debugger Remote Connection > Remote Connection Dialog, the "Connection:" list box has two entries, 'Local' and 'Network'; on the Win 2000 machine the 'Local' entry is not present, only the entry for 'Network'. How do I get back my 'Local' entry on what used to be my target machine?

  • Cache problem running two consecutive HTTP GET requests from APP1 to APP2

    - by user502052
    I use Ruby on Rails 3 and I have two applications (APP1 and APP2) working on two subdomains:

        app1.domain.local
        app2.domain.local

    I am trying to run two consecutive HTTP GET requests from APP1 to APP2, like this.

    Code in APP1 (request):

        response1 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=first&id=1") )
        response2 = Net::HTTP.get( URI.parse("http://app2.domain.local/test=second&id=1") )

    Code in APP2 (response):

        respond_to do |format|
          if <model_name>.find(params[:id]).<field_name> == "first"
            <model_name>.find(params[:id]).update_attribute( <field_name>, <field_value> )
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          elsif <model_name>.find(params[:id]).<field_name> == "second"
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          end
        end

    After the first request I get the correct XML (response1 is what I expect), but on the second request I don't (response2 isn't what I expect). Doing some tests, I found that the second time <model_name>.find(params[:id]).<field_name> runs (for the elsif statement) it always returns a blank value, so the code in the elsif branch is never run. Is it possible that the problem is related to caching of <model_name>.find(params[:id]).<field_name>?

    P.S.: I read about ETags and conditional GET, but I am not sure that I must use that approach. I would like to keep it all simple.
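
    If ActiveRecord's per-request query cache is the culprit, a hedged sketch of two ways around it (Item and status are stand-ins for the question's placeholders):

        # Sketch: reuse one record object instead of repeating the finder,
        # and bypass the query cache when re-reading, since identical finds
        # inside one action can be served from ActiveRecord's request cache.
        item = Item.find(params[:id])
        if item.status == "first"
          item.update_attribute(:status, "second")
        end

        Item.uncached do
          fresh = Item.find(params[:id])   # forced to hit the database
        end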

  • Speed/expense of SQLite query vs. List.contains() for "in-set" icon on list rows

    - by kpdvx
    An application I'm developing requires that the app maintain a local list of things, let's say books, in a local "library." Users can access their local library of books and search for books using a remote web service. The app is aware of other users of the app through this web service, and users can browse other users' lists of books in their libraries. Each book is identified by a unique bookId (represented as an int). When viewing books returned through a search result, or when viewing another user's book library, the individual list row cells need to visually represent whether the book is in the user's local library or not. A user can have at most 5,000 books in the library, stored in SQLite on the device (and synchronized with the remote web service).

    My question is: to determine whether the book shown in the list row is in the user's library, would it be better to ask SQLite directly (via SELECT COUNT(*)...) or to maintain, in memory, a List or int[] array containing the unique bookIds? So, on each row display, do I query SQLite or check whether the List or int[] array contains the unique bookId? Because the user can have at most 5,000 books, and each bookId occupies 4 bytes, at most this would use about 20 kB.

    In thinking about this, and in typing this out, it seems obvious to me that it would be far better for performance to maintain a list or int[] array of in-library bookIds rather than query SQLite. The only caveat to maintaining an int[] array is that if books are added or removed I'll need to grow or shrink the array by hand, so with this option I'll most likely use an ArrayList or Vector, though I'm not sure of the additional memory overhead of using Integer objects as opposed to primitives. Opinions, thoughts, suggestions?
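
    Since membership tests are the hot path, a hash-based set gives O(1) lookups without the by-hand resizing of an int[]; a minimal sketch in Java (class and method names invented for illustration):

        import java.util.HashSet;
        import java.util.Set;

        // Keep the library's bookIds in memory for O(1) row-binding checks.
        public class LibraryIndex {
            private final Set<Integer> bookIds = new HashSet<>();

            public void load(Iterable<Integer> idsFromSqlite) {
                for (int id : idsFromSqlite) bookIds.add(id);   // warm once at startup
            }
            public void add(int bookId)    { bookIds.add(bookId); }    // on insert
            public void remove(int bookId) { bookIds.remove(bookId); } // on delete
            public boolean inLibrary(int bookId) { return bookIds.contains(bookId); }
        }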

  • Eclipse content assist with maven project not picking up all classes

    - by thickosticko
    I'm using Eclipse 3.5.0 and have imported a Maven project using the "import Maven project from SCM" wizard (m2 plugin). I have a jar file as a dependency in my pom, and inside the jar is a complicated schema with quite a number of XSDs. But content assist doesn't seem to pick up the schema at all, nor a number of other classes in the dependency list, yet for another module in the same project it seems to work fine. Does anyone know why this is? It's driving me mad!

  • Sorting in Lua, counting number of items

    - by Josh
    Two quick questions (I hope...) with the following code. The script below checks whether a number is prime and, if not, returns all the factors for that number; otherwise it just reports that the number is prime. Pay no attention to the zs. calls in the script, as they are client-specific and have no bearing on script functionality. The script itself works almost wonderfully, except for two minor details. The first is that the factor list doesn't come back sorted; that is, for 24 it returns 1, 2, 12, 3, 8, 4, 6, and 24 instead of 1, 2, 3, 4, 6, 8, 12, and 24. I can't print it as a table, so it does need to be returned as a list. If it has to be sorted as a table first and then turned into a list, I can deal with that; all that matters is the end result being the list. The other detail is that I need to check whether there are only two numbers in the list or more. If there are only two numbers, it's a prime (1 and the number). The current way I have it does not work. Is there a way to accomplish this? I appreciate all the help!

        function get_all_factors(number)
            local factors = 1
            for possible_factor = 2, math.sqrt(number), 1 do
                local remainder = number % possible_factor
                if remainder == 0 then
                    local factor, factor_pair = possible_factor, number / possible_factor
                    factors = factors .. ", " .. factor
                    if factor ~= factor_pair then
                        factors = factors .. ", " .. factor_pair
                    end
                end
            end
            factors = factors .. ", and " .. number
            return factors
        end

        local allfactors = get_all_factors(zs.param(1))
        if zs.func.numitems(allfactors) == 2 then
            return zs.param(1) .. " is prime."
        else
            return zs.param(1) .. " is not prime, and its factors are: " .. allfactors
        end
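
    Both problems go away if the factors are collected in a table rather than a string, since a table can be sorted and counted before being joined; a sketch in stock Lua 5.x (the zs.* calls are left out, so the driver below uses a literal argument):

        -- Collect factors in a table, sort numerically, then build the list
        -- string; the prime check is then just #factors == 2.
        local function get_all_factors(number)
            local factors = {1}
            for candidate = 2, math.sqrt(number) do
                if number % candidate == 0 then
                    factors[#factors + 1] = candidate
                    local pair = number / candidate
                    if pair ~= candidate then factors[#factors + 1] = pair end
                end
            end
            factors[#factors + 1] = number
            table.sort(factors)
            return factors
        end

        local factors = get_all_factors(24)
        if #factors == 2 then
            print("prime")
        else
            print("factors: " .. table.concat(factors, ", "))  -- 1, 2, 3, 4, 6, 8, 12, 24
        end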

  • C# SQL Data Adapter Fill on existing typed Dataset

    - by René
    I have an option to choose between local data storage (an XML file) and SQL Server. A long time ago I created a typed dataset for my application to save data locally in the XML file. Now I have a bool that switches between the server-based version and the local version; if true, my application gets the data from SQL Server. I'm not sure, but it seems that SqlDataAdapter's Fill method can't fill the data into my existing schema:

        SqlCommand cmd = new SqlCommand("SELECT * FROM dbo.Categories WHERE CatUserId = 1", _connection);
        cmd.CommandType = CommandType.Text;
        _sqlAdapter = new SqlDataAdapter(cmd);
        _sqlAdapter.TableMappings.Add("Categories", "dbo.Categories");
        _sqlAdapter.Fill(Program.Dataset);

    This should fill the data from dbo.Categories into Categories (in my local typed dataset), but it doesn't. It creates a new table with the name "Table". It looks like it can't handle the existing schema, and I can't figure it out. Where is the problem? By the way, the database request I do isn't very useful that way; it's just a simplified version for testing...
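
    For context: when Fill is given a DataSet, the adapter names the incoming result set "Table" by default, and a table mapping goes from that source name to the DataSet table name, so the mapping above points the wrong way. A sketch of the corrected call (names taken from the question):

        // Map the adapter's default source table name ("Table") to the
        // typed dataset's table name ("Categories").
        var cmd = new SqlCommand(
            "SELECT * FROM dbo.Categories WHERE CatUserId = 1", _connection);
        var adapter = new SqlDataAdapter(cmd);
        adapter.TableMappings.Add("Table", "Categories");
        adapter.Fill(Program.Dataset);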

  • Git Svn dcommit error - restart the commit

    - by Rob Wilkerson
    Last week, I made a number of changes to my local branch before leaving town for the weekend. This morning I wanted to dcommit all of those changes to the company's Svn repository, but I get a merge conflict in one file:

        Merge conflict during commit: Your file or directory 'build.properties.sample' is probably
        out-of-date: The version resource does not correspond to the resource within the transaction.
        Either the requested version resource is out of date (needs to be updated), or the requested
        version resource is newer than the transaction root (restart the commit).

    I'm not sure exactly why I'm getting this, but before attempting to dcommit I did a git svn rebase. That "overwrote" my commits. To recover from that, I did a git reset --hard HEAD@{1}. Now my working copy seems to be where I expect it to be, but I have no idea how to get past the merge conflict; there's no actual conflict to resolve that I can find. Any thoughts would be appreciated.

    EDIT: Just wanted to specify that I am working locally. I have a local branch for the trunk that references svn/trunk (the remote branch). All of my work was done on the local trunk:

        $ git branch
          maint-1.0.x
          master
        * trunk
        $ git branch -r
          svn/maintenance/my-project-1.0.0
          svn/trunk

    Similarly, git log currently shows 10 commits on my local trunk since the last commit with a Svn ID. Hopefully that answers a few questions. Thanks again.
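
    For what it's worth, a hedged sequence that is often suggested in this situation (inspect the state first, since the reset may have left the branch out of step with the last fetched SVN revision):

        # see which local commits have not reached svn/trunk yet
        git log --oneline svn/trunk..trunk
        # bring in the latest SVN revisions and replay local commits on top
        git svn fetch
        git svn rebase
        # then retry the push
        git svn dcommit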

  • Third Party Libraries and Technologies every Java Programmer must be aware of?

    - by kunjaan
    I agree that this is a very subjective question, but as a student of Java I get pointed to good libraries and technologies for Java by my mentors at work. For example, I was not aware of Google Guice for dependency injection, the awesomeness of the Java reflection APIs, ORMs like Hibernate, or the things you can do with libraries like Hadoop. I want to collect and share some of the libraries that exemplify good Java programming (so that beginners like me can walk the code and emulate the coding practices), teach concepts unique to Java (for example dependency injection or ORM), and/or are really interesting libraries that a student like me could do interesting projects with (e.g. Hadoop). I have re-edited this question three times to make it more specific :). I am sorry if I am really not clear in my intentions, but some kind of list of good concepts and third-party libraries for Java could really help some of my intern friends here at work. Thank you.

  • Possible to implement an IsViewPortVisible dependency property for an item in an ItemsControl?

    - by Matt H.
    I need to enable/disable spell checking in a RichTextBox in an ItemsControl, based on whether the RichTextBox is visible in the ItemsControl's ScrollViewer. I think the route is to implement an IsViewPortVisible dependency property and wire an event handler for a changed event... I found this article that describes the lengthy process for determining whether an item is in the viewport: http://social.msdn.microsoft.com/Forums/en/wpf/thread/e6ccfec3-3dc0-4702-9d0d-1cfa55ecfc90

    Any ideas on where to start? I'm familiar with implementing my own dependency property for the sake of simple bindings (integers, strings, etc.), but I have no idea how to undertake something like this. This is the end result I'm hoping for:

        <DataTemplate>
            <Grid>
                <!-- ...stuff in the Grid... -->
                <local:CustomRichTextBox
                    SpellCheck.IsEnabled="{Binding RelativeSource={RelativeSource Self}, Path=IsViewPortVisible}" />
            </Grid>
        </DataTemplate>

    Help will be EXTREMELY appreciated... you'll be saving me about 500MB in memory consumption while the program is running!!!! :)
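
    As a starting point, a minimal sketch of the attached-property half of this (the ViewportTracker and IsInViewport names are invented; the viewport hit-testing itself follows the linked thread and is left as a comment):

        using System.Windows;

        // A read-only attached property that scroll-tracking code can set
        // per item container; bindings then read it like any other property.
        public static class ViewportTracker
        {
            private static readonly DependencyPropertyKey IsInViewportKey =
                DependencyProperty.RegisterAttachedReadOnly(
                    "IsInViewport", typeof(bool), typeof(ViewportTracker),
                    new PropertyMetadata(false));

            public static readonly DependencyProperty IsInViewportProperty =
                IsInViewportKey.DependencyProperty;

            public static bool GetIsInViewport(UIElement element)
            {
                return (bool)element.GetValue(IsInViewportProperty);
            }

            internal static void SetIsInViewport(UIElement element, bool value)
            {
                element.SetValue(IsInViewportKey, value);
            }

            // Handle ScrollViewer.ScrollChanged on the ItemsControl, hit-test
            // each realized container against the viewport bounds (as in the
            // linked MSDN thread), and call SetIsInViewport accordingly.
        }

    The binding in the DataTemplate would then read the attached property, e.g. Path=(local:ViewportTracker.IsInViewport).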

  • Convert IIS / Tomcat Web Application to a multi-server environment.

    - by bill_the_loser
    I have an existing web application built in .NET, running on IIS, that leverages a Java servlet we have running on Tomcat 5.5. We need to scale the application, and I'm confused about what relates to our situation and what we need to do to get the servlet running on multiple servers. Right now I have four servers that can each individually process results. It almost seems like all I should have to do is add the ajp13 worker processes from the three additional machines to the machine hosting the load-balancer worker, but I can't imagine it should be that easy. What do I need to do to distribute the Tomcat load to the extra three machines? Thanks.

    Update: The current configuration uses a workers2.properties configuration file. From all of the documentation online, I have not been able to determine the distinction between workers.properties and workers2.properties. Most of the examples I have found configure workers.properties and revolve around adding workers and registering them in the worker.list element. workers2.properties does not appear to have a worker.list element, and the syntax is different enough between the two files that I doubt I can add that element. If I just add my multiple AJP workers to the workers2.properties file as below, do I need to worry about the apparent lack of a worker.list element?

        [ajp13:localhost:8009]
        channel=channel.socket:localhost:8009
        group=lb

        [ajp13:host2.mydomain.local:8009]
        channel=channel.socket:host2.mydomain.local:8009
        group=lb

        [ajp13:host3.mydomain.local:8009]
        channel=channel.socket:host3.mydomain.local:8009
        group=lb

    A couple of side notes: first, I've noticed that sometimes Tomcat doesn't seem to reload my changes, and I don't know why. Also, I have no idea why this configuration has a workers2.properties and not a workers.properties. I've been assuming it's version-based, but I haven't seen anything to back up that assumption.
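
    For comparison, the equivalent setup in the better-documented workers.properties format (mod_jk 1.2.x) looks roughly like the sketch below; it does not settle whether workers2.properties needs a worker.list, but it shows the shape most examples assume. Host names are taken from the question; worker names are invented:

        # A load-balancer worker in worker.list fronting three ajp13 workers.
        worker.list=lb

        worker.lb.type=lb
        worker.lb.balance_workers=node1,node2,node3

        worker.node1.type=ajp13
        worker.node1.host=localhost
        worker.node1.port=8009

        worker.node2.type=ajp13
        worker.node2.host=host2.mydomain.local
        worker.node2.port=8009

        worker.node3.type=ajp13
        worker.node3.host=host3.mydomain.local
        worker.node3.port=8009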

  • qmake library reordering

    - by user1095108
    I put this into a qmake project file:

        QTPLUGIN += component
        LIBS += -L../lib -lmodule -lcomponent -lnetworking

    But qmake reorders the libraries behind my back:

        g++ -m64 -Wl,-O1,--sort-common,--as-needed,-z,relro -o testb .obj/constants.o .obj/main.o .obj/qrc_application.o -L/usr/lib -L../lib -lmodule -lnetworking -lcomponent -lQtGui -lQtNetwork -lQtCore -lpthread

    This is probably because component is a static plugin. But component has a static dependency on the networking library, so the reordering causes a link error. It is a static dependency, and hence is OK in my opinion. How do I work around this? I'm using Qt 4.8.1.
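
    One hedged workaround at the linker level (GNU ld only, untested against this project): wrap the problem libraries in a group so ld re-scans them until all symbols resolve, which makes the ordering qmake emits harmless:

        # Sketch: let GNU ld resolve the ordering itself; if qmake still
        # reorders the bare -l flags, they can be passed through -Wl as well.
        LIBS += -L../lib \
            -Wl,--start-group -lmodule -lcomponent -lnetworking -Wl,--end-group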

  • Change cookies when doing jQuery.ajax requests in Chrome Extensions

    - by haskellguy
    I have written a plugin for Facebook that sends data to testing-fb.local. The request goes through if the user is logged in. Here is the workflow:

        1. User logs in from testing-fb.local
        2. Cookies are stored
        3. $.ajax() requests are fired from the Chrome extension
        4. The Chrome extension listens with chrome.webRequest.onBeforeSendHeaders
        5. The Chrome extension checks for cookies with chrome.cookies.get
        6. Chrome changes the Cookie header to be sent
        7. The request goes through

    I wrote this part of the code, which should do this:

        function getCookies (callback) {
            chrome.cookies.get({url: "https://testing-fb.local", name: "connect.sid"}, function(a) {
                return callback(a);
            });
        }

        chrome.webRequest.onBeforeSendHeaders.addListener(
            function(details) {
                getCookies(function(a) {
                    // Here something happens
                });
            },
            {urls: ["https://testing-fb.local/*"]},
            ['blocking']);

    Here is my manifest.json:

        {
            "name": "test-fb",
            "version": "1.0",
            "manifest_version": 1,
            "description": "testing",
            "permissions": [
                "cookies",
                "webRequest",
                "tabs",
                "http://*/*",
                "https://*/*"
            ],
            "background": {
                "scripts": ["background.js"]
            },
            "content_scripts": [
                {
                    "matches": ["http://*.facebook.com/*", "https://*.facebook.com/*"],
                    "exclude_matches": [
                        "*://*.facebook.com/ajax/*",
                        "*://*.channel.facebook.tld/*",
                        "*://*.facebook.tld/pagelet/generic.php/pagelet/home/morestories.php*",
                        "*://*.facebook.tld/ai.php*"
                    ],
                    "js": ["jquery-1.8.3.min.js", "allthefunctions.js"]
                }
            ]
        }

    In allthefunctions.js I have the $.ajax calls, and background.js is where I put the code above, which however does not seem to run. In summary, I am not clear on:

        - what I should write where "Here something happens" is,
        - whether this strategy is going to work, and
        - where I should put this code.
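
    A hedged sketch of one way to fill that gap (the caching approach is an assumption; the constraint it works around is that a blocking onBeforeSendHeaders listener must return synchronously and cannot wait for chrome.cookies.get):

        // background.js sketch: keep the cookie cached so the blocking
        // listener can answer synchronously.
        var sid = null;

        function refreshCookie() {
            chrome.cookies.get(
                {url: "https://testing-fb.local", name: "connect.sid"},
                function (cookie) { sid = cookie && cookie.value; });
        }

        refreshCookie();
        chrome.cookies.onChanged.addListener(refreshCookie);

        chrome.webRequest.onBeforeSendHeaders.addListener(
            function (details) {
                if (sid) {
                    details.requestHeaders.push(
                        {name: "Cookie", value: "connect.sid=" + sid});
                }
                return {requestHeaders: details.requestHeaders};
            },
            {urls: ["https://testing-fb.local/*"]},
            ["blocking", "requestHeaders"]);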

  • Windows service: Listening on socket while running as LocalSystem

    - by Socob
    I'm writing a small server-like program in C for Windows (using MinGW/GCC, testing on Windows 7) which is eventually supposed to run as a service under the LocalSystem account. I am creating a socket and using Winsock's bind(), listen() and accept() to listen for incoming connections. If I run the application from the command line (i.e. not as a service, but as a normal user), I have no problems connecting to it from external IPs. However, if I run the program as a service under the LocalSystem account, I can only connect to the service from my own PC, either via 127.0.0.1 or my local address, 192.168.1.80 (I'm behind a router in a small local network). Neither external IPs nor other PCs in the same local network (using my local address) can connect now, even though there were no problems when not running as a service. Now, I've heard that networking is handled differently, or is even inaccessible, when running as LocalSystem or LocalService, or that services cannot access both the desktop and the network at the same time due to security considerations (note: my service is not interactive). Essentially, I need to find out what's going wrong and how to listen for connections in a service. Is running as NetworkService the same as running as LocalSystem, but with network access? Surely there must be servers that can run as background services, so how do they do it?
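
    One thing worth ruling out (a hedged aside, not from the question): Windows Firewall scopes per-program rules to the executable and profile, so the interactive run may have been allowed by a prompt that never appears for a service process. A rule like this sketch (name and path assumed) permits the binary explicitly:

        netsh advfirewall firewall add rule name="MyServer" dir=in action=allow program="C:\path\to\myserver.exe" enable=yes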

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written. In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy: all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once; all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header, the proxy looks up the ETag in the tag cache to see if it is current; the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header.

    If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file, and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key                                                Value
        /WcfSampleService/PersonService.svc/rest/fetch/3   "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        "28cd4796-76b8-451b-adfd-75cb50a50fa6"             /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key, and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache, which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service: we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second, average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second, mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison; but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6"), which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
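
    As a sketch of the proxy's hot path described above (function names are invented here; the real code lives in the GitHub sample):

        // Node proxy flow: tag cache first, file cache second, WCF last.
        var fs = require('fs');

        function handle(req, res, getCurrentEtag, proxyToWcf) {
            getCurrentEtag(req.url, function (etag) {
                if (!etag) return proxyToWcf(req, res);          // nothing cached yet
                if (req.headers['if-none-match'] === etag) {
                    res.writeHead(304, { 'ETag': etag });        // client copy is current
                    return res.end();
                }
                var file = 'caches/' + etag.replace(/"/g, '') + '.body';
                fs.readFile(file, function (err, body) {
                    if (err) return proxyToWcf(req, res);        // miss: call the service
                    res.writeHead(200, { 'ETag': etag });
                    return res.end(body);                        // hit: serve from proxy
                });
            });
        }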

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model, and in the process introduced tiled_index and tiled_grid. This is part 2.

    tile_static memory

    You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP: the same rules apply, and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, a value that threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array:

        // each tile of threads has its own copy of locA,
        // shared among the threads of the tile
        tile_static float locA[16][16];

    Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors.

    tile_barrier

    In amp.h, one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait.

        15: // Given a tiled_index object named t_idx
        16: t_idx.barrier.wait();
        17: // more code

    In the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code such that there is a chance not all threads could reach line 16, then the code above would be illegal.

    Tiled Matrix Multiplication Example – part 2

    So now that we have added to our understanding the concepts of tile_static and tile_barrier, let me rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code.
        01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
        02: {
        03:   static const int TS = 16;
        04:   array_view<const float,2> a(M, W, vA);
        05:   array_view<const float,2> b(W, N, vB);
        06:   array_view<writeonly<float>,2> c(M,N,vC);
        07:   parallel_for_each(c.grid.tile< TS, TS >(),
        08:     [=] (tiled_index< TS, TS> t_idx) restrict(direct3d)
        09:   {
        10:     int row = t_idx.local[0]; int col = t_idx.local[1];
        11:     float sum = 0.0f;
        12:     for (int i = 0; i < W; i += TS) {
        13:       tile_static float locA[TS][TS], locB[TS][TS];
        14:       locA[row][col] = a(t_idx.global[0], col + i);
        15:       locB[row][col] = b(row + i, t_idx.global[1]);
        16:       t_idx.barrier.wait();
        17:       for (int k = 0; k < TS; k++)
        18:         sum += locA[row][k] * locB[k][col];
        19:       t_idx.barrier.wait();
        20:     }
        21:     c[t_idx.global] = sum;
        22:   });
        23: }

    Notice that all the code up to line 9 is the same as per the changes we made in part 1 of the tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21. The difference is that those lines use the new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b) to the tile_static vanilla arrays (locA and locB); we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool.

    Why would I introduce this tiling complexity into my code?

    Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to deal with finding a good tile size, had to ensure that the number of threads we schedule is evenly divisible by the tile size, had to use a tiled_index instead of a normal index, had to understand tile_barrier and figure out where we need to use it, and doubled the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory. Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU; e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts. Now, some algorithms will not have performance benefits from tiling (and may in fact deteriorate), e.g. algorithms that require you to go only once to global memory will not benefit from tiling, since with tiling you already have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest to make it 250 times faster. Also, algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Also note that in future releases we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today. Comments about this post by Daniel Moth welcome at the original blog.

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is a lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:

        Random generator = new Random();
        Func<int> func = () => generator.Next(10);

    In this case, the compiler generates a new class called <>c__DisplayClass1 which is marked with the CompilerGenerated attribute.

        [CompilerGenerated]
        private sealed class <>c__DisplayClass1
        {
            // Fields
            public Random generator;

            // Methods
            public int <Main>b__0()
            {
                return this.generator.Next(10);
            }
        }

    Two quick comments on this:

    (i) A display was the means by which compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name.

    (ii) It is a shame that the same attribute is used to mark all compiler generated classes, as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high level concepts.

    We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func object simply constructs an instance of this class and initialises the fields to their initial values.

        <>c__DisplayClass1 class2 = new <>c__DisplayClass1();
        class2.generator = new Random();
        Func<int> func = new Func<int>(class2.<Main>b__0);

    Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this example is correctly decompiled. The use of compiler generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.

        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerator interface with its Current property for fetching the current value and the MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerator returning method using the yield keyword causes the compiler to produce the implementation of an Iterator. Take the following piece of code.

        IEnumerable<int> GetItems()
        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread. This latter value is used for optimising the creation of iterator instances.

        [CompilerGenerated]
        private sealed class <GetItems>d__0 : IEnumerable<int>, IEnumerable, IEnumerator<int>, IEnumerator, IDisposable
        {
            // Fields
            private int <>1__state;
            private int <>2__current;
            public Program <>4__this;
            private int <>l__initialThreadId;

    The body gets converted into the code to construct and initialize this new class.

        private IEnumerable<int> GetItems()
        {
            <GetItems>d__0 d__ = new <GetItems>d__0(-2);
            d__.<>4__this = this;
            return d__;
        }

    When the class is constructed we set the state, which was passed through as -2, and the current thread.

        public <GetItems>d__0(int <>1__state)
        {
            this.<>1__state = <>1__state;
            this.<>l__initialThreadId = Thread.CurrentThread.ManagedThreadId;
        }

    The state needs to be set to 0 to represent a valid enumerator, and this is done in the GetEnumerator method, which optimises for the usual case where the returned enumerator is only used once.

        IEnumerator<int> IEnumerable<int>.GetEnumerator()
        {
            if ((Thread.CurrentThread.ManagedThreadId == this.<>l__initialThreadId)
                && (this.<>1__state == -2))
            {
                this.<>1__state = 0;
                return this;
            }

    The state machine itself is implemented inside the MoveNext method.

        private bool MoveNext()
        {
            switch (this.<>1__state)
            {
                case 0:
                    this.<>1__state = -1;
                    this.<>2__current = 1;
                    this.<>1__state = 1;
                    return true;
                case 1:
                    this.<>1__state = -1;
                    this.<>2__current = 2;
                    this.<>1__state = 2;
                    return true;
                case 2:
                    this.<>1__state = -1;
                    this.<>2__current = 3;
                    this.<>1__state = 3;
                    return true;
                case 3:
                    this.<>1__state = -1;
                    break;
            }
            return false;
        }

    At each stage, the current value of the state is used to determine how far we got; we then generate the next value, which we return after recording the next state. Finally we return false from MoveNext to signify the end of the sequence. Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext, so they need to be transformed into fields in much the same way as in the case of the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value.

    Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous versions, where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that, Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine the use of the compiler generated fields.

    In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to be dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.

  • Oracle B2B - Synchronous Request Reply

    - by cdwright
    Introduction

    So first off, let me say I didn't create this demo (although I did modify it some). I got it from a member of the B2B development technical staff. Since it came with only a simple readme file, I thought I would take some time and write a more detailed explanation of how it works. Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request reply over HTTP using the b2b/syncreceiver servlet. I'm attaching the demo to this blog; it includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test XML file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool (I'm using Firefox Poster here). You can download the zip file containing the demo here.

    The demo works by sending the sample XML request file (req.xml) to http://<b2bhost>:8001/b2b/syncreceiver using the SOAP test tool. The syncreceiver servlet keeps the socket connection open between itself and the test tool so that it can synchronously send the reply message back. When B2B receives the inbound request message, it is passed to the SOA composite through the default B2B Fabric binding. A simple reply is created in BPEL and returned to B2B, which then sends the message back to the test tool using that same socket connection. I'll show you the B2B configuration first, then we'll look at the SOA composite.

    Configuring B2B

    No additional configuration is necessary in order to use the syncreceiver servlet; it is already running when you start SOA. After importing the GC_SyncReqRep.zip repository file into B2B, you'll have the typical GlobalChips host trading partner and the Acme remote trading partner.

    Document Management

    The repository contains two very simple custom XML document definitions called Orders and OrdersResponse. In order to determine the trading partner agreement needed to process the inbound Orders document, you need to know two things about it: what it is and where it came from. So let's look at how B2B identifies the appropriate document definition for the message. The XSDs for these two document definitions themselves are not particularly interesting. Whenever you're dealing with custom XML documents, B2B identifies the appropriate document definition for each XML message using an XPath identification expression. The expression is entered for each of these document definitions under the document administration tab in the B2B console. The full XPath expression for the Orders document is //*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text(). You can see this path in the XSD diagram below and how it uniquely identifies this message. The OrdersResponse document is identified in the same way. The XPath expression for it is //*[local-name()='Response']/*[local-name()='Status']/text(). You can see how its path differs, uniquely identifying the reply from the request.

    Trading Partner Profile

    The trading partner profiles are very simple too. For GlobalChips, a generic identifier is being used to identify the sender of the response document using the host trading partner name. For Acme, a generic identifier is also being used to identify the sender of the inbound request using the remote trading partner name. The document types are added for the remote trading partner as usual. So the remote trading partner Acme is the sender of the Orders document, and it is the receiver of the OrdersResponse document. For the remote trading partner only, there needs to be a dummy channel, which gets used in the outbound response agreement. The channel is not actually used; it is just a necessary placeholder that needs to be there when creating the agreement.

    Trading Partner Agreement

    The agreements are equally simple. There is no validation, and translation is not an option for a custom XML document type. For the InboundAgreement (request), the document definition is set to OrdersDef. In the Agreement Parameters section, the generic identifiers have been added for the host and remote trading partners. That's all that is needed for the inbound transaction. For the OutboundAgreement (response), the document definition is set to OrdersResponseDef and the generic identifiers for the two trading partners are added. The remote trading partner dummy delivery channel is also added to the agreement.

    SOA Composite

    Import the SOA composite archive into JDeveloper as an EJB JAR file. Open the composite and you should have a project that looks like this. In the composite, open the b2bInboundSyncSvc exposed service and advance through the setup wizard. Select your Application Server Connection and advance to the Operations window. Notice here that the B2B binding is set to Receive; it is not set for Synchronous Request Reply. Continue advancing through the wizard as you normally would and select finish at the end. Now open BPELProcess1 in the composite. The BPEL process is set as a Synchronous Request Reply, as you can see below. The while loop is there just to give the process something to do. The actual reply message is prepared in the assignResponseValues assignment, followed by an Invoke of the B2B binding. Open the replyResponse Invoke and go to the properties tab. You'll see that the fromTradingPartnerId, toTradingPartner, documentTypeName, and documentProtocolRevision properties have been set.

    Testing the Configuration

    To test the configuration, I used Firefox Poster. Enter the URL for the b2b/syncreceiver servlet and browse for the req.xml file that contains the test request message. In the Headers tab, add the property 'from' and give it the value 'Acme'. This is how B2B will know where the message is coming from, and it will use that information along with the document type name to find the right trading partner agreement. Now post the message. You should get back a response with a status of '200 OK'. That's all there is to it.

  • WPF animation: binding to the "To" attribute of storyboard animation.

    - by bozalina
    Hi, I'm trying to create a button that behaves similarly to the "slide" button on the iPhone. I have an animation that adjusts the position and width of the button, but I want these values to be based on the text used in the control. Currently, they're hardcoded. Here's my working XAML, so far:

    <CheckBox x:Class="Smt.Controls.SlideCheckBox"
              xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
              xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
              xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
              xmlns:local="clr-namespace:Smt.Controls"
              xmlns:System.Windows="clr-namespace:System.Windows;assembly=PresentationCore"
              Name="SliderCheckBox"
              mc:Ignorable="d">
      <CheckBox.Resources>
        <System.Windows:Duration x:Key="AnimationTime">0:0:0.2</System.Windows:Duration>
        <Storyboard x:Key="OnChecking">
          <DoubleAnimation Storyboard.TargetName="CheckButton"
                           Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(TranslateTransform.X)"
                           Duration="{StaticResource AnimationTime}"
                           To="40" />
          <DoubleAnimation Storyboard.TargetName="CheckButton"
                           Storyboard.TargetProperty="(Button.Width)"
                           Duration="{StaticResource AnimationTime}"
                           To="41" />
        </Storyboard>
        <Storyboard x:Key="OnUnchecking">
          <DoubleAnimation Storyboard.TargetName="CheckButton"
                           Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(TranslateTransform.X)"
                           Duration="{StaticResource AnimationTime}"
                           To="0" />
          <DoubleAnimation Storyboard.TargetName="CheckButton"
                           Storyboard.TargetProperty="(Button.Width)"
                           Duration="{StaticResource AnimationTime}"
                           To="40" />
        </Storyboard>
        <Style x:Key="SlideCheckBoxStyle" TargetType="{x:Type local:SlideCheckBox}">
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="{x:Type local:SlideCheckBox}">
                <Canvas>
                  <ContentPresenter SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}"
                                    Content="{TemplateBinding Content}"
                                    ContentTemplate="{TemplateBinding ContentTemplate}"
                                    RecognizesAccessKey="True"
                                    VerticalAlignment="Center"
                                    HorizontalAlignment="Center" />
                  <Canvas>
                    <!--Background-->
                    <Rectangle Width="{Binding ElementName=ButtonText, Path=ActualWidth}"
                               Height="{Binding ElementName=ButtonText, Path=ActualHeight}"
                               Fill="LightBlue" />
                  </Canvas>
                  <Canvas>
                    <!--Button-->
                    <Button Width="{Binding ElementName=CheckedText, Path=ActualWidth}"
                            Height="{Binding ElementName=ButtonText, Path=ActualHeight}"
                            Name="CheckButton"
                            Command="{x:Static local:SlideCheckBox.SlideCheckBoxClicked}">
                      <Button.RenderTransform>
                        <TransformGroup>
                          <TranslateTransform />
                        </TransformGroup>
                      </Button.RenderTransform>
                    </Button>
                  </Canvas>
                  <Canvas>
                    <!--Text-->
                    <StackPanel Name="ButtonText" Orientation="Horizontal" IsHitTestVisible="False">
                      <Grid Name="CheckedText">
                        <Label Margin="7 0" Content="{Binding RelativeSource={RelativeSource AncestorType={x:Type local:SlideCheckBox}}, Path=CheckedText}" />
                      </Grid>
                      <Grid Name="UncheckedText" HorizontalAlignment="Right">
                        <Label Margin="7 0" Content="{Binding RelativeSource={RelativeSource AncestorType={x:Type local:SlideCheckBox}}, Path=UncheckedText}" />
                      </Grid>
                    </StackPanel>
                  </Canvas>
                </Canvas>
                <ControlTemplate.Triggers>
                  <Trigger Property="IsChecked" Value="True">
                    <Trigger.EnterActions>
                      <BeginStoryboard Storyboard="{StaticResource OnChecking}" />
                    </Trigger.EnterActions>
                    <Trigger.ExitActions>
                      <BeginStoryboard Storyboard="{StaticResource OnUnchecking}" />
                    </Trigger.ExitActions>
                  </Trigger>
                </ControlTemplate.Triggers>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>
      </CheckBox.Resources>
      <CheckBox.CommandBindings>
        <CommandBinding Command="{x:Static local:SlideCheckBox.SlideCheckBoxClicked}"
                        Executed="OnSlideCheckBoxClicked" />
      </CheckBox.CommandBindings>
    </CheckBox>

    And the code behind:

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Input;

    namespace Smt.Controls
    {
        public partial class SlideCheckBox : CheckBox
        {
            public SlideCheckBox()
            {
                InitializeComponent();
                Loaded += OnLoaded;
            }

            public static readonly DependencyProperty CheckedTextProperty =
                DependencyProperty.Register("CheckedText", typeof(string), typeof(SlideCheckBox),
                    new PropertyMetadata("Checked Text"));

            public string CheckedText
            {
                get { return (string)GetValue(CheckedTextProperty); }
                set { SetValue(CheckedTextProperty, value); }
            }

            public static readonly DependencyProperty UncheckedTextProperty =
                DependencyProperty.Register("UncheckedText", typeof(string), typeof(SlideCheckBox),
                    new PropertyMetadata("Unchecked Text"));

            public string UncheckedText
            {
                get { return (string)GetValue(UncheckedTextProperty); }
                set { SetValue(UncheckedTextProperty, value); }
            }

            public static readonly RoutedCommand SlideCheckBoxClicked = new RoutedCommand();

            void OnLoaded(object sender, RoutedEventArgs e)
            {
                Style style = TryFindResource("SlideCheckBoxStyle") as Style;
                if (!ReferenceEquals(style, null))
                {
                    Style = style;
                }
            }

            void OnSlideCheckBoxClicked(object sender, ExecutedRoutedEventArgs e)
            {
                IsChecked = !IsChecked;
            }
        }
    }

    The problem comes when I try to bind the "To" attribute in the DoubleAnimations to the actual width of the text, the same as I'm doing in the ControlTemplate. If I bind the values to an ActualWidth of an element in the ControlTemplate, the control comes up as a blank checkbox (my base class). However, I'm binding to the same ActualWidths in the ControlTemplate itself without any problems. Just seems to be the CheckBox.Resources that have a problem with it. For instance, the following will break it:

    <DoubleAnimation Storyboard.TargetName="CheckButton"
                     Storyboard.TargetProperty="(Button.Width)"
                     Duration="{StaticResource AnimationTime}"
                     To="{Binding ElementName=CheckedText, Path=ActualWidth}" />

    I don't know whether this is because it's trying to bind to a value that doesn't exist until a render pass is done, or if it's something else. Anyone have any experience with this sort of animation binding?

    Read the article

  • Can't get MySQL to install

    - by James Marthenal
    I'd like to think I know what I'm doing in a Unix shell, but maybe not. I made a mistake in a configuration file for MySQL, so I decided to just uninstall it and then reinstall it, so I did:

    sudo apt-get --purge remove mysql-server mysql-server-5.0 mysql-client

    The files were deleted, so I then tried to install it, but it didn't ask me for a root password or anything else, so I uninstalled it using the above command again and then did:

    sudo rm -rf /etc/mysql
    sudo rm /etc/init.d/mysql
    sudo rm -rf /var/lib/mysql*

    I then restarted the computer, then installed it again:

    sudo apt-get install mysql-server mysql-client

    It asked for a root password, and everything looked like it would work, until I saw this:

    $ sudo apt-get install mysql-server mysql-client
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
      mysql-server-5.0
    Suggested packages:
      tinyca
    The following NEW packages will be installed:
      mysql-client mysql-server mysql-server-5.0
    0 upgraded, 3 newly installed, 0 to remove and 1 not upgraded.
    Need to get 0B/27.4MB of archives.
    After this operation, 86.7MB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    WARNING: The following packages cannot be authenticated!
      mysql-server-5.0 mysql-client mysql-server
    Authentication warning overridden.
    Preconfiguring packages ...
    Can't exec "/tmp/mysql-server-5.0.config.28101": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168.
    open2: exec of /tmp/mysql-server-5.0.config.28101 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
    mysql-server-5.0 failed to preconfigure, with exit status 255
    Selecting previously deselected package mysql-server-5.0.
    (Reading database ... 160284 files and directories currently installed.)
    Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ...
    Selecting previously deselected package mysql-client.
    Unpacking mysql-client (from .../mysql-client_5.0.51a-24+lenny5_all.deb) ...
    Selecting previously deselected package mysql-server.
    Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ...
    Processing triggers for man-db ...
    Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ...
    Stopping MySQL database server: mysqld.
    /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory
    dpkg: error processing mysql-server-5.0 (--configure):
      subprocess post-installation script returned error exit status 1
    Setting up mysql-client (5.0.51a-24+lenny5) ...
    dpkg: dependency problems prevent configuration of mysql-server:
      mysql-server depends on mysql-server-5.0; however:
        Package mysql-server-5.0 is not configured yet.
    dpkg: error processing mysql-server (--configure):
      dependency problems - leaving unconfigured
    Errors were encountered while processing:
      mysql-server-5.0 mysql-server
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Now I can't seem to figure out what to do. I just want to get a clean MySQL installation at this point. I'm running the latest stable release of Debian. All help is appreciated, thanks!

    Edit: I looked at this similar question, which suggests that I uninstall mysql-common, but when I try to do so I see:

    The following packages will be REMOVED:
      apache2 apache2-mpm-prefork apache2-utils apache2.2-common git-svn libapache2-mod-php5
      libapache2-mod-python libapache2-svn libaprutil1 libdbd-mysql-perl libdbd-mysql-rubygem
      libmysql-ruby libmysql-ruby1.8 libmysql-rubygem libmysqlclient15-dev libmysqlclient15off
      librdf-perl librdf0 libserf-0-0 libsvn-perl libsvn1 mysql-client-5.0 mysql-common mytop
      ndn-apache22-php5 ndn-apache22-svn ndn-interpreters ndn-lighttpd ndn-netsaint-plugins
      ndn-perl-modules ndn-php5-cgi ndn-php5-xcache ndn-php53 ndn-php53-suhosin ndn-rubygems
      php5 php5-mcrypt php5-mysql proftpd proftpd-mod-mysql python-django python-mysqldb
      python-subversion python-svn subversion subversion-tools trac zendoptimizer
    0 upgraded, 0 newly installed, 48 to remove and 1 not upgraded.

    Eeek! Any suggestions?
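    For what it's worth, the visible failure is the postinst script dying on /etc/mysql/conf.d/old_passwords.cnf, and /etc/mysql was deleted by hand earlier, so one hedged guess (a sketch, not a guaranteed fix) is to recreate that directory and let dpkg finish configuring before resorting to purging mysql-common:

    # Recreate the config directory the mysql-server-5.0 postinst expects,
    # then let dpkg retry the half-configured packages.
    sudo mkdir -p /etc/mysql/conf.d
    sudo dpkg --configure -a
    # If apt still reports broken dependencies afterwards:
    sudo apt-get -f install

    That would avoid touching mysql-common at all, which is the package that drags apache2, php5, and the rest of that removal list along with it.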

    Read the article

< Previous Page | 185 186 187 188 189 190 191 192 193 194 195 196  | Next Page >