Search Results

Search found 6774 results on 271 pages for 'special locations'.


  • Can you/should you develop components for ASP.NET MVC?

    - by Vilx-
    Following from the previous question I've started to wonder - is it possible to implement "components" in ASP.NET MVC (latest version)? And should you?

    Let's clarify what I mean by a "component". I mean a "control" (aka "widget"), similar to those that ASP.NET WebForms is built upon. A GridView might be a good example. In WebForms I can place a datasource component on my form (one line of code), a GridView component (another line of code) and bind them together (specify an attribute on the GridView). In the codebehind file I fill the datasource with data (a few lines of DB-querying code), and I'm all set. At this point the GridView is a fully functional standalone component. I can open the form, and I'll see all the data. I can sort it by clicking on the column headers; it is split into several pages; I can drag the column headers around and rearrange columns; I can turn on "grouping" mode; etc. And I don't need to write another line of code for any of it. The GridView, as a component, already has all the code tucked away in its classes and assemblies. I just place it on the form, initialize it, and it Just Works. At times (like sorting or navigating to a different page) it will also perform Ajax callbacks to the server, but those too are handled internally, with my code having no knowledge at all about it. And there are also events that I can attach to if I want to get notified when something happens.

    In MVC I cannot see a way of doing this cleanly. Sure, there are partial views, but those only handle half of the problem - they render the initial HTML. Some more can be achieved with client-side JavaScript (like column re-arranging), but when the grid needs to do an Ajax callback (say, to fetch the next page of data), my code will have to get involved and process that request. At best I guess I can provide some helper methods to process it, but I'll have to write the code that calls them, and also provide a controller method with a signature matching the arguments of that callback. I guess I could make some hacks with global events or special routes or something, but that just seems... hackish. Inelegant. Perhaps this is not the MVC way? Although I've completed one project in it, I'm still far from being an MVC expert. But then what is?

    In the intranet application that we're building there are dozens upon dozens of such grids. Naturally I want them all to have a unified look & behavior, and I don't want to repeat the same code all over the place. So what's the "MVC" approach to this problem?

    Read the article

  • Pinterest and the Rising Power of Imagery

    - by Mike Stiles
    If images keep you glued to a screen, you're hardly alone. Countless social users are letting their eyes do the walking, waiting for that special photo to grab their attention. And perhaps more than any other social network, Pinterest has been giving those eyes plenty of room to walk.

    Pinterest came along in 2010. Its play was that users could simply create topic boards and pin pictures to the appropriate boards for sharing. Yes, there are some words, captions mostly, but not many. The speed of its growth raised eyebrows. Traffic quadrupled in the last quarter of 2011, with 7.51 million unique visitors in December alone. It now gets 1.9 billion monthly page views. And it was sticky. In the US, the average time a user spends strolling through boards and photos on Pinterest is 15 minutes, 50 seconds.

    Proving the concept of browsing a catalogue is not dead, it became a top 5 referrer for several apparel retailers like Lands' End, Nordstrom, and Bergdorf's. Now a survey of online shoppers by BizRate Insights says that Pinterest is responsible for more purchases online than Facebook. Over 70% of its users are going there specifically to keep up with trends and get shopping ideas. And when they buy, the average order value is $179. Pinterest is also scoring better in terms of user engagement: 66% of pinners regularly follow and repin retailers, whereas 17% of Facebook fans turn to that platform for purchase ideas. (Facebook still wins when it comes to reach and driving traffic to 3rd-party sites, by the way.)

    Social posting best practices have consistently shown that posts with photos are rewarded with higher engagement levels. You may be downright Shakespearean in your writing, but what makes images in the digital world so much more powerful than prose?

    1. They transcend language barriers.
    2. They're fun and addictive to look at.
    3. They can be consumed in fractions of a second, important considering how fast users move through their social content (admit it, you do too).
    4. They're efficient gateways. A good picture might get them to the headline. A good headline might then get them to the written content.
    5. The audience for them surpasses demographic limitations.
    6. They can effectively communicate and trigger an emotion.
    7. With mobile use soaring, photos are created on those devices and easily consumed and shared on them. Pinterest's iPad app hit #1 in the Apple store in 1 day. Even as far back as 2009, over 2.5 billion devices with cameras were on the streets, generating in just 1 year 10% of the number of photos taken... ever.

    But let's say you're not a retailer. What if you're a B2B whose products or services aren't visual? Should you worry about your presence on Pinterest? As with all things, you need a keen awareness of who your audience is, where they reside online, and what they want to do there. If it doesn't make sense to put a tent stake in Pinterest, fine. But ignore the power of pictures at your own peril. If not visually, how are you going to grab the attention of social users scrolling down their News Feeds at top speed? You're competing with every other cool image out there from countless content sources. Bore us and we'll fly right past you.

    Read the article

  • How to make a stack stable? Need help for an explicit resting contact scheme (2-dimensional)

    - by Register Sole
    Previously I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver, as described by Erin Catto (mentioned in the references of the paper as [Catt05]).

    So here's how it currently is:

    - The simulation handles 2-dimensional rotating convex polygons.
    - Detection uses the separating-axis test, with a SKIN: the closest points between two polygons are found, and a contact is registered if their distance is less than SKIN.
    - To resolve collisions, the simultaneous impulse-based method is used, solved with the iterative PGS solver as in Erin Catto's paper.
    - Error correction is implemented using Baumgarte stabilization (you can refer to either paper for this), as J V = beta/dt * overlap, where J is the Jacobian for the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that had better be < 1, dt the time step taken by the engine, and overlap the overlap between the bodies (true overlap, so SKIN is ignored). A solver sketch follows at the end of this post.

    However, it is still less stable than I expected :s I tried to stack hexagons (or squares, it doesn't really matter), and even with only 4 to 5 of them, they would swing! Also note that I am not looking for a sleeping scheme. But I would settle if you have any explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state).

    Ideas I have tried: using the simultaneous position-based error correction described in section 5.3.2 of the paper, which turned out to be worse than the current scheme.

    If you want to know the parameters I used:

    - hexagons, side 50 (pixels)
    - gravity 2400 (pixels/sec^2)
    - time step 1/60 (sec)
    - beta 0.1
    - restitution 0 to 0.2
    - coefficient of friction 0.2
    - PGS iterations 10
    - initial separation 10 (pixels)
    - mass 1 (the unit is irrelevant for now; I modify velocity directly <- impulse method)
    - inertia 1/1000

    Thanks in advance! I really appreciate any help from you guys!! :)

    EDIT In response to Cholesky's comment about warm starting the solver and Baumgarte: Oh right, I forgot to mention! I do save the contact history and the impulses determined in this time step to be used as the initial guess in the next time step. As for the Baumgarte, here's what actually happens in the code. Collision is detected when the bodies' closest distance is less than SKIN, meaning they are actually still separated. If at this moment I used the PGS solver without Baumgarte, a restitution of 0 alone would be able to stop the bodies, separated by a distance of ~SKIN, in mid-air! So this isn't right; I want to have the bodies touching each other. So I turn on the Baumgarte, where its role is actually to pull the bodies together! Weird, I know - a scheme intended to push the bodies apart becomes useful for the reverse. Also, I found that if I increase the number of iterations to 100, stacks become much more stable, though the program becomes very slow.

    UPDATE Since the stack swings left and right, could it be something is wrong with my friction model? Current friction constraint: relative_tangential_velocity = 0
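
    For reference, here is a minimal sketch (in Java, with illustrative Body/Contact types - none of this is the asker's actual code) of how the Baumgarte bias term is typically folded into the normal-impulse row of a PGS iteration, with accumulated-impulse clamping. The penetration slop is an added assumption, not something from the question, but it often reduces exactly this kind of jitter:

        // Minimal PGS normal-impulse sketch with Baumgarte stabilization.
        // 1D along the contact normal (pointing from a to b) for brevity.
        class Body {
            double invMass;   // 1/m (0 for static bodies)
            double v;         // velocity component along the contact normal
        }

        class Contact {
            Body a, b;
            double overlap;      // penetration depth (> 0 when overlapping)
            double accImpulse;   // accumulated normal impulse (basis for warm starting)

            double relNormalVel() { return b.v - a.v; }               // J*V for this row
            double effMass()      { return 1.0 / (a.invMass + b.invMass); }
        }

        class PgsSolver {
            static final double SLOP = 0.5;   // allowed penetration (pixels); tune per engine

            static void solve(Contact[] contacts, double beta, double dt, int iterations) {
                for (int it = 0; it < iterations; it++) {
                    for (Contact c : contacts) {
                        // Baumgarte feedback: request separating velocity proportional to overlap
                        double bias = (beta / dt) * Math.max(0.0, c.overlap - SLOP);
                        double lambda = -c.effMass() * (c.relNormalVel() - bias);

                        // Projection step of PGS: total impulse may only push, never pull
                        double old = c.accImpulse;
                        c.accImpulse = Math.max(0.0, old + lambda);
                        double delta = c.accImpulse - old;

                        c.a.v -= delta * c.a.invMass;
                        c.b.v += delta * c.b.invMass;
                    }
                }
            }
        }

    On the friction question: the analogous projection for a tangential row clamps to the friction cone, |lambda_t| <= mu * lambda_n. A common cause of swinging stacks is clamping against the per-iteration normal impulse instead of the accumulated one, so that is worth checking.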

    Read the article

  • how to run mysql drop and create synonym in shell script

    - by bgrif
    I have added this command to a script I am writing and I am running into an issue with it not logging onto mysql and running the commands. How can I fix this and make it run?

        #! /bin/bash
        Subject: Please stage the following TFL09143 Locator Bulletin to all TF90 staging environments:

        # This next section is to go to mysql server and make changes. you can drop and create synonyms,
        # truncate a table and insert into a different one. you will be able to verify the counts to the
        # different locations #

        $ mysql --host=app03-bsi --u "" --p "" "TF90BPS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90BP.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90BP.TFBPS3.BTXSUPB && TRUNCATE TABLE TF90BP.TFBPS3.BTXSUPB SELECT * FROM TF90BP.TFBPS2.BTXSUPB; select count () from TF90BP.TF90.BTXADDR select count() from TF90BPS.TF90.BTXADDR; select count() from TF90BP.TF90.BTXSUPB; select count() from TF90BPS.TF90.BTXSUPB;"

        $ mysql --host=app03-bsi --u "" --p "" "TF90LMS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90LM.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90LM.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90LM.TFLMS2.BTXADDR; TRUNCATE TABLE TF90LM.TFLMS3.BTXSUPB; INSERT INTO TF90LM.TFLMS3.BTXSUPB SELECT * FROM TF90LM.TFLMS2.BTXSUPB; Verify select count() from TF90LM.TF90.BTXADDR; select count() from TF90LMS.TF90.BTXADDR; select count() from TF90LM.TF90.BTXSUPB; select count() from TF90LMS.TF90.BTXSUPB"

        $ mysql --host=app03-bsi --u "" --p "" "TF90NCS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90NC.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90NC.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90NC.TFNCS2.BTXADDR; TRUNCATE TABLE TF90NC.TFNCS3.BTXSUPB; INSERT INTO TF90NC.TFNCS3.BTXSUPB SELECT * FROM TF90NC.TFNCS2.BTXSUPB; Verify select count() from TF90NC.TF90.BTXADDR; select count() from TF90NCS.TF90.BTXADDR; select count() from TF90NC.TF90.BTXSUPB; select count() from TF90NCS.TF90.BTXSUPB"

        $ mysql --host=app03-bsi --u "" --p "" "TF90PVS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90PV.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90PV.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90PV.TFPVS2.BTXADDR; TRUNCATE TABLE TF90PV.TFPVS3.BTXSUPB; INSERT INTO TF90PV.TFPVS3.BTXSUPB SELECT * FROM TF90PV.TFPVS2.BTXSUPB; Verify select count() from TF90PV.TF90.BTXADDR; select count() from TF90PVS.TF90.BTXADDR; select count() from TF90PV.TF90.BTXSUPB; select count() from TF90PVS.TF90.BTXSUPB"

        TFL09143 Staging
        cd \ntsrv\common\To\IT-CERT-TEST\TFL09143   # change to mapped network drive
        cp -p TFL09143.pkg /d:/tf90/code_stg && /tf90bp/code_stg && /tf90lm/code_stg && /tf90pv/code_stg
        # Copies the package from the networked folder and then copies to the location(s) needed. #

        InvalidInput="true"
        if [ $# -eq 0 ] ; then
            echo "This script sets up TF90 Staging"
            echo -n "Which production do you want to run? (RB/TaxLocator/Cyclic)"
            read ProductionDistro
        else
            ProductionDistro="$1"
        fi
        while [ "$InvalidInput" = "true" ]
        do
            if [ "$ProductionDistro" = "RB" -o "$ProductionDistro" = "TaxLocator" -o "$ProductionDistro" = "Cyclic" ] ; then
                InvalidInput="false"
                break
            else
                echo "You have entered an error"
                echo "You must type RB or TaxLocator or Cyclic"
                echo "you typed $ProductionDistro"
                echo "This script sets up TF90 Staging"
                read ProductionDistro
            fi
        done

        InvalidInput="true"
        if [ $# -eq 0 ] ; then
            echo "This script sets up RB TF90 Staging"
            echo -n "Which Element do you want to run? (TF90/TF90BP/TF90LM/TF90PV/ALL)"
            read ElementDistro
        else
            ElementDistro="$1"
        fi
        while [ "$InvalidInput" = "true" ]
        do
            if [ "$ElementDistro" = "TF90" -o "$ElementDistro" = "TF90BP" -o "$ElementDistro" = "TF90LM" -o "$ElementDistro" = "TF90PV" -o "$ElementDistro" = "ALL" ] ; then
                InvalidInput="false"
                break
            else
                echo "You have entered an error"
                echo "You must type TF90 or TF90BP or TF90LM or TF90PV"
                echo "you typed $ElementDistro"
                echo "This script sets up TF90 Staging"
                read ElementDistro
            fi
        done

        if [ "$ElementDistro" = "TF90" ] ; then
            cd /d/tf90/code_stg
            vim TFL09143.pkg
            export var=TF90_CONNECT_STRING=DSN=TF90NCS;export Description=TF90NCS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90NCS;
            export DATASET=DEFAULT
            pkgintall -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "$TF90BP" ] ; then
            cd /d/tf90bp/code_stg
            vim TFL09143.pkg
            export TF90_CONNECT_STRING=DSN=TF90BPS;export Description=TF90BPS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90BPS;
            start tfloader -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "$TF90LM" ] ; then
            cd /d/tf90lm/code_stg
            vim TFL09143.pkg
            export TF90_CONNECT_STRING=DSN=TF90LMS;export Description=TF90LMS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90LMS;
            start tfloader -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "TF90PV" ] ; then
            cd /d/tf90pv/code_stg
            vim TFL09143.pkg
            export TF90_CONNECT_STRING=DSN=TF90PVS;Description=TF90PVS;Trusted_Connection=Yes;WSID=APP03-BSI;DATABASE=TF90PVS;
            start tfloader -l -v ../TFL09143.pkg
        fi
        exit 0

    Read the article

  • ADF page security - the untold password rule

    - by ankuchak
    I'm kinda new to Oracle ADF. So, in this blog post I'm going to share something with you that I faced (and recovered from) recently. Initially I wondered if I should put up a blog post on this at all, because it's totally simple. Still, simplicity is a relative term. So without wasting further time, let's kick off.

    I was exploring the ADF security aspect to secure a page through HTML basic authentication. The idea is very simple, and the credential store etc. come into the picture. But I was not able to run a successful test of this phenomenally simple thing even after trying for over 30 minutes. This is what I did.

    I created a simple JSF page and put a panel in it, with a simple EL to show the current user name. Next I created a user that I should test with. I set the password to myuser, just to keep it simple. Then I created an enterprise role and mapped the user that I just created. Then I created an application role and mapped the enterprise role to it. Then I mapped the resource, the simple JSF page in this case, to this application role. This way, only users with the given application role can access this page (as if you didn't know this, duh!). Of course, I had to create the page definition for the page before I could map it to an application role. What else! Done! Then I hit the run menu item and it all went well...

    Until... I got this message. I entered the correct credentials repeatedly 2-3 times. Still I got the same error. Why? I didn't get any error message during the deployment. Nope.

    Then, as I said before, I spent over 30 minutes trying different things out, things like mapping only the user (not the role) to the page, changing the context root etc. Nothing worked!

    Then of course, I bothered to look at the logs and found this. See the first red line. That says it all. So the problem was with that password. The password must have at least one special character and one digit in it. I think I was misled by the missing password hint/rule and the fact that the deployment didn't fail even though the user was not created properly. Well, yes, I agree that I was foolish enough not to look at the logs.

    Later I changed the password to something like myuser123# . And it worked. I hope it helped.

    Read the article

  • Understanding the Value of SOA

    - by Mala Narasimharajan
    Written by: Debra Lilley, ACE Director, Fusion Applications

    Again I want to talk from my area of expertise, Fusion Applications, and discuss their design fundamentals. If you look at the table below and start at the bottom, Oracle have defined all of the business objects (e.g. accounts, people, customers, invoices, etc.) used by Fusion Applications; each of these objects contains all of the information required and can be expanded if necessary. Oracle have then created, for each of these business objects, every action that is needed by the applications, e.g. all the actions to create a new customer: checking to see if it exists, credit checking with D&B (Dun & Bradstreet, http://www.dnb.co.uk/), creating the record, notifying those required, etc. Each of these actions is a stand-alone web service. Again, you can create new actions or subscribe to an externally provided web service, e.g. the D&B check.

    The diagram also shows that all development of Fusion Applications is built from Oracle's Fusion Middleware offerings. The Intelligent Business Process is then the order in which you run these actions; this is Service Oriented Architecture, SOA. Not only is SOA used to orchestrate actions within Fusion Applications, it is also used in the integration of Fusion Applications with the rest of the Oracle stable of applications such as EBS, PeopleSoft, JDE and Siebel. Those other applications are written with proprietary development tools, so how do they work with SOA? It's a very simple answer: with the introduction of the Oracle SOA platform, each process within these applications was made available to be called as a web service. I won't go into technically how that is done, but what's known as a wrapper was added to allow each of them to act in this way.

    Finally, at the top of the diagram are the questions that each Fusion Application process must answer, and this is the 'special sauce' that makes them so good: the User Experience. But that is a topic for another day, or you can read about it in my blog http://debrasoracle.blogspot.co.uk/2014/04/going-on-record-about-fusion-apps-cloud.html or Oracle's own UX blog https://blogs.oracle.com/usableapps/

    The concept behind AppAdvantage is not new; the idea that Oracle technology can add value to your Oracle applications investments is pretty fundamental. Nishit Rao, who is on the AppAdvantage team, provided myself and other ACE Directors with demo kits so that we could demonstrate SOA running with the applications. The example I learnt to build was that of the EBS inventory open interface. The simple concept is that request records can be added to a table and an import run that creates these as transactions in inventory. What SOA allows you to do is to add to the table from any source and then run this process automatically, whereas traditionally you had to run the process at regular intervals because you didn't know whether the table was empty or not. This may just sound like a different way of doing the same thing, but if the process is critical for your business then the interval was very small and the process potentially ran many times unnecessarily. Using SOA, it only runs when necessary, without any delay.

    So in my post today I've talked about how SOA is used with Fusion Applications and in linking with more traditional applications, but that is only the tip of the iceberg of its potential. Your applications are just part of your IT systems, and SOA can orchestrate your data across all of them; the beauty of open standards.

    Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Lilley has 18 years' experience with Oracle Applications, with E-Business Suite since 9.4.1, moving to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts.

    Read the article

  • Simple BizTalk Orchestration & Port Tutorial

    - by bosuch
    (This is a reference for a lunch & learn I'm giving at my company.)

    This demo will create a BizTalk process that monitors a directory for an XML file, loads it into an orchestration, and drops it into a different directory. There's no real processing going on (other than moving the file from one location to another), but this will introduce you to Messages, Orchestrations and Ports.

    To begin, create a new BizTalk Project named OrchestrationPortDemo. When the solution has been created, right-click the OrchestrationPortDemo solution name and select Add -> New Item. Add a BizTalk Orchestration named DemoOrchestration. Click Add and the orchestration will be created and displayed in the BizTalk Orchestration Designer. The designer allows you to visually create your business processes.

    Next, you will add a message (the basic unit of communication) to the orchestration. In the Orchestration View, right-click Messages and select New Message. In the message properties window, enter DemoMessage as the Identifier (the name), and select .NET Classes -> System.Xml.XmlDocument for Message Type. This indicates that we'll be passing a standard XML document in and out of the orchestration.

    Next, you will add Send and Receive shapes to the orchestration. From the toolbox, drag a Receive shape onto the orchestration (where it says "Drop a shape from the toolbox here"). Next, drag a Send shape directly below the Receive shape. For the properties of both shapes, select DemoMessage for Message - this indicates we'll be passing around the message we created earlier. The Operation box will have a red exclamation mark next to it because no port has been specified; we will do this in a minute. On the Receive shape properties, you must be sure to select True for Activate. This indicates that the orchestration will be started upon receipt of a message, rather than being called by another orchestration. If you leave it set to False, when you try to build the application you'll receive the error "You must specify at least one already-initialized correlation set for a non-activation receive that is on a non self-correlating port."

    Now you'll add ports to the orchestration. Ports specify how your orchestration will send and receive messages. Drag a port from the toolbox to the left-hand Port Surface, and the Port Configuration Wizard launches. For the first port (the receive port), enter the following information:

    - Name: ReceivePort
    - Select the port type to be used for this port: Create a new Port Type
    - Port Type Name: ReceivePortType
    - Port direction of communication: I'll always be receiving <...>
    - Port binding: Specify later

    By choosing "Specify later" you are choosing to bind the port (choose where and how it will send or receive its messages) at deployment time via the BizTalk Server Administration console. This allows you to change locations later without building and re-deploying the application.

    Next, drag a port to the right-hand Port Surface; this will be your send port. Configure it as follows:

    - Name: SendPort
    - Select the port type to be used for this port: Create a new Port Type
    - Port Type Name: SendPortType
    - Port direction of communication: I'll always be sending <...>
    - Port binding: Specify later

    Finally, drag the green arrow on the ReceivePort to the Receive_1 shape, and the green arrow on the SendPort to the Send_1 shape. Your orchestration should look like this:

    Now you have a couple of final steps before building and deploying the application. In the Solution Explorer, right-click on OrchestrationPortDemo and select Properties. On the Signing tab, click "Sign the assembly", and choose <New...> from the drop-down. Enter DemoKey as the Key file name, and deselect "Protect my key file with a password". This will create the file DemoKey.snk in your solution. Signing the assembly gives it a strong name so that it can be deployed into the global assembly cache (GAC). Next, click the Deployment tab, and enter OrchestrationPortDemo as the Application Name. Save your solution.

    Click "Build OrchestrationPortDemo". Your solution should (hopefully!) build with no errors. Click "Deploy OrchestrationPortDemo". (Note - If you're running Server 2008, Vista or Win7, you may get an error message. If so, close Visual Studio and run it as an administrator.)

    That's it! Your application is ready to be configured and fired up in the BizTalk Server Administration console, so stay tuned!

    Read the article

  • jqGrid custom formatter

    - by great_llama
    For one of the columns in my jqGrid, I'm providing a custom formatter function. I'm handling some special cases, but when those conditions aren't met, I'd like to fall back to the built-in date formatter utility method. It doesn't seem that I'm getting the right combination of $.extend() to create the options that method is expecting.

    My colModel for this column:

        { name:'expires', index:'7', width:90, align:"right", resizable: false,
          formatter: expireFormat,
          formatoptions: {srcformat:"l, F d, Y g:i:s A", newformat:"n/j/Y"} },

    And an example of what I'm trying to do:

        function expireFormat(cellValue, opts, rowObject) {
            if (cellValue == null || cellValue == 1451520000) { // a specific date that should show as blank
                return '';
            } else {
                // here is where I'd like to just call $.fmatter.util.DateFormat
                var dt = new Date(cellValue * 1000);
                var op = $.extend({}, opts.date);
                if (!isUndefined(opts.colModel.formatoptions)) {
                    op = $.extend({}, op, opts.colModel.formatoptions);
                }
                return $.fmatter.util.DateFormat(op.srcformat, dt, op.newformat, op);
            }
        }

    (An exception is being thrown in the guts of that DateFormat method; it looks like it happens where it tries to read a masks property of the options that get passed in.)

    EDIT: The $.extend that put everything in the right place was getting it from the global property where the i18n library set it, $.jgrid.formatter.date:

        var op = $.extend({}, $.jgrid.formatter.date);
        if (!isUndefined(opts.colModel.formatoptions)) {
            op = $.extend({}, op, opts.colModel.formatoptions);
        }
        return $.fmatter.util.DateFormat(op.srcformat, dt.toLocaleString(), op.newformat, op);

    Read the article

  • Using parameters in reports for Visual Studio 2008

    - by Jim Thomas
    This is my first attempt to create a Visual Studio 2008 report using parameters. I have created the dataset and the report. If I run it with a hard-coded filter on a column, the report runs fine. When I change the filter to '?' I keep getting this error:

        No overload for method 'Fill' takes '1' argument

    Obviously I am missing some way to connect the parameter on the dataset to a report parameter. I have defined a report parameter using the Report/Report Parameter screen. But how does that report parameter get tied to the dataset table parameter? Is there a special naming convention for the parameter?

    I have Googled this a half dozen times and read the MSDN documentation, but the examples all seem to use a different approach (like creating a SQL query rather than a table-based dataset) or entering the parameter name as "=Parameters!name.value", and I can't figure out where to do that. One MSDN example suggested I needed to create some C# code using a SetParameters() method to make the connection. Is that how it is done? If anyone can recommend a good walk-through I'd appreciate it.

    Edit: After more reading it appears I don't need report parameters at all. I am simply trying to add a parameter to the database query. So I would create a text box on the form, get the user's input, then apply that parameter programmatically to the Fill() argument list. The report parameter, on the other hand, is an ad-hoc value generally entered by a user that you want to appear on the report. But there is no relationship between report parameters and query/dataset parameters. Is that correct?

    Read the article

  • WPF application with MS Access database as a data source

    - by Kay Zed
    I have a Microsoft Access 2010 database. Now, using Visual Studio 2010, I want to create a WPF application and add the database as a data source. The app will have a window with a frame that provides navigation through pages. No problem so far. But:

    - What is the right way to set up the database in this scenario? Tables only? Or must everything go via queries? (VS2010 talks about views, which I assume (?) are queries.)
    - Database data must be updatable and records can be added. Some relationships go through link tables (many-to-many) and there are nullable foreign key relationships. Must I take manual steps to make it work?
    - While adding the data source, VS2010 created an xsd from my Access database. I think the xsd might need further tweaking for the application to work the right way. If I change my Access database design, I'd have to regenerate the xsd as well. Is this right, and is it the way it is usually done? OR, should I let the original Access database go and give the application the capability to create new empty databases?
    - How do you provide controls in a page to step through the records in a table? Is there a special database control?
    - What is the way (WPF class?) to load records into the data context that displays in a page? (At this level it probably does not matter what type of data source it is.)

    Read the article

  • Entity Framework with MySQL - Timeout Expired while Generating Model

    - by Nathan Taylor
    I've constructed a database in MySQL and I am attempting to map it out with Entity Framework, but I start running into "GenerateSSDLException"s whenever I try to add more than about 20 tables to the EF context:

        An exception of type 'Microsoft.Data.Entity.Design.VisualStudio.ModelWizard.Engine.ModelBuilderEngine+GenerateSSDLException'
        occurred while attempting to update from the database. The exception message is: 'An error occurred while executing the
        command definition. See the inner exception for details.'
        Fatal error encountered during command execution. Timeout expired. The timeout period elapsed prior to completion of the
        operation or the server is not responding.

    There's nothing special about the affected tables, and it's never the same table(s); it's just that after a certain (unspecified) number of tables have been added, the context can no longer be updated without the "Timeout expired" error. Sometimes it's only one table left over, and sometimes it's three; results are pretty unpredictable. Furthermore, the variance in the number of tables which can be added before the error suggests to me that the problem lies in the size of the query being generated to update the context, which includes both the existing table definitions and the new tables being added. Essentially, the SQL query is getting too large and failing to execute for some reason.

    If I generate the model with EdmGen2 it works without any errors, but the generated EDMX file cannot be updated within Visual Studio without producing the aforementioned exception. In all likelihood the source of this problem lies in the tool within Visual Studio, given that EdmGen2 works fine, but I'm hoping that others could offer some advice on how to approach this very unique issue, because it seems like I'm not the only person experiencing it. One suggestion a colleague offered was maintaining two separate EDMX files with some table crossover, but that seems like a pretty ugly fix in my opinion. I suppose this is what I get for trying to use "new technology". :(

    Read the article

  • uploading large xml to WCF REST service -> 400 Bad request

    - by glenn.danthi
    I am trying to upload large XML files to a REST service. I have tried almost all methods specified on Stack Overflow and Google, but I still can't find out where I am going wrong: I cannot upload a file greater than 64 KB!

    I have specified the maxRequestLength:

        <httpRuntime maxRequestLength="65536"/>

    and my binding config is as follows:

        <bindings>
          <webHttpBinding>
            <binding name="RESTBinding"
                     maxBufferSize="67108864"
                     maxReceivedMessageSize="67108864"
                     openTimeout="00:10:00"
                     receiveTimeout="00:10:00"
                     sendTimeout="00:10:00">
              <readerQuotas maxDepth="2147483647"
                            maxStringContentLength="2147483647"
                            maxArrayLength="2147483647"
                            maxBytesPerRead="2147483647"
                            maxNameTableCharCount="2147483647"/>
            </binding>
          </webHttpBinding>
        </bindings>

    On the C# client side I am doing the following:

        WebRequest request = HttpWebRequest.Create(@"http://localhost.:2381/RepositoryServices.svc/deviceprofile/AddDdxml");
        request.Credentials = new NetworkCredential("blah", "blah");
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.ContentLength = byteArray.LongLength;
        using (Stream postStream = request.GetRequestStream())
        {
            postStream.Write(byteArray, 0, byteArray.Length);
        }

    There is no special configuration done on the client side.

    Read the article

  • Spring AOP AfterThrowing vs. Around Advice

    - by whiskerz
    Hey there,

    when trying to implement an aspect that is responsible for catching and logging a certain type of error, I initially thought this would be possible using the AfterThrowing advice. However, it seems that this advice doesn't catch the exception, but just provides an additional entry point to do something with it. The only advice which would also catch the exception in question would then be an Around advice - either that, or I did something wrong. Can anyone confirm that if I want to actually catch the exception, I have to use an Around advice?

    The configuration I used follows:

        @Pointcut("execution(* test.simple.OtherService.print*(..))")
        public void printOperation() {}

        @AfterThrowing(pointcut = "printOperation()", throwing = "exception")
        public void logException(Throwable exception) {
            System.out.println(exception.getMessage());
        }

        @Around("printOperation()")
        public void swallowException(ProceedingJoinPoint pjp) throws Throwable {
            try {
                pjp.proceed();
            } catch (Throwable exception) {
                System.out.println(exception.getMessage());
            }
        }

    Note that in this example I caught all exceptions, because it is just an example. I know it's bad practice to just swallow all exceptions, but for my current use case I want one special type of exception to be just logged while avoiding duplicate logging logic.
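
    To make the difference concrete, here is a minimal sketch of a target class matching the test.simple.OtherService.print* pointcut - only the package and class name come from the pointcut above, the method body is assumed; it would be registered as a Spring bean so the aspect can proxy it:

        package test.simple;

        public class OtherService {
            public void printReport() {
                // Any print* method will match the pointcut; this one always fails.
                throw new IllegalStateException("boom");
            }
        }

    When printReport() is invoked through the Spring proxy, the @AfterThrowing advice runs its logging and the IllegalStateException still propagates to the caller: after-throwing advice only observes the exception, it never handles it. With the @Around advice active, proceed() is wrapped in try/catch, so the caller sees a normal return instead of the exception. So the observed behavior is by design: to actually swallow the exception you need around advice.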

    Read the article

  • Mimicking the UltraGridColumnChooser's drag & drop ability

    - by Sören Kuklau
    (Infragistics 2008 Vol. 3, CLR 2.0)

    Infragistics's UltraGrid comes with a column chooser user control, which is simply a vertical arrangement of columns with checkboxes that toggle a column's hidden state. In addition, it allows you to pick a column and drag it directly to the grid so you don't have to manually position it afterwards. (This is particularly handy when you already have a lot of visible columns and have no clue where the new one ended up.)

    I'm building my own column chooser based on an UltraTree. Getting the checkboxes to behave the same wasn't an issue, but I haven't found a way to drag a column from the tree to the grid and have it accept it. In my tree, each UltraTreeNode has a Tag with the following struct:

        Private Structure DraggableGridColumn
            Public NodeKey As String
            Public NodeName As String
            Public ParentKey As String
            Public Column As UltraGridColumn
        End Structure

    I then have an event as follows:

        Private Sub columnsTree_SelectionDragStart(ByVal sender As Object, ByVal e As System.EventArgs) Handles columnsTree.SelectionDragStart
            If columnsTree.SelectedNodes.Count <> 1 Then
                Return
            End If
            If Not TypeOf columnsTree.SelectedNodes(0).Tag Is DraggableGridColumn Then
                Return
            End If

            Dim column As UltraGridColumn = CType(columnsTree.SelectedNodes(0).Tag, DraggableGridColumn).Column
            columnsTree.DoDragDrop(column, DragDropEffects.All)
        End Sub

    In the DoDragDrop call, neither column (of type UltraGridColumn) nor column.Header (of type ColumnHeader) gets accepted by the grid. I assume I'm sending the wrong type, and/or that the grid expects a special struct with some additional information. Unfortunately, I've also failed to catch an event (both on the column chooser side as well as on the grid side) where Infragistics's normal column chooser does this properly; the normal drag & drop events never seem to fire.

    Read the article

  • Sitecore not resolving rich text editor URLs in page renders

    - by adam
    Hi,

    We're having issues inserting links into rich text in Sitecore 6.1.0. When a link to a Sitecore item is inserted, it is output as:

        http://domain/~/link.aspx?_id=8A035DC067A64E2CBBE2662F6DB53BC5&_z=z

    rather than the actual resolved URL:

        http://domain/path/to/page.aspx

    This article confirms that this should be resolved in the render pipeline:

        in Sitecore 6 it inserts a specially formatted link that contains the Guid of the item you want to link to,
        then when the item is rendered the special link is replaced with the actual link to the item

    The pipeline has the ShortenLinks processor added in web.config:

        <convertToRuntimeHtml>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.PrepareHtml, Sitecore.Kernel"/>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.ShortenLinks, Sitecore.Kernel"/>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.SetImageSizes, Sitecore.Kernel"/>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.ConvertWebControls, Sitecore.Kernel"/>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.FixBullets, Sitecore.Kernel"/>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.FinalizeHtml, Sitecore.Kernel"/>
        </convertToRuntimeHtml>

    So I really can't see why links are still rendering in ID format rather than as full SEO-tastic URLs. Anyone got any clues?

    Thanks,
    Adam

    Read the article

  • Binding "Text-Property" of a derived textbox to another textbox doesn't work

    - by Jehof
    Hello,

    I have a class 'MyTextBox' that derives from the default TextBox in Silverlight. This class currently contains no additional code. I set up a binding in XAML to bind the Text property of MyTextBox to another TextBox, to reflect the input made in that TextBox. The effect is that MyTextBox doesn't update and doesn't display the text of the other TextBox. Additionally, I made an identical binding for a normal TextBox, and that one works.

    Here's the XAML for the bindings:

        <UserControl x:Class="Silverlight.Sample.Dummy"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            xmlns:my="clr-namespace:Sample"
            mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400">

            <StackPanel x:Name="LayoutRoot" Background="White">
                <TextBox Height="23" x:Name="textBox2" Width="120" />
                <TextBox Text="{Binding ElementName=textBox2, Path=Text, Mode=TwoWay}" Width="120" />
                <my:NumberTextBox Width="120" Text="{Binding ElementName=textBox2, Path=Text, Mode=OneWay}" />
            </StackPanel>
        </UserControl>

    Is there something special to set for the binding when I derive from a control?

    PS: I tried a binding to a dummy object with INotifyPropertyChanged and set it as DataContext for the existing TextBoxes. This test works as expected and my derived TextBox gets updated.

    Read the article

  • How can I beta test web Perl modules under Apache/mod_perl on production web server?

    - by DVK
    We have a setup where most code, before being promoted to full production, is deployed in BETA mode - meaning it runs in the full production environment (using the production database - usually production data - and the production web server). We call that stage BETA testing. One of the main requirements is that BETA code promotion to production must be a simple "cp" command from the beta directory to the production directory - no code/filename changes.

    For non-web Perl code, achieving seamless BETA testing is quite doable (see details here):

    - Perl programs live in a standard location under the production root (/usr/code/scripts), with production Perl modules living under the same root (/usr/code/lib/perl).
    - The BETA code has 100% the same code paths, except under the beta root (/usr/code/beta/).
    - A special module manipulates the @INC of any script based on whether the script was called from /usr/code/scripts or /usr/code/test/scripts, to include the beta libraries for beta scripts.

    This setup works fine up until we need to beta test our web Perl code (the setup is EmbPerl and Apache/mod_perl). The hang-up is as follows: if both a production Perl module and a BETA Perl module have the same name (e.g. /usr/code/lib/perl/MyLib1.pm and /usr/code/beta/lib/perl/MyLib1.pm), then mod_perl will only be able to load ONE of these modules into memory - and there's no way we are aware of for a particular web page to affect which version of the module is currently loaded, due to concurrency issues.

    Leaving aside the obvious non-programming solution (get a bloody BETA web server), which for political/organizational reasons is not feasible, is there any way we can somehow hack around this problem in either Perl or mod_perl? I played around with various approaches to unloading Perl modules that %INC has listed, but the problem remains that another user might load a beta page at just the right (or rather wrong) moment and have the beta module loaded, which will then be used for my production page.

    Read the article

  • Measuring "real" phone signal strength on a mobile phone

    - by Serafeim
    I want to programmatically measure the phone signal strength on a mobile phone. I don't actually care about the mobile phone or the programming environment: it can be based on Android or Windows Mobile or even J2ME, and can be from any manufacturer (please no iPhone). However, it needs to be a real, commercial mobile phone and not a special measurement device.

    This problem is not as easy as it seems at first look. I am aware that there already exist a number of methods that claim to return the phone signal strength. Some of these are:

    - SystemState.PhoneSignalStrength for WM6
    - RIL_GetCellTowerInfo for WinCE (the dwRxLevel member of the returned RILCELLTOWERINFO struct)
    - android.telephony.NeighboringCellInfo.getRssi() for Android

    The problem with the above is that they only return a few (under 10) discrete values, meaning that, for instance, the return values of SystemState.PhoneSignalStrength can only be translated to (for instance) -100 dBm, -90 dBm, -80 dBm, -70 dBm and -60 dBm, something that is not useful for my application, since I'd like to have as much precision as possible. It doesn't matter if there is an undocumented solution that only works on one phone; if you can tell me a way, I'd be grateful. Thanks in advance.
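
    For comparison, here is a minimal sketch of the standard Android route of that era (the listener API available from roughly API level 7), which also illustrates the granularity problem: the GSM value arrives as an ASU index of 0-31, i.e. roughly 2 dBm steps at best. The dBm conversion shown is the conventional GSM mapping, not something the platform returns directly, and the class/method names around it are illustrative:

        import android.content.Context;
        import android.telephony.PhoneStateListener;
        import android.telephony.SignalStrength;
        import android.telephony.TelephonyManager;
        import android.util.Log;

        public class SignalWatcher {
            public static void start(Context context) {
                TelephonyManager tm =
                        (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
                tm.listen(new PhoneStateListener() {
                    @Override
                    public void onSignalStrengthsChanged(SignalStrength ss) {
                        int asu = ss.getGsmSignalStrength(); // 0..31, or 99 if unknown (TS 27.007 AT+CSQ scale)
                        if (asu >= 0 && asu <= 31) {
                            int dbm = -113 + 2 * asu;        // conventional GSM mapping: ~2 dBm quantization
                            Log.d("Signal", asu + " asu ~ " + dbm + " dBm");
                        }
                    }
                }, PhoneStateListener.LISTEN_SIGNAL_STRENGTHS);
            }
        }

    Even here the radio reports in quantized ASU steps, so per-dBm precision is generally a radio/RIL limitation rather than something a different API call can unlock.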

    Read the article

  • Create Text File Without BOM

    - by balexandre
    Hi guys,

    I tried this approach without any success. The code I'm using:

        // File name
        String filename = String.Format("{0:ddMMyyHHmm}", dtFileCreated);
        String filePath = Path.Combine(Server.MapPath("App_Data"), filename + ".txt");

        // Process
        ProcessPBS pbs = new ProcessPBS();
        pbs.CreatePBSInfoFile(pbslist, Convert.ToInt64(filename));

        // Save file
        // Note: the UTF8Encoding constructor argument is encoderShouldEmitUTF8Identifier -
        // passing true asks for a BOM; pass false for UTF-8 without one.
        Encoding utf8WithoutBom = new UTF8Encoding(true);
        TextWriter tw = new StreamWriter(filePath, false, utf8WithoutBom);
        foreach (string s in pbs.GeneratedFile.ToArray())
            tw.WriteLine(s);
        tw.Close();

        // Push generated file to the client
        Response.Clear();
        Response.ContentType = "application/vnd.text";
        Response.AppendHeader("Content-Disposition", "attachment; filename=" + filename + ".txt");
        Response.TransmitFile(filePath);
        Response.End();

    The result: it's writing the BOM no matter what, and special chars (like Æ Ø Å) are not correct :-/

    I'm stuck! I can't create a file using UTF-8 as Encoding and 8859-1 as CharSet :-( Anyone can help me find the light in the tunnel? All help is greatly appreciated, thank you!

    Read the article

  • Python ImportError when executing 'import.py', but not when executing 'python import.py'

    - by Martin Del Vecchio
    I am running Cygwin Python version 2.5.2. I have a three-line source file, called import.py:

        #!/usr/bin/python
        import xml.etree.ElementTree as ET
        print "Success!"

    When I execute "python import.py", it works:

        C:\Temp>python import.py
        Success!

    When I run the Python interpreter and type the commands, it works:

        C:\Temp>python
        Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
        [GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> #!/usr/bin/python
        ... import xml.etree.ElementTree as ET
        >>> print "Success!"
        Success!
        >>>

    But when I execute "import.py", it does not work:

        C:\Temp>which python
        /usr/bin/python

        C:\Temp>import.py
        Traceback (most recent call last):
          File "C:\Temp\import.py", line 2, in ?
            import xml.etree.ElementTree as ET
        ImportError: No module named etree.ElementTree

    When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux. Any ideas? Thanks.

    Read the article

  • How to populate Java (web) application with initial data using Spring/JPA/Hibernate

    - by Tuukka Mustonen
    I want to set up my database with initial data programmatically. I want to populate my database for development runs, not for testing runs (that's easy). The product is built on top of Spring and JPA/Hibernate. The workflow:

    1. Developer checks out the project.
    2. Developer runs a command/script to set up the database with initial data.
    3. Developer starts the application (server) and begins developing/testing.
    4. Developer runs a command/script to flush the database and set it up with new initial data, because the database structures or the initial data bundle changed.

    What I want is to set up my environment by the required parts in order to call my DAOs and insert new objects into the database. I do not want to create initial data sets in raw SQL or XML, take dumps of the database, or whatever. I want to programmatically create objects and persist them in the database as I would in normal application logic.

    One way to accomplish this would be to start up my application normally and run a special servlet that does the initialization. But is that really the way to go? I would love to execute the initial data setup as a Maven task, and I don't know how to do that if I take the servlet approach.

    There is a somewhat similar question. I took a quick glance at the suggested DBUnit and Unitils, but they seem to be heavily focused on setting up testing environments, which is not what I want here. DBUnit does initial data population, but only using XML/CSV fixtures, which is not what I'm after. Then, Maven has an SQL plugin, but I don't want to handle raw SQL. Maven also has a Hibernate plugin, but it seems to help only with Hibernate configuration and table schema creation (not with populating the db with data). How to do this?
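
    One pattern that fits the "plain objects, runnable from Maven" requirement is a standalone seeder class with a main() method that bootstraps JPA itself and persists entities through the normal API. A minimal sketch, assuming a persistence unit named "dev" and a User entity - both hypothetical names:

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class DevDataSeeder {
            public static void main(String[] args) {
                // Bootstraps JPA from META-INF/persistence.xml; "dev" is an assumed unit name.
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("dev");
                EntityManager em = emf.createEntityManager();
                try {
                    em.getTransaction().begin();

                    // Create initial data exactly as application code would.
                    User admin = new User();   // hypothetical entity from the application's domain model
                    admin.setName("admin");
                    em.persist(admin);

                    em.getTransaction().commit();
                } finally {
                    em.close();
                    emf.close();
                }
            }
        }

    Such a class can then be wired to a Maven goal, e.g. exec-maven-plugin's exec:java with mainClass pointing at DevDataSeeder, so "mvn exec:java" becomes the setup script. If the seeding should go through Spring-managed DAOs instead of a bare EntityManager, the same main() can create a ClassPathXmlApplicationContext and fetch the DAO beans from it before persisting.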

    Read the article

  • Getting fields_for and accepts_nested_attributes_for to work with a belongs_to relationship

    - by Billy Gray
    I cannot seem to get a nested form to generate in a Rails view for a belongs_to relationship using the new accepts_nested_attributes_for facility of Rails 2.3. I did check out many of the resources available, and it looks like my code should be working, but fields_for explodes on me, and I suspect it has something to do with how I have the nested models configured. The error I hit is a common one that can have many causes:

        '@account[owner]' is not allowed as an instance variable name

    Here are the two models involved:

        class Account < ActiveRecord::Base
          # Relationships
          belongs_to :owner, :class_name => 'User', :foreign_key => 'owner_id'
          accepts_nested_attributes_for :owner

          has_many :users
        end

        class User < ActiveRecord::Base
          belongs_to :account
        end

    Perhaps this is where I am doing it 'rong', as an Account can have an 'owner' and many 'users', but a user only has one 'account', based on the user model's account_id key.

    This is the view code in new.html.haml that blows up on me:

        - form_for :account, :url => account_path do |account|
          = account.text_field :name
          - account.fields_for :owner do |owner|
            = owner.text_field :name

    And this is the controller code for the new action:

        class AccountsController < ApplicationController
          # GET /account/new
          def new
            @account = Account.new
          end
        end

    When I try to load /account/new I get the following exception:

        NameError in Accounts#new
        Showing app/views/accounts/new.html.haml where line #63 raised:
        @account[owner] is not allowed as an instance variable name

    If I try to use the mysterious 'build' method, it just bombs out in the controller, perhaps because build is just for multi-record relationships:

        class AccountsController < ApplicationController
          # GET /account/new
          def new
            @account = Account.new
            @account.owner.build
          end
        end

        You have a nil object when you didn't expect it!
        The error occurred while evaluating nil.build

    If I try to set this up using @account.owner_attributes = {} in the controller, or @account.owner = User.new, I'm back to the original error, "@account[owner] is not allowed as an instance variable name".

    Does anybody else have the new accepts_nested_attributes_for method working with a belongs_to relationship? Is there something special or different you have to do? All the official examples and sample code (like the great stuff over at Ryan's Scraps) are concerned with multi-record associations.

    Read the article

  • UIBarButtonItem with custom image and no border

    - by mongeta
    Hello,

    I want to create a UIBarButtonItem with a custom image, but I don't want the border that the iPhone adds, as my image has a special border. It's the same as the back button, but a forward button. This app is for an in-house project, so I don't care if Apple rejects or approves it or likes it :-)

    If I use the initWithCustomView: initializer of UIBarButtonItem, I can do it:

        UIImage *image = [UIImage imageNamed:@"right.png"];

        UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
        [button setBackgroundImage:[image stretchableImageWithLeftCapWidth:7.0 topCapHeight:0.0]
                          forState:UIControlStateNormal];
        [button setBackgroundImage:[[UIImage imageNamed:@"right_clicked.png"] stretchableImageWithLeftCapWidth:7.0 topCapHeight:0.0]
                          forState:UIControlStateHighlighted];
        button.frame = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
        [button addTarget:self action:@selector(AcceptData) forControlEvents:UIControlEventTouchUpInside];

        UIView *v = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, image.size.width, image.size.height)];
        [v addSubview:button];

        UIBarButtonItem *forward = [[UIBarButtonItem alloc] initWithCustomView:v];
        self.navigationItem.rightBarButtonItem = forward;

        [v release];
        // note: no [image release] here - the object returned by imageNamed: is
        // autoreleased, so releasing it as well would over-release it

    This works, but if I have to repeat this process in 10 views, this is not DRY. I suppose I have to subclass, but what? UIView? UIBarButtonItem?

    thanks, regards,

    Read the article

  • Password Recovery without sending password via email

    - by Brian
    So, I've been playing with asp:PasswordRecovery and discovered I really don't like it, for several reasons:

    1) Alice's password can be reset even without having access to Alice's email. A security question for password resets mitigates this, but does not really satisfy me.

    2) Alice's new password is sent back to her in cleartext. I would rather send her a special link to my page (e.g. a page like example.com/recovery.aspx?P=lfaj0831uefjc), which would let her change her password.

    I imagine I could do this myself by creating some sort of table of expiring password recovery pages and sending those pages to users who asked for a reset. Somehow those pages could also change user passwords behind the scenes (e.g. by resetting them manually and then using the text of the new password to change the password, since a password cannot be changed without knowing the old one). I'm sure others have had this problem before, and that kind of solution strikes me as a little hacky; a sketch of the idea follows below. Is there a better way to do this? An ideal solution does not violate encapsulation by accessing the database directly, but instead uses the existing stored procedures within the database... though that may not be possible.
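
    For what it's worth, here is a minimal sketch of the expiring-token table idea (shown in Java purely to illustrate the scheme - the original question is ASP.NET; the class, names, and one-hour lifetime are all assumptions, and a real implementation would keep the tokens in a database table rather than in memory):

        import java.security.SecureRandom;
        import java.time.Duration;
        import java.time.Instant;
        import java.util.Base64;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class ResetTokens {
            private static final SecureRandom RNG = new SecureRandom();
            private static final Duration LIFETIME = Duration.ofHours(1);

            private static class Entry {
                final String userId;
                final Instant expires;
                Entry(String userId, Instant expires) { this.userId = userId; this.expires = expires; }
            }

            // token -> pending reset; stands in for the "expiring recovery pages" table
            private final Map<String, Entry> pending = new ConcurrentHashMap<>();

            /** Creates an unguessable token and records who it belongs to and when it expires. */
            public String issue(String userId) {
                byte[] raw = new byte[16];
                RNG.nextBytes(raw);
                String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
                pending.put(token, new Entry(userId, Instant.now().plus(LIFETIME)));
                return token; // embed in the mailed link, e.g. .../recovery.aspx?P=<token>
            }

            /** Returns the user id if the token is live; consumes it either way (single use). */
            public String redeem(String token) {
                Entry e = pending.remove(token);
                return (e != null && Instant.now().isBefore(e.expires)) ? e.userId : null;
            }
        }

    The recovery page would call redeem() and, on success, show Alice a change-password form; since the membership API only changes a password when the old one is known, that page would perform the reset-then-change dance described above behind the scenes.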

    Read the article

  • C - check if string is a substring of another string

    - by user69514
    I need to write a program that takes two strings as arguments and checks if the second one is a substring of the first one. I need to do it without using any special library functions. I created this implementation, but I think it's always returning true as long as there is one letter that's the same in both strings. Can you help me out here? I'm not sure what I am doing wrong:

        #include <stdio.h>
        #include <string.h>

        int my_strstr(char const *s, char const *sub)
        {
            char const *ret = sub;
            int r = 0;
            while ((ret = strchr(ret, *sub)))
            {
                if (strcmp(++ret, sub + 1) == 0) {
                    r = 1;
                } else {
                    r = 0;
                }
            }
            return r;
        }

        int main(int argc, char **argv)
        {
            if (argc != 3) {
                printf("Usage: check <string one> <string two>\n");
            }

            int result = my_strstr(argv[1], argv[2]);
            if (result == 1) {
                printf("%s is a substring of %s\n", argv[2], argv[1]);
            } else {
                printf("%s is not a substring of %s\n", argv[2], argv[1]);
            }

            return 0;
        }

    Read the article
