Search Results

Search found 23427 results on 938 pages for 'christopher done'.


  • hibernate and ehcache replication

    - by cachingsol
    Hi, I am working on a cache replication solution between two nodes. Node A is the master node: Hibernate + database + Ehcache as a second-level cache. Node B is a regional node: Ehcache as the primary cache, no Hibernate. Node B is used only as a nearby cache for queries. When I update data (say StudentInfo) in Node A, it gets persisted and cached correctly. On the replication side (I am using JMS) it sends a message to Node B. The problem is that the message it sends is an instance of CacheEntry (deep inside the Element), and there is no way to resurrect the original object (StudentInfo). What I get in Node B is a CacheEntry with some attributes of the student, but not an actual Student object. Please note that I don't need a Hibernate session or persistence in Node B; Node B is only for fast queries, and persistence is done through Node A. Has anybody tried a solution like this? Is there any way to convert the CacheEntry to the actual object, or to tell Ehcache to replicate the original object rather than a CacheEntry? Thanks for the help.

    Read the article

  • Authenticating from a "child" application via CAS

    - by Rob Wilkerson
    I have a portal application that loads external content (widgets) via an iframe. Users log in to CAS via the portal itself. There are a few portal APIs, though, that need to be called from that external content. What information do I have to pass from the portal to the widgets, which the widgets can use to make these calls without being rejected by CAS? UPDATE: The more I investigate, the more I think my question boils down to how CAS actually does what it's supposed to do. In other words, how can I go from one site where I've authenticated to another and tell it that I've already done the authentication? What's the mechanism behind that, and how can I employ it in a web context?

    Read the article

  • The Inkremental Architect's Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspx

    The driver of software development is increments, small increments, tiny increments. An increment is a slice of the overall requirement scope thin enough to implement and get feedback on from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1]

    To make such high-frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what's supposed to get implemented by tomorrow evening at the latest is one side of the coin. The other is where to put the logic in all of the code base.

    To implement an increment, only logic statements are needed. Functionality and Quality alike are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That's all that is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It's just about the right expressions and conditional execution paths plus some memory allocation. The automatic function inlining done by compilers makes it clear how unimportant functions are for delivering value to users at runtime.

    But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory.

    That said, functions are important. They are even the pivotal element of software development. We can't code without them - whether you write a function yourself or not. Because there's always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:

        puts "Hello, world!"

    In C# more is necessary:

        class Program {
            public static void Main () {
                System.Console.Write("Hello, world!");
            }
        }

    C# makes the Entry Point function explicit, Ruby does not. But still it's there. So you can think of logic as always running in some function.

    Which brings me back to increments: in order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note (a user story) on the Scrum or Kanban board. But developers need an idea of what that sticky note means in terms of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on.

    All's well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That's why most code katas define exactly what the API of a solution should look like. It's a function, maybe two or three, not more. A requirement like "Write a function f which takes this as parameters and produces such and such output by doing x" makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider.

    Even a single function must deliver not only on Functionality, but also on Quality and Evolvability. Nevertheless, once it's clear which function to put logic in, you have a tangible starting point. So, yes, what I'm suggesting is to find a single function to put all the logic in that's necessary to deliver on the requirements of an increment. Or to put it the other way around: slice requirements in a way that each increment's logic can be located under the roof of a single function.

    Entry points

    Of course, the logic of a piece of software will always be spread across many, many functions. But there's always an Entry Point. That's the most important function for each increment, because that's the root to put integration or even acceptance tests on. A batch program like the above hello-world application has only a single Entry Point. All logic is reached from there, regardless of how deep it's nested in classes.

    But a program with a user interface has at least two Entry Points: one is the main function called upon startup. The other is the button click event handler for "Show my score". But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected; because then some logic could check if the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is closed; because then the choices made should be persisted.

    You see, an Entry Point to me is a function which gets triggered by the user of a piece of software. With batch programs that's the main function. With GUI programs on the desktop that's event handlers. With web programs that's handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: slice them in a way so that each increment is related to only one Entry Point function.[2]

    Entry Points are the "outer functions" of a program. That's where the environment triggers behavior. That's where hardware meets software. Entry Points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter.

    Collections of batch processors

    I'd thus say we haven't moved forward since the early days of software development. We're still writing batch programs. Forget about "event-driven programming" with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it's hundreds we bundle up into applications.

    Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors - large and small - form the applications the user perceives as a single whole. Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-)

    Features

    Each batch processor entered through an Entry Point delivers value to the user. It's an increment. Sometimes its logic is trivial, sometimes it's very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility.

    At the same time it's a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That's where user stories meet code.

    In this example the user story translates to the Entry Point triggered by clicking the login button on a login dialog. The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several:

    1. Get input (email address, password)
    2. Load user for email address
       2.1 If user not found, report error
    3. Check password
       3.1 Hash password
       3.2 Compare hash to hash stored in user
    4. Show next dialog

    Viewed from 10,000 feet it's all done by the Entry Point function. And of course that's technically possible. It's just a bunch of logic and calling a couple of API functions. However, I suggest taking these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features. Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g.

    - First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4.
    - Then hard code a single user and check the email address. Features 2 and 2.1.
    - Then check the password without hashing it (or use a very simple hash like the length of the password). Features 3 and 3.2.
    - Replace the hard coded user with a persistent user directory, but a very simple one, e.g. a CSV file. Refinement of Feature 2.
    - Calculate the real hash for the password. Feature 3.1.
    - Switch to the final user directory technology.

    Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you're in doubt whether you can implement the whole entry point function by tomorrow night, then just go for a couple of features or even just one.

    That's also why I think you should strive for wrapping feature logic into a function of its own. It's a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it's clear where their logic is located. And finally, of course, it lets you re-use features in different contexts (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually.

    Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which map to a hierarchy of functions - with entry points at the top. I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility to move forward in increments is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible; functions are.
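
    To make the Login() example concrete, here is a minimal C# sketch (not from the original post; User, LoadUser, CheckPassword and the trivial length-based hash are all illustrative stand-ins) of an Entry Point composed of one function per Feature:

        using System;

        class User { public string Email; public string PasswordHash; }

        class LoginBatch
        {
            // Entry Point: in a GUI this would be the login button's click handler.
            public static void Login(string email, string password)
            {
                User user = LoadUser(email);                     // Feature 2: load user for email address
                if (user == null)                                // Feature 2.1: report error if not found
                {
                    Console.WriteLine("Unknown email address.");
                    return;
                }
                if (!CheckPassword(password, user.PasswordHash)) // Feature 3: check password
                {
                    Console.WriteLine("Wrong password.");
                    return;
                }
                Console.WriteLine("Welcome!");                   // Feature 4: show next dialog
            }

            // Feature 2: starts as a hard coded user, later a CSV file, finally a real user directory.
            static User LoadUser(string email)
            {
                return email == "alice@example.com"
                    ? new User { Email = email, PasswordHash = Hash("secret") }
                    : null;
            }

            // Feature 3: 3.1 hash the password, 3.2 compare it to the stored hash.
            static bool CheckPassword(string password, string storedHash)
            {
                return Hash(password) == storedHash;
            }

            // Deliberately trivial first increment of the hash (e.g. just the length), refined later.
            static string Hash(string password)
            {
                return password.Length.ToString();
            }

            static void Main()
            {
                Login("alice@example.com", "secret");
            }
        }

    Each feature function can then be replaced or refined in its own increment without touching the Entry Point.
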
    Software production is moving forward through requirements one increment at a time, and one function at a time.

    In closing

    Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible; they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that's functions. Yes, functions, not classes nor components nor micro services. We're talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That's the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners, functions are the appropriate glue. Functions which represent increments.

    Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it's always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features. If it's hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don't yet understand the problem domain well enough? Maybe you don't yet feel comfortable with some tool or technology? Then it's time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day, until your understanding has grown to a level where you are able to define the next Spinning increment.

    [2] Sometimes even thin requirement slices will cover several Entry Points, like "Add validation of email addresses to all relevant dialogs." Validation then will be put into a dozen functions. Still, though, it's important to determine which Entry Points exactly get affected. That's much easier if you strive for keeping the number of Entry Points per increment to 1.

    [3] If you like, call Entry Point functions event handlers, because that's what they are. They all handle events of some kind, whether that's palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for the event "program started".

    Read the article

  • Using GoogleMaps with JXMapViewer

    - by npinti
    I have been searching on the web to see if I can use Google Maps with JXMapViewer. According to this, it is illegal, but the article is more than three years old. Could anyone be kind enough to tell me if I can use Google Maps with JXMapViewer? I know that Google has recently allowed desktop applications to use their static maps provided that the application is freely accessible to people on some website. If this can be done, I would appreciate some pointers to where I could start looking so that I can use Google Maps; I tried messing around with this but to no avail. Thanks in advance.

    Read the article

  • Technique for ensuring HTML- and URL-encoding

    - by JW
    Has anyone implemented a good template system for ensuring that output is properly HTML-encoded where it makes sense? Maybe even something that recognizes when output should be URL-encoded or JSON-encoded instead? The lazy approach — just encoding all inputs — causes problems when you want to send those inputs to a database, or to a block of JavaScript code. So something a little smarter is needed. The tedious approach — putting the proper encoding function around each piece of data on the template — works, but it's easy for developers to forget to do it. Is there a good approach that makes it easy for developers, and ensures that the right encoding is done? I was listening to one of the SO podcasts, and Joel tossed out an idea about using typed data to enforce a difference between HTML-encoded strings and non-encoded strings. Maybe that could be a starting point. I'm looking more for a strategy than for an implementation in a particular language (although I'd be happy to hear about implementations that already exist and work).
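
    One way to sketch the "typed data" idea Joel mentioned (a minimal illustration, not an existing library; the HtmlSafeString name and the WebUtility-based encoding on .NET 4+ are assumptions):

        using System;
        using System.Net;

        // A value of this type is known to be HTML-encoded; raw strings cannot be
        // emitted into markup by accident because the template layer only accepts it.
        public sealed class HtmlSafeString
        {
            private readonly string _encoded;
            private HtmlSafeString(string encoded) { _encoded = encoded; }

            // The only way in is through the encoder...
            public static HtmlSafeString Encode(string raw)
            {
                return new HtmlSafeString(WebUtility.HtmlEncode(raw));
            }

            // ...unless the caller explicitly vouches for already-safe markup.
            public static HtmlSafeString TrustedMarkup(string markup)
            {
                return new HtmlSafeString(markup);
            }

            public override string ToString() { return _encoded; }
        }

        class Demo
        {
            // Forgetting to encode becomes a compile-time error instead of an XSS bug.
            static void Emit(HtmlSafeString value) { Console.WriteLine(value); }

            static void Main()
            {
                string userInput = "<script>alert('x')</script>";
                Emit(HtmlSafeString.Encode(userInput));   // prints &lt;script&gt;...
                // Emit(userInput);                       // would not compile
            }
        }

    The same pattern could extend to UrlSafeString or JsonSafeString wrappers, so each output sink declares in its signature which encoding it expects.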

    Read the article

  • Update data programmatically using EntityDataSource

    - by Vinay
    Hello guys, I want to update the data programmatically in code-behind using an EntityDataSource. I have done the same thing using the Update method of a LinqDataSource. Sample code snippet:

        int id = Convert.ToInt32(e.CommandArgument);
        ListDictionary keyValues = new ListDictionary();
        ListDictionary newValues = new ListDictionary();
        ListDictionary oldValues = new ListDictionary();
        keyValues.Add("ReviewID", id);
        oldValues.Add("IsActive", "IsActive");
        newValues.Add("IsActive", "false");
        GridDataSource.Update(keyValues, newValues, oldValues);

    Can I achieve the same thing using EntityDataSource? Thanks, Vinay
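
    One possible workaround (a sketch, not the EntityDataSource API itself; MyEntities, Reviews, ReviewID, IsActive and ReviewsGrid are assumed names for the model and grid): skip the data source control and apply the same change directly through the Entity Framework context in code-behind, then rebind the grid:

        // Requires: using System; using System.Linq; using System.Web.UI.WebControls;
        protected void ReviewsGrid_RowCommand(object sender, GridViewCommandEventArgs e)
        {
            if (e.CommandName != "Deactivate") return;

            int id = Convert.ToInt32(e.CommandArgument);
            using (var context = new MyEntities())
            {
                // Same change the ListDictionary version made: flip IsActive for one review.
                var review = context.Reviews.First(r => r.ReviewID == id);
                review.IsActive = false;
                context.SaveChanges();
            }
            ReviewsGrid.DataBind();   // refresh the bound grid
        }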

    Read the article

  • SignalR cross domain does not work after updating to 0.5.1

    - by jlp
    My site uses SignalR to communicate cross-domain. It worked great, but I updated SignalR from 0.5.0 to 0.5.1 and my site broke. Here's my script:

        function start() {
            $.connection.hub.url = "http://otherdomain.com/signalr";
            $.connection.hub.start().done(function () {
                $.connection.myHub.join();
            });
        }

    When the script calls join, I see in Firebug that it is a POST (I believe it should be a GET (JSONP) since it is cross-domain) with no response. EDIT: I tried $.connection.hub.start({jsonp: true}). Now I can call the server from the client, but calls from the server don't execute on the client. I noticed that there are the following calls: negotiate and send, whereas locally (on the same domain) there are: negotiate, connect, send.

    Read the article

  • How do I read 64-bit Registry values from VBScript running as an MSI post-installation task?

    - by Joergen Bech
    I need to read the location of the Temporary ASP.NET Files folder from VBScript as part of a post-installation task in an installer created using a Visual Studio 2008 deployment project. I thought I would do something like this:

        Set oShell = CreateObject("Wscript.Shell")
        strPath = oShell.RegRead("HKLM\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0\Path")

    and then concatenate strPath with "\Temporary ASP.NET Files" and be done with it. On an x64 system, however, I am getting the value from the WOW6432Node (HKLM\SOFTWARE\Wow6432Node\Microsoft\ASP.NET\2.0.50727.0), which gives me the 32-bit framework path (C:\Windows\Microsoft.NET\Framework\v2.0.50727), but on an x64 system, I actually want the 64-bit path, i.e. C:\Windows\Microsoft.NET\Framework64\v2.0.50727. I understand that this happens because the .vbs file is run using the 32-bit script host due to the parent process (the installer) being 32-bit itself. How can I run the script using the 64-bit script host - or - how can I read the 64-bit values even if the script is run using the 32-bit script host?

    Read the article

  • WCF Data Services implementation strategies.

    - by Nix
    Microsoft has done a savvy job of not outlining the actual place for data services in the wonderful world of SOA/web dev. So my question is simple: are WCF Data Services designed to be used by clients? Or has anyone ever heard of someone using them on the server side? Simple scenario: a general layered architecture using business objects (BO); parentheses indicate what is being passed between layers:

        (XML) WCF Service - (BO) Business Logic - (BO) DAO - Entity Framework

    or, using Data Services, where DS BO are modeled business entities to be used in the data service:

        (XML) WCF Service - (BO) Business Logic - (BO) WCF Data Service - (DS BO) Server

    I can't see a use for the latter, unless there are going to be a lot of cases where people would be accessing your data via your Data Service layer rather than the service layer? Thoughts, anyone? I have not seen any mention of using DS from within a service layer.
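
    For reference, this is roughly what the WCF Data Service layer in the second diagram amounts to (a minimal sketch assuming an Entity Framework context named MyEntities, using System.Data.Services on .NET 4): it exposes the model over OData for clients to query, which is the client-facing scenario it was designed for.

        using System.Data.Services;
        using System.Data.Services.Common;

        public class ProductsDataService : DataService<MyEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                // Read-only access to every entity set; lock this down per set in a real system.
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Calling such a service from inside another service layer is technically possible via the generated client context, but as the question suggests, it mostly pays off when external clients need direct, queryable access to the data.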

    Read the article

  • Test if a Property is not Null before Returning

    - by DaveDev
    I have the following property:

        public MyType MyProperty { get; set; }

    I want to change this property so that if the value is null, it'll populate the value first and then return it... but without using a private member variable. For instance, if I was doing this:

        public MyType MyProperty
        {
            get
            {
                if (_myProperty != null)
                    return _myProperty;
                else
                {
                    _myProperty = XYZ;
                    return _myProperty;
                }
            }
            set { _myProperty = value; }
        }

    Is this possible? Or do I need the member variable to get it done?
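
    A backing field is still required for this kind of lazy initialization, but the getter can be collapsed; on .NET 4, Lazy<T> hides the null check entirely. A small sketch (CreateDefault() stands in for whatever "XYZ" is in the question):

        using System;

        class MyType { }

        class Holder
        {
            // Null-coalescing version: one field, one line in the getter.
            private MyType _myProperty;
            public MyType MyProperty
            {
                get { return _myProperty ?? (_myProperty = CreateDefault()); }
                set { _myProperty = value; }
            }

            // Lazy<T> version: the "initialized yet?" bookkeeping lives inside Lazy<T>.
            private readonly Lazy<MyType> _lazyProperty = new Lazy<MyType>(CreateDefault);
            public MyType MyLazyProperty
            {
                get { return _lazyProperty.Value; }
            }

            // Stands in for the "XYZ" initialization from the question.
            private static MyType CreateDefault() { return new MyType(); }
        }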

    Read the article

  • Get timestamp from Authenticode Signed files in .NET

    - by SlavaGu
    We need to verify that binary files are signed properly with a digital signature (Authenticode). This can be achieved with signtool.exe pretty easily. However, we need an automatic way that also verifies the signer name and timestamp. This is doable in native C++ with the CryptQueryObject() API as shown in this wonderful sample: How To Get Information from Authenticode Signed Executables. However, we live in a managed world :) hence we are looking for a C# solution to the same problem. The straight approach would be to P/Invoke Crypt32.dll and all is done. But there is a similar managed API in the System.Security.Cryptography.X509Certificates namespace. The X509Certificate2 class seems to provide some information, but no timestamp. Now we come to the original question: how can we get the timestamp of a digital signature in C#?
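
    For the signer-name part, the managed API does get you this far (a sketch; pass the path of an Authenticode-signed file as the first argument). The countersignature timestamp, however, is not exposed here, which is exactly the gap the question describes; that part still needs Crypt32 via P/Invoke:

        using System;
        using System.Security.Cryptography.X509Certificates;

        class SignerInfoDump
        {
            static void Main(string[] args)
            {
                string path = args[0];   // an Authenticode-signed binary

                // Extracts the signing certificate embedded in the file.
                X509Certificate signer = X509Certificate.CreateFromSignedFile(path);
                var cert = new X509Certificate2(signer);

                Console.WriteLine("Signer : " + cert.Subject);
                Console.WriteLine("Issuer : " + cert.Issuer);
                Console.WriteLine("Valid  : {0:d} to {1:d}", cert.NotBefore, cert.NotAfter);

                // Note: no property here for the signature timestamp - that lives in the
                // countersignature, which only CryptQueryObject/WinVerifyTrust expose.
            }
        }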

    Read the article

  • moviePlayBackDidFinish: is not called in my application

    - by srikanth rongali
    In my application I have used these notifications when playing a video. But the problem is that control is not entering the first two methods (moviePreloadDidFinish and moviePlayBackDidFinish). It is entering the last method. So, after the video is played and I press the Done button in the video, I cannot touch any object on the screen. I mean, I used a play button for the video to play in a view. After the video is played, the play button is deactivated.

        - (void) moviePreloadDidFinish:(NSNotification*)notification
        {
        }

        // Notification called when the movie finished playing.
        - (void) moviePlayBackDidFinish:(NSNotification*)notification
        {
        }

        // Notification called when the movie scaling mode has changed.
        - (void) movieScalingModeDidChange:(NSNotification*)notification
        {
        }

    I could not work out where I went wrong. Thank you.

    Read the article

  • Can I programmatically get hold of the Autos/local variables that are shown when debugging?

    - by Stefan
    I'm trying to build an error logger that logs the running values that are active in the function that caused the error. (Just for fun, so it's not a critical problem.) When in break mode and looking at the Locals tab and Autos tab, you can see all active variables (name, type and value); it would be useful to get hold of that for logging purposes when an error occurs, and on some other occasions. For my example, I just want to find all local variables that are of type string or integer and store their names and values. Is this possible with reflection? Any tips or pointers that get me closer to my goal would be very appreciated. I have toyed with using expressions on a specific object (a structure) to create an automapper against a dataset, but I have not done anything like what I ask for above, so please make me happy and say it's possible. Thanks.
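
    For what it's worth, plain reflection can list a method's local variable slots, but only their types and indices; the names live in the PDB and the runtime values are only reachable through the debugging APIs, which is the wall this idea runs into. A small sketch:

        using System;
        using System.Reflection;

        class LocalsDemo
        {
            static void Sample()
            {
                string message = "hello";
                int count = 42;
                Console.WriteLine(message + count);
            }

            static void Main()
            {
                MethodBody body = typeof(LocalsDemo)
                    .GetMethod("Sample", BindingFlags.NonPublic | BindingFlags.Static)
                    .GetMethodBody();

                // Only slot index and type are available - no names, no runtime values.
                foreach (LocalVariableInfo local in body.LocalVariables)
                {
                    Console.WriteLine("slot {0}: {1}", local.LocalIndex, local.LocalType);
                }
            }
        }

    In a Release build the compiler may even optimize the locals away, so an error logger is usually better off capturing method parameters and explicitly registered state instead.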

    Read the article

  • Asynchronous operations performance

    - by LicenseQ
    One of the features of asynchronous programming in .NET is saving threads during long-running operations. The FileStream class can be set up to allow asynchronous operations, which allows running (e.g.) a copy operation without using virtually any threads. To my surprise, I found that an asynchronous stream copy not only performs slower, but also uses more processing power than the synchronous equivalent. Are there any benchmark tests that compare synchronous vs. asynchronous operation execution (file, network, etc.)? Does it really make sense to perform an asynchronous operation instead of spawning a separate thread and performing the synchronous operation in a server environment, if the asynchronous operation is several times slower than the synchronous one?
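
    If you want to measure this yourself, a rough benchmark sketch along these lines (assumes .NET 4.5+ for CopyToAsync; file paths and buffer size are placeholders) times a synchronous copy against an asynchronous one opened with FileOptions.Asynchronous:

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Threading.Tasks;

        class CopyBenchmark
        {
            const int BufferSize = 64 * 1024;

            static void Main()
            {
                string source = @"C:\temp\big-input.bin";

                var sw = Stopwatch.StartNew();
                using (var input = new FileStream(source, FileMode.Open, FileAccess.Read, FileShare.Read, BufferSize))
                using (var output = new FileStream(@"C:\temp\copy-sync.bin", FileMode.Create, FileAccess.Write, FileShare.None, BufferSize))
                {
                    input.CopyTo(output);
                }
                Console.WriteLine("sync : " + sw.Elapsed);

                sw.Restart();
                CopyAsync(source, @"C:\temp\copy-async.bin").Wait();
                Console.WriteLine("async: " + sw.Elapsed);
            }

            static async Task CopyAsync(string from, string to)
            {
                // FileOptions.Asynchronous requests true overlapped I/O instead of thread-pool emulation.
                using (var input = new FileStream(from, FileMode.Open, FileAccess.Read, FileShare.Read, BufferSize, FileOptions.Asynchronous))
                using (var output = new FileStream(to, FileMode.Create, FileAccess.Write, FileShare.None, BufferSize, FileOptions.Asynchronous))
                {
                    await input.CopyToAsync(output);
                }
            }
        }

    Buffer size and whether the target file is on the same disk tend to dominate the result, so it is worth varying both before drawing conclusions.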

    Read the article

  • Android: databinding when using an ArrayAdapter: possible?

    - by Peterdk
    I need some simple databinding for a Spinner. I want to display 2 items for each dropdown item, so when the user clicks the spinner I get a list like:

        -------------------
        Name
        123456
        -------------------
        Name
        123456
        -------------------

    I understand this can be done when using a Cursor, according to the databinding info on Android dev, like:

        SimpleCursorAdapter adapter2 = new SimpleCursorAdapter(this,
            R.layout.my_custom_spinner_item_layout,
            cur,
            new String[] {People.NAME, People.ID},
            new int[] {android.R.id.text1, android.R.id.text2});

    However, I don't get my data from a database, so I don't use a Cursor; I use an ArrayAdapter. Unfortunately it looks like there is no support for databinding with this adapter. Is there a way to do this?

    Read the article

  • OpenStreetMap and Hadoop

    - by portoalet
    Hi, I need some ideas for a weekend project about Hadoop and OpenStreetMap. I have access to an AWS EC2 instance with an OpenStreetMap snapshot in my EBS volume. The OpenStreetMap data is in a PostgreSQL database. What kind of MapReduce function can be run on the OpenStreetMap data, assuming I can export it into XML format and then place it into HDFS? In other words, I am having a brain cramp at the moment and cannot think what kind of MapReduce operation could extract valuable insight from the OpenStreetMap XML (e.g. extract all the places designated as a park or golf course - but that needs to be done only once, not continuously). Many thanks.

    Read the article

  • How to get an image to pulse with opacity with jQuery

    - by Alex
    I am trying to get an image to change opacity smoothly over a duration of time. Here's the code I have for it:

        <script type="text/javascript">
        function pulsem(elementid) {
            var element = document.getElementById(elementid);
            jQuery(element).pulse({opacity: [0,1]}, {
                duration: 100,    // duration of EACH individual animation
                times: 3,         // will go three times through the pulse array [0,1]
                easing: 'linear', // easing function for each individual animation
                complete: function() {
                    alert("I'm done pulsing!");
                }
            });
        }
        </script>
        <a href="city.htm"><img src="waterloo.png" onmouseover="pulsem('waterloo')" border="0" class="env" id="waterloo"/></a>

    Also, is there a way for this to happen automatically without the need of a mouseover? Thanks.

    Read the article

  • PHP CMS with an independent framework

    - by Simon
    We currently use MySource Matrix CMS for large projects, WordPress for small projects and Zend Framework for bespoke applications... I'm not trying to confuse and compare a CMS to a framework, that has been done before :-) I want to identify a few CMSs for review that have foundations in strong (preferably independent) PHP frameworks. The only one I have looked at is SilverStripe CMS and its Sapphire framework. We have many clients that have a CMS for internet and/or extranet and then various other bespoke applications that are integrated via various means to look like they're in the CMS. I believe it will be more productive and beneficial to have a common framework between these branches so they can be natively merged. Hope this makes sense. PS: I have used custom assets in MySource Matrix and specific modules in other CMSs, but you feel you are working for the CMS, not the application you are building.

    Read the article

  • How do I put a custom version of node/add form in a view using customfield in drupal 6?

    - by niedakh
    Hi, I'm trying to add a pre-filled 'add reply' form to a view of nodes. Reply is a content type (reply) with certain fields that need to be pre-filled based on what is in the view. This way a user can see only the selected fields from node/add/reply. At the moment I'm building the forms manually (copying the form from node/add and doing some modifications using PHP and Views customfield), but I would like to be able to just push default values to some fields, hide some others, and then make Drupal render it with all the JavaScript glory like date select etc. Can this be done?

    Read the article

  • Getting into a technology which requires experience when you have no experience

    - by dotnetdev
    It seems that SharePoint is a technology which is very hard to get into. All the jobs in this tech require experience working with it (e.g. 2 years development experience in MOSS). If I wanted to get into this, but had no job that used the tech, how could I get the experience needed to land an experienced role? Jobs state you need "2 years professional experience in MOSS 2007", but if you have never done it, you won't get the job. The only possible way is to do this at home and not in a team, but if you work in the meantime, that will negate this (it's not like teamworking is tech-specific). Many people think that if you decide to make a project at home you're just going to play about aimlessly rather than work to specs (whereas in my current situation it's vice versa), but if you're dedicated, like me, you would write them, just not with the same presentation. Would employers treat experience at home as professional experience? BizTalk is another prime example of this. Thanks.

    Read the article

  • Disposables, Using & Try/Catch Blocks

    - by Aren B
    Having a mental block today, need a hand verifying my logic isn't fubar'ed. Traditionally I would do file I/O similar to this:

        FileStream fs = null; // So it's visible in the finally block
        try
        {
            fs = File.Open("Foo.txt", FileMode.Open);
            /// Do Stuff
        }
        catch (IOException)
        {
            /// Handle Stuff
        }
        finally
        {
            if (fs != null)
                fs.Close();
        }

    However, this isn't very elegant. Ideally I'd like to use the using block to dispose of the FileStream when I'm done; however, I am unsure about the synergy between using and try/catch. This is how I'd like to implement the above:

        try
        {
            using (FileStream fs = File.Open("Foo.txt", FileMode.Open))
            {
                /// Do Stuff
            }
        }
        catch (Exception)
        {
            /// Handle Stuff
        }

    However, I'm worried that a premature exit (via a thrown exception) from within the using block may not allow the using block to complete execution and clean up its object. Am I just paranoid, or will this actually work the way I intend it to?
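
    For reference, the using statement is compiled down to roughly the following try/finally, so the stream is disposed even when an exception is thrown inside the block, and the outer catch still observes that exception afterwards (a sketch of the expansion, not extra code you need to write):

        FileStream fs = File.Open("Foo.txt", FileMode.Open);
        try
        {
            /// Do Stuff
        }
        finally
        {
            if (fs != null)
                ((IDisposable)fs).Dispose();
        }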

    Read the article

  • C# "Enum" Serialization - Deserialization to Static Instance

    - by Walt W
    Suppose you have the following class:

        class Test : ISerializable
        {
            public static Test Instance1 = new Test { Value1 = "Hello", Value2 = 86 };
            public static Test Instance2 = new Test { Value1 = "World", Value2 = 26 };

            public String Value1 { get; private set; }
            public int Value2 { get; private set; }

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                // Serialize an indicator of which instance we are - currently
                // I am using the FieldInfo for the static reference.
            }
        }

    I was wondering if it is possible / elegant to deserialize to the static instances of the class? Since the deserialization routines (I'm using BinaryFormatter, though I'd imagine others would be similar) look for a constructor with the same argument list as GetObjectData(), it seems like this can't be done directly. Which I would presume means that the most elegant solution would be to actually use an enum, and then provide some sort of translation mechanism for turning an enum value into an instance reference. How might one go about this?
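
    One way this is commonly handled (a sketch of the IObjectReference pattern from System.Runtime.Serialization, adapted to the class above; the "Instance" key name is arbitrary): serialize an indicator in GetObjectData, then let GetRealObject swap the freshly deserialized object for the matching static instance:

        using System;
        using System.Runtime.Serialization;

        [Serializable]
        class Test : ISerializable, IObjectReference
        {
            public static readonly Test Instance1 = new Test { Value1 = "Hello", Value2 = 86 };
            public static readonly Test Instance2 = new Test { Value1 = "World", Value2 = 26 };

            public String Value1 { get; private set; }
            public int Value2 { get; private set; }

            private Test() { }

            private readonly string _instanceName;   // only used during deserialization

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                // Persist an indicator of which static instance this is.
                info.AddValue("Instance", this == Instance1 ? "Instance1" : "Instance2");
            }

            // Deserialization constructor captures the indicator...
            private Test(SerializationInfo info, StreamingContext context)
            {
                _instanceName = info.GetString("Instance");
            }

            // ...and GetRealObject replaces the deserialized copy with the static instance.
            public object GetRealObject(StreamingContext context)
            {
                return _instanceName == "Instance1" ? Instance1 : Instance2;
            }
        }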

    Read the article

  • How to include ASP.NET assets when using Team Build 2008

    - by bonskijr
    I am able to configure our build server (Team Build 2008) to build our ASP.NET application. I've done so via:

        <ConfigurationToBuild Include="Debug|Mixed Platforms">
            <FlavorToBuild>Debug</FlavorToBuild>
            <PlatformToBuild>Mixed Platforms</PlatformToBuild>
        </ConfigurationToBuild>

    The problem, though, is that the ASP.NET assets (e.g. script folders, images, etc.) are not copied to the deployment folder. The _PublishedWebsites folder only contains the binary references of the app plus the pre-compiled web services. Is there a way to include said folders/files in the deployment folder? Thanks.

    Read the article

  • AJAX and iFrame: Calling AJAX from inside the iFrame to Update an Outside DIV

    - by KcYxA
    Hi there, I have a page where a user can upload a file along with some other input. Because I wanted this to be AJAX-like, I resorted to using an iframe to accomplish it. After the file is uploaded and the iframe is loaded with a response page, I need to update a DIV outside of the iframe with an AJAX call. The reason for the separate updates is that the result of the outside DIV depends on the input that the user provided with the file input. Can this be done? Am I approaching this the wrong way? Thank you!

    Read the article

  • GhostScript font issues

    - by Robert
    I'm running GPL Ghostscript 8.70 (2009-07-31) on Windows XP. I have about 100 PDF files I've attempted to run through GS, but I'm having font-related issues with two separate groups of files from two different customers. I'm not sure if the issues could be related. Here are the two errors I receive:

        Loading Courier font from C:\Program Files\gs\fonts/cour.ttf... 2343384 986555 13583240 12261829 3 done.
        Using CourierNewPSMT font for Courier.
        Error: /rangecheck in --get--

        Can't find CID font "Arial".
        Substituting CID font /Adobe-Identity for /Arial, see doc/Use.htm#CIDFontSubstitution.
        The substitute CID font "Adobe-Identity" is not provided either. Will exit with error.
        Error: /undefined in findresource

    I've tried just about everything I can think of with Fontmap and cidfmap. Does anyone out there have a solution?

    Read the article
