Search Results

Search found 17487 results on 700 pages for 'static members'.


  • In Django, why do I get a 500 server error when browsing, but "python mysite.fcgi" over SSH works fine?

    - by Jim
    If I browse to my site, I get a 500 "internal server error." However, if I SSH into my server and go to my site's folder and run "python mysite.fcgi" I see the HTML rendered fine. Obviously, something is wrong, but I'm not sure what. Here is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteRule ^(media/.*)$ - [L]
        RewriteRule ^(static/.*)$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]

    Here is my mysite.fcgi file:

        #!/usr/bin/python2.5
        import sys, os
        sys.path.insert(0, "/kunden/homepages/34/[mydir]/htdocs/projects/django")
        sys.path.insert(1, "/kunden/homepages/34/[mydir]/lib/python/site-packages")
        os.chdir("/kunden/homepages/34/[mydir]/htdocs/projects/django/mysite")
        os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(["method=threaded", "daemonize=false"])

    I'm setting this up on 1and1. It has been a pain, but I think I'm close.

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization. Question: Why would I use the Oracle Secure Global Desktop Secure Gateway? Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD web server on TCP port 80, or, if secure connections are enabled (SSL/TLS), TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD across a firewall, the ports that AIP requires are typically "closed", and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307. To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are multiplexed onto a single well-known TCP port, that is, port 443, the HTTPS port. This is also known as single-port firewall traversal. The technique takes advantage of the fact that, as a well-known service, port 443 is usually open, allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately. The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ, accepting, authenticating, and terminating incoming client connections, then re-encrypting and proxying them on to SGD servers deeper in the network. The client no longer needs to know the names or IP addresses of the SGD servers in the network; it connects only to the gateway, which takes care of those internal network details. The Secure Gateway supports the same single-port firewall capability as "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives. Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, greater flexibility, and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".

    Read the article

  • Code contracts and inheritance

    - by DigiMortal
    In my last posting about code contracts I showed you how to force code contracts onto classes through interfaces. In this posting I will go a step further and show you how code contracts work in the case of inherited classes. First, let's take a look at my interface and its code contracts.

        [ContractClass(typeof(ProductContracts))]
        public interface IProduct
        {
            int Id { get; set; }
            string Name { get; set; }
            decimal Weight { get; set; }
            decimal Price { get; set; }
        }

        [ContractClassFor(typeof(IProduct))]
        internal sealed class ProductContracts : IProduct
        {
            private ProductContracts() { }

            int IProduct.Id
            {
                get { return default(int); }
                set { Contract.Requires(value > 0); }
            }

            string IProduct.Name
            {
                get { return default(string); }
                set
                {
                    Contract.Requires(!string.IsNullOrWhiteSpace(value));
                    Contract.Requires(value.Length <= 25);
                }
            }

            decimal IProduct.Weight
            {
                get { return default(decimal); }
                set
                {
                    Contract.Requires(value > 3);
                    Contract.Requires(value < 100);
                }
            }

            decimal IProduct.Price
            {
                get { return default(decimal); }
                set
                {
                    Contract.Requires(value > 0);
                    Contract.Requires(value < 100);
                }
            }
        }

    And here is the product class that implements the IProduct interface.

        public class Product : IProduct
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual decimal Weight { get; set; }
            public decimal Price { get; set; }
        }

    If we run this code and violate the code contract set on Id, we get a ContractException.

        public class Program
        {
            static void Main(string[] args)
            {
                var product = new Product();
                product.Id = -100;
            }
        }

    Now let's make Product an abstract class and define a new class called Food that adds one more contract to the Weight property.

        public class Food : Product
        {
            public override decimal Weight
            {
                get { return base.Weight; }
                set
                {
                    Contract.Requires(value > 1);
                    Contract.Requires(value < 10);

                    base.Weight = value;
                }
            }
        }

    Now we should have the following rules in place for Food: weight must be greater than 1, weight must be greater than 3, weight must be less than 100, and weight must be less than 10. The interesting part is what happens when we try to violate the lower and upper limits of Food's weight. To see what happens, let's try to violate rules #2 and #4. Just comment out one of the last lines in the following method to test the other assignment.

        public class Program
        {
            static void Main(string[] args)
            {
                var food = new Food();
                food.Weight = 12;
                food.Weight = 2;
            }
        }

    And here are the results as pictures to see where the exceptions are thrown. Click on the images to see them at original size. Violation of lower limit. Violation of upper limit. As you can see, for both violations we get a ContractException, as expected. Code contracts inheritance is a powerful and at the same time dangerous feature. Although you can always narrow down the conditions that come from more general classes, it is possible to define impossible or conflicting contracts at different points in the inheritance hierarchy.

    Read the article

  • How do I clean dust from a computer?

    - by Jonas
    As computers become faster and generate more heat, it gets more important to have good ventilation, but that also increases the amount of dust sticking to the components of the computer. It's of course better to make sure the computer never gets dusty by vacuum cleaning around it (not in it) frequently. But what to do if it's already too late? I've heard that vacuum cleaning the computer itself is very bad, since it can cause static electricity that hurts the computer. So, does anyone have any tips for how to remove dust from your computer?

    Read the article

  • Generating documents with templating from a form

    - by Anna
    Hello, I would like to create a document generator with templating. The workflow should be as follows: the user inputs data into a static form (simple text inputs), the user chooses a graphically designed template, and a document in the chosen template containing the user's data is generated. The initial template repository is prepared in advance, but it should be easy to add new templates to the process. I have the full MS Office suite and the preferred file format is MS .doc. I can do a little VB scripting if needed, but I prefer not to. Any advice would be greatly appreciated. Thank you, Anna

    Read the article

  • Web service not accessible from behind corporate firewalls - how come?

    - by Niro
    We run a SaaS serving a widget which is embedded in customer websites. The service includes static JavaScript code hosted on Amazon S3 and a dynamic part hosted on EC2 with Scalr (using Scalr name servers). We received some feedback from users behind corporate firewalls that they can't access our service (while they can access the sites that include the widget). This does not make sense to me, since the service uses normal HTTP calls on port 80 and our URL is quite new, without any reason to be banned by firewalls. My questions are: 1. Why is the service not accessible, and what can I do about it? 2. Is it possible that one of the following is blocked by corporate firewalls: Amazon S3, the dynamic IP address provided by Amazon, or the Scalr name servers? Any other possible reasons, ways to check them, and remedies for this? Thanks!

    Read the article

  • Having to check collisions twice per game tic

    - by user22241
    I have vertically moving elevators (3 solid tiles wide) and static solid tiles. Each is a separate entity and therefore has its own collision routine (to check for, and resolve, collisions with the main character). I check my vertical collisions after the character's vertical movement and then horizontal collisions after horizontal movement. The problem is that I want my platform to kill the player if it squashes him from the top, and also if he's on a moving platform (that is moving up) that squashes him into a solid block. Correct behaviour, player on solid blocks being squashed from above by a descending elevator: gravity pushes the character into a solid block, the solid block collision routine corrects the character's position and sits him on the solid block, which pushes him into the moving elevator; the elevator routine then checks for a collision and kills the player. This assumes I am checking solid blocks first, then elevator collisions. However, if it's the other way around, this happens... Incorrect behaviour, player on an ascending elevator gets pushed into the solid blocks above: the player is on an elevator moving up, gravity pushes him into the elevator, the solid block CD routine detects no collision, no action taken. The elevator CD routine detects that the character has been pushed into the elevator by gravity, corrects this by moving the character up and sitting him on the elevator, and pushes him into the solid blocks above; however, the solid block vertical routine has now already run for this tic, so the game continues and the next solid block collision that is encountered is the horizontal routine. This detects a collision and moves the character out of the collision to the left or right of the block, which looks odd to say the least (the character should get killed here). The only way I've managed to get this working correctly is by running the solid block CD, then the elevator CD, then the solid block CD again straight after, as sketched below. This is clearly wasteful but I can't figure out how else to do this. Any help would be appreciated.
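    For reference, here is a minimal Java sketch of the per-tic ordering described above (solid blocks, then elevators, then solid blocks again). The class and method names are placeholders, not the poster's actual engine API; it only illustrates the structure of the double check.

        // A minimal sketch of the per-tic vertical ordering described above: solid blocks are
        // checked, then elevators, then solid blocks again. The stubbed-out methods are
        // placeholders, not the poster's actual collision code.
        public class CollisionOrderSketch {

            void updateVertical(double verticalVelocity) {
                moveCharacterVertically(verticalVelocity); // gravity / jumping for this tic

                resolveSolidBlocksVertical(); // may sit the character on a block (and into an elevator)
                resolveElevatorsVertical();   // may lift the character, or squash him from above
                resolveSolidBlocksVertical(); // re-check: if the elevator pushed him into a block,
                                              // detect the squash now instead of in the horizontal pass
            }

            private void moveCharacterVertically(double dy) { /* apply vertical movement */ }
            private void resolveSolidBlocksVertical()       { /* solid tile collision + response */ }
            private void resolveElevatorsVertical()         { /* elevator collision + response */ }
        }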

    Read the article

  • D-Link DNS-323 NAS firmware update

    - by Mark Beaton
    Hi all, I've got a D-Link DNS-323 NAS enclosure holding a bunch of multimedia files that I've (possibly stupidly) just updated the firmware on, from 1.03 to 1.08. The updater indicated it applied the firmware patch successfully, but after rebooting it I can no longer get into it via the web interface, either via the static IP I had assigned it before the update, or by any of the DHCP-assigned addresses that I can see are currently assigned by my router. The unit just sits there, with the drives (2x512 set up as RAID-1) thrashing away seemingly forever... So, my question - has anyone had a similar experience with one of these units? Any advice etc? I've googled the hell out of it, and can't find anything useful.

    Read the article

  • Connect to wired and wireless networks at same time, Ubuntu

    - by Gary Chambers
    Currently, I have a media PC running Ubuntu 10.04 that I am trying to connect via a wired network cable directly to a NAS box, and wirelessly to the router. This works no problem after I run sudo /etc/init.d/networking restart, but I can't get both interfaces to come up on system startup. My /etc/network/interfaces file reads as follows:

        auto eth0
        iface eth0 inet static
            address 10.0.1.2
            netmask 255.255.254.0
            broadcast 10.0.1.255
            network 10.0.1.0

        auto wlan2
        iface wlan2 inet dhcp

    As I say, I know this works, because I can get it to work by restarting the network interfaces, but I can't bring them both up on system startup. Does anyone know why this might be?

    Read the article

  • Working and Studying at Oracle: how I balance my time...

    - by anca.rosu
    Hi, my name is Laura. I am working as an Intern within Executive Administration at Oracle Denmark, whilst studying Information Management at Copenhagen Business School. I recently handed in a paper on Information Systems which gave me exposure to Oracle. Once completing this paper I came across a job posting on my University's intranet site and I applied directly online. When I submitted my application for the job offer, I wondered about what language I should use for the application form, as the job posting was in Danish, but the contact person and number looked Irish. I therefore chose English. Later that same day, Fiona, one of Oracle's Graduate Recruitment Consultants based in Ireland, contacted me. This shows how global Oracle truly is. I went for my face-to-face interview at Oracle Denmark with Charlotte, one of the team managers. I spent 5 minutes waiting in the lobby, just looking around, thinking to myself, I really want to work here. The atmosphere seemed so pleasant, with a relaxed approach between colleagues, employees and guests. The interview took about an hour, but we touched on a lot of different subjects. The profile I got of Oracle was that this is a place where you are encouraged to think for yourself, and you are given the freedom to use your ideas. Later that evening, Fiona called and offered me the job. I was very happy. At Oracle Denmark we have 4 different zones: a Quiet Zone, a Project Zone, a Dialogue Zone and a Call Zone. Every day when you arrive you consider what will be the most productive for the day's task, and you take your toolbox and go find a desk in the zone you have decided on. It is therefore very unusual to be next to the same person two days in a row. At Oracle, people are located all over the world, and everybody has team members, colleagues or leaders in other countries, or even other time zones. Initially, I was worried about how I would adapt to this approach, but I soon realized I had nothing to worry about and now I appreciate working this way. My colleagues have been very supportive and they have openly welcomed me into my new role. I typically work two days a week and have three days at University. During exam periods, I have the flexibility to work fewer hours and focus on the exams, in return for putting in more hours at work when needed. The first time I had to ask for time off before handing in a paper, my boss looked at me and said, "Of course! Your education is the most important!" I hope that by sharing my experiences with you, I can inspire or encourage you to consider Oracle as a potential employer, where you can grow both professionally and personally. If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com Technorati Tags: Intern,Oracle Denmark,Information Systems,Business school,Copenhagen,Graduates Recruitment,Ireland,Quiet Zone,Project Zone,Dialogue Zone,Call Zone,University,flexibility

    Read the article

  • Smart subdomain routing via reverse proxy

    - by Trevor Hartman
    I have two servers on my home network: OSX Server and an Ubuntu Server. I'd love to have external subdomains osx.mydomain.com point to osx and ubuntu.mydomain.com point to ubuntu. I know the normal way to do this is to have a static external IP address for each, but that's not an option as this is just my home setup. My question is: is there a way to do this with some reverse proxy trickery? OSX is currently the default entry point for all traffic. I was able to setup a reverse proxy on OSX for ubuntu.mydomain.com on port 80, so web traffic was correctly being proxied to my ubuntu. I'd like to ssh and do a bunch of other stuff though!

    Read the article

  • How is gzipped content transferred on the web?

    - by PJ
    I heard that static content like CSS and JavaScript can be better delivered in gzip format, and Content Delivery Networks (CDNs) always do so. However, I don't understand how the format works. First, I tried making a gzipped file via the command line. The file extension is .gz, which is different from .css and .js. How do browsers recognize which files are gzipped and which are not? Second, how do browsers "decompress" the files? I dragged my index.html.gz onto my browsers, but none of them worked. How does gzipped content work in the real world, and what do I need to do if I want to serve CSS/JavaScript in gzipped format?
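    For what it's worth, the usual mechanism is not the .gz extension at all but HTTP headers: the browser advertises Accept-Encoding: gzip on the request, and the server marks the compressed response with Content-Encoding: gzip so the browser inflates it transparently. Below is a minimal Java sketch (using the JDK's built-in com.sun.net.httpserver) that serves a pre-compressed stylesheet this way; the file name styles.css.gz and the port are made-up examples, and a real server would also check the request's Accept-Encoding before answering with gzip.

        import com.sun.net.httpserver.HttpServer;
        import java.net.InetSocketAddress;
        import java.nio.file.Files;
        import java.nio.file.Path;

        // Minimal sketch: serve a pre-gzipped CSS file with the headers that tell the
        // browser what it is. "styles.css.gz" is a placeholder file name.
        public class GzipStaticServer {
            public static void main(String[] args) throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                server.createContext("/styles.css", exchange -> {
                    // NOTE: a production server should first check the request's
                    // Accept-Encoding header and fall back to the uncompressed file.
                    byte[] gz = Files.readAllBytes(Path.of("styles.css.gz")); // compressed bytes on disk
                    // The content is still CSS as far as the browser is concerned...
                    exchange.getResponseHeaders().set("Content-Type", "text/css");
                    // ...but the body is gzip-encoded, so the browser decompresses it.
                    exchange.getResponseHeaders().set("Content-Encoding", "gzip");
                    exchange.sendResponseHeaders(200, gz.length);
                    try (var out = exchange.getResponseBody()) {
                        out.write(gz);
                    }
                });
                server.start();
            }
        }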

    Read the article

  • Why does explorer restart automatically when I kill it with Process.Kill?

    - by Thomas Levesque
    If I kill explorer.exe like this:

        private static void KillExplorer()
        {
            var processes = Process.GetProcessesByName("explorer");
            Console.Write("Killing Explorer... ");
            foreach (var process in processes)
            {
                process.Kill();
                process.WaitForExit();
            }
            Console.WriteLine("Done");
        }

    it restarts immediately. But if I use taskkill /F /IM explorer.exe, or kill it from the task manager, it doesn't restart. Why is that? What's the difference? How can I close explorer.exe from code without restarting it? Sure, I could call taskkill from my code, but I was hoping for a cleaner solution...

    Read the article

  • hudson/jenkins: help needed to get started with customization work

    - by user64204
    I would like to customize Jenkins by adding links to the left-hand side panel and using the pages associated with these links to serve some custom content in place of the jobs/views table displayed by default. I managed to add links to the sidebar using the sidebar-links plugin. Now I'm trying to see how to replace the content of the <td id="main-panel"> element with some custom content. The custom content is generated by some PHP scripts which ideally should be called by Jenkins every time the custom pages are requested; if that is too complicated, I can either create static content to be served by Jenkins by calling my PHP scripts from a crontab, or see whether the calls to the PHP scripts can be made by Apache itself before the page requests are sent to Jenkins. I'm not sure writing a plugin is the best way to proceed, and I would like your thoughts on how I should implement this.
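    If the plugin route does turn out to be the way to go, the usual Jenkins extension point for a sidebar link plus a custom page is a RootAction. The sketch below only illustrates that mechanism; it is not a drop-in replacement for the PHP setup described above, and the URL name, icon, and page text are made up.

        import hudson.Extension;
        import hudson.model.RootAction;
        import org.kohsuke.stapler.StaplerRequest;
        import org.kohsuke.stapler.StaplerResponse;
        import java.io.IOException;

        // Sketch of a Jenkins RootAction: contributes a link in the left-hand panel and
        // serves a custom page at <jenkins-url>/custom-report. Names are illustrative.
        @Extension
        public class CustomReportAction implements RootAction {

            @Override
            public String getIconFileName() {
                return "document.png";  // a standard Jenkins icon; returning null hides the link
            }

            @Override
            public String getDisplayName() {
                return "Custom Report"; // text of the sidebar link
            }

            @Override
            public String getUrlName() {
                return "custom-report"; // URL segment under the Jenkins root
            }

            // Stapler routes GET <jenkins-url>/custom-report/ here; a real plugin would more
            // typically render an index.jelly view, or proxy the externally generated content.
            public void doIndex(StaplerRequest req, StaplerResponse rsp) throws IOException {
                rsp.setContentType("text/html;charset=UTF-8");
                rsp.getWriter().println("<html><body><h1>Custom content goes here</h1></body></html>");
            }
        }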

    Read the article

  • Hyper-V Server hvremote.wsf Script - ns lookup for DNS Verification test fails

    - by Vazgen
    I'm trying to connect my Hyper-V Server to a Windows 8 client for remote management. I have:

      - Joined the server to WORKGROUP
      - Enabled Remote Management
      - Set the server name
      - Set a static IP
      - Set the DNS servers to my ISP's DNS servers (same as the default DNS servers on my Windows 8 remote management client)
      - Set the correct time zone
      - Created a net user on the server (net user /add admin password)
      - Added the user to the special Administrators group on the server (hvremote /add:admin)
      - Granted anonymous DCOM access on the client using hvremote

    However, the "ns lookup for DNS verification" test fails on both the client and the server with the same error:

        Server: my.isps.server.name.net
        Address: 111.222.333.1
        *** my.isps.server.name.net can't find 192.168.1.3: Non-existent domain

    Thanks for the help.

    Read the article

  • Monitoring settings in a configsection of your app.config for changes

    - by dotjosh
    The usage:

        public static void Main()
        {
            using (var configSectionAdapter = new ConfigurationSectionAdapter<ACISSInstanceConfigSection>("MyConfigSectionName"))
            {
                configSectionAdapter.ConfigSectionChanged += () =>
                {
                    Console.WriteLine("File has changed! New setting is " + configSectionAdapter.ConfigSection.MyConfigSetting);
                };

                Console.WriteLine("The initial setting is " + configSectionAdapter.ConfigSection.MyConfigSetting);
                Console.ReadLine();
            }
        }

    The meat:

        public class ConfigurationSectionAdapter<T> : IDisposable where T : ConfigurationSection
        {
            private readonly string _configSectionName;
            private FileSystemWatcher _fileWatcher;

            public ConfigurationSectionAdapter(string configSectionName)
            {
                _configSectionName = configSectionName;
                StartFileWatcher();
            }

            private void StartFileWatcher()
            {
                var configurationFileDirectory = new FileInfo(Configuration.FilePath).Directory;
                _fileWatcher = new FileSystemWatcher(configurationFileDirectory.FullName);
                _fileWatcher.Changed += FileWatcherOnChanged;
                _fileWatcher.EnableRaisingEvents = true;
            }

            private void FileWatcherOnChanged(object sender, FileSystemEventArgs args)
            {
                var changedFileIsConfigurationFile = string.Equals(args.FullPath, Configuration.FilePath, StringComparison.OrdinalIgnoreCase);
                if (!changedFileIsConfigurationFile)
                    return;

                ClearCache();
                OnConfigSectionChanged();
            }

            private void ClearCache()
            {
                ConfigurationManager.RefreshSection(_configSectionName);
            }

            public T ConfigSection
            {
                get { return (T)Configuration.GetSection(_configSectionName); }
            }

            private System.Configuration.Configuration Configuration
            {
                get { return ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); }
            }

            public delegate void ConfigChangedHandler();
            public event ConfigChangedHandler ConfigSectionChanged;

            protected void OnConfigSectionChanged()
            {
                if (ConfigSectionChanged != null)
                    ConfigSectionChanged();
            }

            public void Dispose()
            {
                _fileWatcher.Changed -= FileWatcherOnChanged;
                _fileWatcher.EnableRaisingEvents = false;
                _fileWatcher.Dispose();
            }
        }

    Read the article

  • Suggestions for Troubleshooting Windows 7 lockups

    - by Craig L
    I've got a Dell Latitude E6500 that was working fine under Vista x64. I got one of the new Seagate 500GB Hybrid SSD/HD 2.5 drives and thought.. hmm.. let's try Win 7 x64 on it. Bottom line: It works great for hours and then it will hard lock. I don't mean BSOD (or whatever the Win7 equivalent is). I mean my screen is displaying a static image (if there is a clock displayed, it will be frozen at the time the lockup occurred) and the mouse and keyboard do not work. Control-Alt-Delete will not work. I have to hold down the power button to reboot. The event log records NOTHING at the time the lockup occurs. Obviously something is happening to the system to cause the lockup, but the default Windows 7 x64 doesn't log it. How can I log the things Windows doesn't normally log in Event Viewer ?

    Read the article

  • Ensuring non-conflicting components in a modular system

    - by Hailwood
    So let's say we are creating a simple "modular system" framework. The bare bones might be the user management, but we want things like the Page Manager, the Blog, and the Image Gallery to all be "optional" components. So a developer could install the Page Manager to allow their client to add a static home page and about page with content they can easily edit with a WYSIWYG editor. The developer could then also install the Blog component to allow the client to add blog entries. The developer could then also install the Gallery component to allow the client to show off a bunch of images. The thing is, all these components are designed to be independent, so how do we go about ensuring they don't clash? E.g. ensuring the client doesn't create a /gallery page with the Page Manager and then wonder why the gallery stopped working. The same issue exists with the Blog component, assuming we allow users to customize the URL structure of the blog (because remember, the Page Manager doesn't necessarily have to be there, so we might not want our blog posts to be Date/Title formatted); likewise, our clients aren't always going to be happy to have their pages under pages/title formatting. My core question here is: when building a modular system, how do we ensure that the modules don't conflict without restricting functionality? Do we just leave it up to the clients/developers using the modules to ensure they get set up in a way that does not conflict?
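    One way to make the /gallery clash concrete, purely as an illustration and not a prescription: the framework could keep a single registry of URL slugs that every component must claim its routes through, so a conflict surfaces at configuration time instead of silently breaking a component. The sketch below is hypothetical; none of these class names come from the question.

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical illustration of a central slug registry shared by all components,
        // so that e.g. the Page Manager cannot silently take over the Gallery's /gallery URL.
        public class SlugRegistry {

            private final Map<String, String> ownerBySlug = new HashMap<>();

            /** Claims a top-level slug for a component; fails loudly on a conflict. */
            public void claim(String slug, String componentName) {
                String existingOwner = ownerBySlug.putIfAbsent(slug, componentName);
                if (existingOwner != null && !existingOwner.equals(componentName)) {
                    throw new IllegalStateException(
                        "Slug '" + slug + "' requested by " + componentName
                        + " is already owned by " + existingOwner);
                }
            }

            public static void main(String[] args) {
                SlugRegistry registry = new SlugRegistry();
                registry.claim("gallery", "GalleryComponent");
                // The Page Manager trying to create a /gallery page now fails immediately,
                // instead of the gallery mysteriously "stopping working":
                registry.claim("gallery", "PageManager"); // throws IllegalStateException
            }
        }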

    Read the article

  • Status code in nginx try_files directive

    - by Hamish
    Is it possible to use the current status code as a parameter in try_files? For example, we try to provide a host-specific static 503 response, or a server-wide fallback if it isn't found:

        error_page 503 @error503;
        location @error503 {
            root /path_to_static_root/;
            try_files /$host/503.html /503.html =503;
        }

    There are a number of these directives, so it would be convenient to do something like:

        error_page 404 @error;
        error_page 500 @error;
        error_page 503 @error;
        location @error {
            root /path_to_static_root/;
            try_files /$host/$status.html /$status.html =$status;
        }

    But the Variables documentation doesn't list anything that we could use to do this. Is it possible, or is there an alternative way to do this?

    Read the article

  • Algorithm for dynamically calculating a level based on experience points?

    - by George
    One of the struggles I've always had in game development is deciding how to implement the experience points required to gain a level. There doesn't seem to be a pattern to gaining a level in many of the games I've played, so I assume they have a static lookup table which maps experience points to levels, e.g.:

        Experience    Level
                 0        1
               100        2
               175        3
               280        4
               800        5

    There isn't a rhyme or reason why 280 points equals level 4, it just is. I'm not sure how those levels are decided, but it certainly wouldn't be dynamic. I've also thought about the possibility of exponential levels, so as not to have to keep a separate lookup table, e.g.:

        Experience    Level
                 0        1
               100        2
               200        3
               400        4
               800        5
              1600        6
              3200        7
              6400        8

    But that seems like it would grow out of control rather quickly: towards the upper levels, the enemies in the game would have to provide a whopping amount of experience to level up, and that would be too difficult to control. Leveling would become an impossible task. Does anyone have any pointers, or methods they use to decide how to level a character based on experience? I want to be fair in leveling, and I want to stay ahead of the players so as not to worry about constantly adding new experience/level lookups.
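    A common compromise between a hand-written table and a runaway doubling curve is to compute the threshold from a polynomial formula, so levels are generated dynamically but growth stays controllable. The Java sketch below shows one illustrative curve; the constants (100 and 1.5) are arbitrary tuning knobs, not taken from any particular game.

        // Illustrative level curve: XP needed to *reach* a level grows polynomially,
        // which climbs faster than linear but far slower than doubling every level.
        // The constants (100, 1.5) are arbitrary tuning knobs, not from the question.
        public class LevelCurve {

            static final double BASE_XP = 100.0;
            static final double EXPONENT = 1.5;

            /** Total experience required to reach the given level (level 1 costs 0 XP). */
            static long xpForLevel(int level) {
                return Math.round(BASE_XP * Math.pow(level - 1, EXPONENT));
            }

            /** Derives the current level from total experience, with no lookup table. */
            static int levelForXp(long xp) {
                int level = 1;
                while (xpForLevel(level + 1) <= xp) {
                    level++;
                }
                return level;
            }

            public static void main(String[] args) {
                for (int level = 1; level <= 10; level++) {
                    System.out.printf("Level %2d needs %6d XP%n", level, xpForLevel(level));
                }
                System.out.println("1000 XP corresponds to level " + levelForXp(1000));
            }
        }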

    Read the article

  • Solved: Operation is not valid due to the current state of the object

    - by ChrisD
    We use public static methods decorated with [WebMethod] to support our Ajax Postbacks.   Recently, I received an error from a UI developing stating he was receiving the following error when attempting his post back: {   "Message": "Operation is not valid due to the current state of the object.",   "StackTrace": "   at System.Web.Script.Serialization.ObjectConverter.ConvertDictionaryToObject(IDictionary`2 dictionary, Type type, JavaScriptSerializer serializer, Boolean throwOnError, Object& convertedObject)\r\n   at System.Web.Script.Serialization.ObjectConverter.ConvertObjectToTypeInternal(Object o, Type type, JavaScriptSerializer serializer, Boolean throwOnError, Object& convertedObject)\r\n   at System.Web.Script.Serialization.ObjectConverter.ConvertObjectToTypeMain(Object o, Type type, JavaScriptSerializer serializer, Boolean throwOnError, Object& convertedObject)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeDictionary(Int32 depth)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeDictionary(Int32 depth)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.BasicDeserialize(String input, Int32 depthLimit, JavaScriptSerializer serializer)\r\n   at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize(JavaScriptSerializer serializer, String input, Type type, Int32 depthLimit)\r\n   at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize[T](String input)\r\n   at System.Web.Script.Services.RestHandler.GetRawParamsFromPostRequest(HttpContext context, JavaScriptSerializer serializer)\r\n   at System.Web.Script.Services.RestHandler.GetRawParams(WebServiceMethodData methodData, HttpContext context)\r\n   at System.Web.Script.Services.RestHandler.ExecuteWebServiceCall(HttpContext context, WebServiceMethodData methodData)",   "ExceptionType": "System.InvalidOperationException" }   Goggling this error brought me little support.  All the results talked about increasing the aspnet:MaxJsonDeserializerMembers value to handle larger payloads.  Since 1) I’m not using the asp.net ajax model and 2) the payload is very small, this clearly was not the cause of my issue. Here’s the payload the UI developer was sending to the endpoint: {   "FundingSource": {     "__type": "XX.YY.Engine.Contract.Funding.EvidenceBasedFundingSource,  XX.YY.Engine.Contract",     "MeansType": 13,     "FundingMethodName": "LegalTender",   },   "AddToProfile": false,   "ProfileNickName": "",   "FundingAmount": 0 } By tweaking the JSON I’ve found the culprit. Apparently the default JSS Serializer used doesn’t like the assembly name in the __type value.  Removing the assembly portion of the type name resolved my issue. { "FundingSource": { "__type": "XX.YY.Engine.Contract.Funding.EvidenceBasedFundingSource", "MeansType": 13, "FundingMethodName": "LegalTender", }, "AddToProfile": false, "ProfileNickName": "", "FundingAmount": 0 }

    Read the article

  • Tweaking log4net Settings Programmatically

    - by PSteele
    A few months ago, I had to dynamically add a log4net appender at runtime. Now I find myself in another log4net situation. I need to modify the configuration of my appenders at runtime. My client requires all files generated by our applications to be saved to a specific location. This location is determined at runtime. Therefore, I want my FileAppenders to log their data to this specific location – but I won't know the location until runtime, so I can't add it to the XML configuration file I'm using. No problem. Bing is my new friend and returned a couple of hits. I made a few tweaks to their LINQ queries and created a generic extension method for ILoggerRepository (just a hunch that I might want this functionality somewhere else in the future – sorry YAGNI fans):

        public static void ModifyAppenders<T>(this ILoggerRepository repository, Action<T> modify)
            where T : log4net.Appender.AppenderSkeleton
        {
            var appenders = from appender in log4net.LogManager.GetRepository().GetAppenders()
                            where appender is T
                            select appender as T;

            foreach (var appender in appenders)
            {
                modify(appender);
                appender.ActivateOptions();
            }
        }

    Now I can easily add the proper directory prefix to all of my FileAppenders at runtime:

        log4net.LogManager.GetRepository().ModifyAppenders<FileAppender>(a =>
        {
            a.File = Path.Combine(settings.ConfigDirectory, Path.GetFileName(a.File));
        });

    Thanks beefycode and Wil Peck. Technorati Tags: .NET,log4net,LINQ

    Read the article

  • Initializing OpenFeint for Android outside the main Application

    - by Ef Es
    I am trying to create a generic C++ bridge to use OpenFeint with Cocos2d-x, which is supposed to be just "add and run", but I am running into problems. OpenFeint is very particular about initialization: it requires a Context parameter that MUST be the main Application, in the onCreate method, never the constructor. Also, the main Application's name must be edited into the manifest. I am trying to work around this. So far I have tried to create a new Application that calls my Application, to test whether just the type is needed, but you really do need the main Android application. I also tried using a handler for a static initialization, but I ran into pretty much the same problem. Has anybody been able to do it? This is my working-but-not-as-intended code snippet:

        public class DerpHurr extends Application {

            @Override
            public void onCreate() {
                super.onCreate();
                initializeOpenFeint("TestApp", "edthedthedthedth", "aeyaetyet", "65462");
            }

            public void initializeOpenFeint(String appname, String key, String secret, String id) {
                Map<String, Object> options = new HashMap<String, Object>();
                options.put(OpenFeintSettings.SettingCloudStorageCompressionStrategy,
                        OpenFeintSettings.CloudStorageCompressionStrategyDefault);
                OpenFeintSettings settings = new OpenFeintSettings(appname, key, secret, id, options);
                // RIGHT HERE
                OpenFeint.initialize(this, settings, new OpenFeintDelegate() { });
                System.out.println("OpenFeint Started");
            }
        }

    Manifest:

        <application android:debuggable="true" android:label="@string/app_name" android:name=".DerpHurr">

    Read the article

  • What is the difference between Callback<T> and Java 8's Supplier<T>?

    - by Dan Pantry
    I've been switching over to Java from C# after some recommendations from folks over at CodeReview. So, when I was looking into LWJGL, one thing I remembered was that every call to Display must be executed on the same thread that the Display.create() method was invoked on. Remembering this, I whipped up a class that looks a bit like this:

        public class LwjglDisplayWindow implements DisplayWindow {

            private final static int TargetFramesPerSecond = 60;
            private final Scheduler _scheduler;

            public LwjglDisplayWindow(Scheduler displayScheduler, DisplayMode displayMode) throws LWJGLException {
                _scheduler = displayScheduler;
                Display.setDisplayMode(displayMode);
                Display.create();
            }

            public void dispose() {
                Display.destroy();
            }

            @Override
            public int getTargetFramesPerSecond() {
                return TargetFramesPerSecond;
            }

            @Override
            public Future<Boolean> isClosed() {
                return _scheduler.schedule(() -> Display.isCloseRequested());
            }
        }

    While writing this class you'll notice that I created a method called isClosed() that returns a Future<Boolean>. This dispatches a function to my Scheduler interface (which is nothing more than a wrapper around a ScheduledExecutorService). While writing the schedule method on the Scheduler I noticed that I could use either a Supplier<T> argument or a Callable<T> argument to represent the function that is passed in. ScheduledExecutorService didn't contain an overload for Supplier<T>, but I noticed that the lambda expression () -> Display.isCloseRequested() is actually type-compatible with both Callable<Boolean> and Supplier<Boolean>. My question is: is there a difference between those two, semantically or otherwise - and if so, what is it, so I can adhere to it?
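    For reference (this is standard JDK behaviour, not something from the question's code): the two interfaces have the same shape but different contracts. Callable.call() is declared to throw checked exceptions and is what the executor framework accepts; Supplier.get() declares no checked exceptions and exists for general value-producing lambdas. A small sketch:

        import java.util.concurrent.Callable;
        import java.util.function.Supplier;

        // The same lambda is compatible with both functional interfaces; the difference
        // is in the declared contract, not the shape.
        public class CallableVsSupplier {
            public static void main(String[] args) throws Exception {
                Callable<Boolean> callable = () -> true;   // Boolean call() throws Exception
                Supplier<Boolean> supplier = () -> true;   // Boolean get() - no checked exceptions

                // A Callable may do work that throws a checked exception...
                Callable<String> readsFile = () -> {
                    throw new java.io.IOException("checked exceptions are allowed here");
                };

                // ...a Supplier cannot, unless it wraps the exception in an unchecked one:
                Supplier<String> wrapped = () -> {
                    try {
                        return readsFile.call();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                };

                System.out.println(callable.call() + " " + supplier.get());
            }
        }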

    Read the article

  • Roles / Profiles / Perspectives in NetBeans IDE 7.1

    - by Geertjan
    With a check out of main-silver from yesterday, I'm able to use the brand new "role" attribute in @TopComponent.Registration, as you can see below, in the bit in bold: @ConvertAsProperties(dtd = "-//org.role.demo.ui//Admin//EN", autostore = false) @TopComponent.Description(preferredID = "AdminTopComponent", //iconBase="SET/PATH/TO/ICON/HERE", persistenceType = TopComponent.PERSISTENCE_ALWAYS) @TopComponent.Registration(mode = "editor", openAtStartup = true, role="admin") public final class AdminTopComponent extends TopComponent { And here's a window for general users of the application, with the "role" attribute set to "user": @ConvertAsProperties(dtd = "-//org.role.demo.ui//User//EN", autostore = false) @TopComponent.Description(preferredID = "UserTopComponent", //iconBase="SET/PATH/TO/ICON/HERE", persistenceType = TopComponent.PERSISTENCE_ALWAYS) @TopComponent.Registration(mode = "explorer", openAtStartup = true, role="user") public final class UserTopComponent extends TopComponent { So, I have two windows. One is assigned to the "admin" role, the other to the "user" role. In the "ModuleInstall" class, I add a "WindowSystemListener" and set "user" as the application's role: public class Installer extends ModuleInstall implements WindowSystemListener { @Override public void restored() { WindowManager.getDefault().addWindowSystemListener(this); } @Override public void beforeLoad(WindowSystemEvent event) { WindowManager.getDefault().setRole("user"); WindowManager.getDefault().removeWindowSystemListener(this); } @Override public void afterLoad(WindowSystemEvent event) { } @Override public void beforeSave(WindowSystemEvent event) { } @Override public void afterSave(WindowSystemEvent event) { } } So, when the application starts, the "UserTopComponent" is shown, not the "AdminTopComponent". Next, I have two Actions, for switching between the two roles, as shown below: @ActionID(category = "Window", id = "org.role.demo.ui.SwitchToAdminAction") @ActionRegistration(displayName = "#CTL_SwitchToAdminAction") @ActionReferences({ @ActionReference(path = "Menu/Window", position = 250) }) @Messages("CTL_SwitchToAdminAction=Switch To Admin") public final class SwitchToAdminAction extends AbstractAction { @Override public void actionPerformed(ActionEvent e) { WindowManager.getDefault().setRole("admin"); } @Override public boolean isEnabled() { return !WindowManager.getDefault().getRole().equals("admin"); } } @ActionID(category = "Window", id = "org.role.demo.ui.SwitchToUserAction") @ActionRegistration(displayName = "#CTL_SwitchToUserAction") @ActionReferences({ @ActionReference(path = "Menu/Window", position = 250) }) @Messages("CTL_SwitchToUserAction=Switch To User") public final class SwitchToUserAction extends AbstractAction { @Override public void actionPerformed(ActionEvent e) { WindowManager.getDefault().setRole("user"); } @Override public boolean isEnabled() { return !WindowManager.getDefault().getRole().equals("user"); } } When I select one of the above actions, the role changes, and the other window is shown. I could, of course, add a Login dialog to the "SwitchToAdminAction", so that authentication is required in order to switch to the "admin" role. Now, let's say I am now in the "user" role. So, the "UserTopComponent" shown above is now opened. I decide to also open another window, the Properties window, as below... ...and, when I am in the "admin" role, when the "AdminTopComponent" is open, I decide to also open the Output window, as below... 
Now, when I switch from one role to the other, the additional window(s) I opened will also be opened, together with the explicit members of the currently selected role. And the main window position and size are also persisted across roles. When I look in the "build" folder of my project in development, I see two different Windows2Local folders, one per role, automatically created by the fact that there is something to be persisted for a particular role, e.g., when a switch to a different role is done: And, with that, we now clearly have roles/profiles/perspectives in NetBeans Platform applications from NetBeans Platform 7.1 onwards.

    Read the article
