Search Results

Search found 20569 results on 823 pages for 'pc settings'.


  • django dynamically deduce SITE_ID according to the domain

    - by dcrodjer
    I am trying to develop a site which will render multiple customized sites according to the domain name (subdomain, to be more precise). All of my domain names redirect to the same application, so for each site there will be a corresponding model which defines how that site should look (SITE - SITE_SETTINGS). What would be the best way to use the Django sites framework to get the SITE_ID of the current site from the domain name, instead of hard-coding it in the settings files (as the Django sites documentation suggests), and then run the database queries and render the views accordingly? If using multiple settings files is my only option, can this be done with a WSGI script that handles the domain name?

    Update: Following Luke's answer, what I will do is define a custom middleware which makes the important variables available to the views according to the domain. As far as sitemaps and comments are concerned, I will have to customize the sitemaps app and build a custom sites model on which the other site-specific models will be based. Since the comments system relies on the hard-coded SITE_ID, I can use it as-is on the models (the models will already be filtered according to the site by my sites framework), though the permalink feature will have to be customized. So, a lot of customization. Please point out if I am going wrong anywhere in this, because I have to ensure that the features of the project stay optimized. Thanks!
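
    A minimal sketch of the middleware approach described in the update, assuming a SiteSettings model with a domain field; the app, model and attribute names here are illustrative, not from the post:

        from django.http import Http404
        from mysites.models import SiteSettings  # assumed app and model

        class DynamicSiteMiddleware(object):
            """Attach the matching SiteSettings row to every request based on the Host header."""

            def process_request(self, request):
                host = request.get_host().split(':')[0]   # drop any port suffix
                try:
                    request.site_settings = SiteSettings.objects.get(domain=host)
                except SiteSettings.DoesNotExist:
                    raise Http404("No site is configured for %s" % host)

    Views (and anything else downstream) can then read request.site_settings instead of relying on a hard-coded SITE_ID; the middleware class would be listed in MIDDLEWARE_CLASSES.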

    Read the article

  • A C# Refactoring Question...

    - by james lewis
    I came across the following code today and I didn't like it. It's fairly obvious what it's doing, but I'll add a little explanation here anyway: it reads all the settings for an app from the DB, then iterates through all of them looking for the DB version and the app version, and sets some variables to the values in the DB (to be used later). I looked at it and thought it was a bit ugly - I don't like switch statements and I hate things that carry on iterating through a list once they're finished. So I decided to refactor it. My question to all of you is: how would you refactor it? Or do you think it even needs refactoring at all? Here's the code:

        using (var sqlConnection = new SqlConnection(Lfepa.Itrs.Framework.Configuration.ConnectionString))
        {
            sqlConnection.Open();
            var dataTable = new DataTable("Settings");
            var selectCommand = new SqlCommand(Lfepa.Itrs.Data.Database.Commands.dbo.SettingsSelAll, sqlConnection);
            var reader = selectCommand.ExecuteReader();
            while (reader.Read())
            {
                switch (reader[SettingKeyColumnName].ToString().ToUpper())
                {
                    case DatabaseVersionKey:
                        DatabaseVersion = new Version(reader[SettingValueColumneName].ToString());
                        break;
                    case ApplicationVersionKey:
                        ApplicationVersion = new Version(reader[SettingValueColumneName].ToString());
                        break;
                    default:
                        break;
                }
            }
            if (DatabaseVersion == null)
                throw new ApplicationException("Could not load Database Version Setting from the database.");
            if (ApplicationVersion == null)
                throw new ApplicationException("Could not load Application Version Setting from the database.");
        }
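
    For what it's worth, one possible refactoring in the direction the poster describes - replacing the switch with a lookup table and stopping as soon as both values are found. It reuses the names from the snippet above and is only a sketch, not a drop-in replacement:

        // Map each setting key to the action that stores its value.
        var setters = new Dictionary<string, Action<string>>(StringComparer.OrdinalIgnoreCase)
        {
            { DatabaseVersionKey,    v => DatabaseVersion    = new Version(v) },
            { ApplicationVersionKey, v => ApplicationVersion = new Version(v) },
        };

        using (var reader = selectCommand.ExecuteReader())
        {
            // Stop reading once every expected setting has been consumed.
            while (setters.Count > 0 && reader.Read())
            {
                var key = reader[SettingKeyColumnName].ToString();
                Action<string> setter;
                if (setters.TryGetValue(key, out setter))
                {
                    setter(reader[SettingValueColumneName].ToString());
                    setters.Remove(key);
                }
            }
        }

        if (DatabaseVersion == null)
            throw new ApplicationException("Could not load Database Version setting from the database.");
        if (ApplicationVersion == null)
            throw new ApplicationException("Could not load Application Version setting from the database.");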

    Read the article

  • Using Kerberos authentication for SQL Server 2008

    - by vivek m
    I am trying to configure my SQL Server to use Kerberos authentication. My setup is like this: I have 2 virtual PCs in a Windows XP Pro SP3 host. Both VPCs are Windows Server 2003 R2. One VPC acts as the DC, DNS server and DHCP server, has Active Directory installed, and the SQL Server default instance is also running on this VPC. The second VPC is the domain member and acts as the SQL Server client machine. I configured the SPN on the SQL Server service account to get Kerberos working. On the client VPC it seems to be using Kerberos authentication (as desired):

        C:\Documents and Settings\administrator.SHAREPOINTSVC>sqlcmd -S vm-winsrvr2003
        1> select auth_scheme from sys.dm_exec_connections where session_id=@@spid
        2> go
        auth_scheme
        ----------------------------------------
        KERBEROS

        (1 rows affected)
        1>

    but on the server computer (where the SQL Server instance is actually running) it looks like it is still using NTLM authentication. This is not a remote instance; the SQL Server is local to this machine:

        C:\Documents and Settings\Administrator>sqlcmd
        1> select auth_scheme from sys.dm_exec_connections where session_id=@@spid
        2> go
        auth_scheme
        ----------------------------------------
        NTLM

        (1 rows affected)
        1>

    What can I do so that it uses Kerberos on the server computer as well? (Or is this something that I should not expect?)

    Read the article

  • NHibernate One to One Foreign Key ON DELETE CASCADE

    - by xll
    I need to implement a one-to-one association between Project and ProjectSettings using Fluent NHibernate:

        public class ProjectMap : ClassMap<Project>
        {
            public ProjectMap()
            {
                Id(x => x.Id)
                    .UniqueKey(MapUtils.Col<Project>(x => x.Id))
                    .GeneratedBy.HiLo("NHHiLoIdentity", "NextHiValue", "1000",
                        string.Format("[EntityName] = '[{0}]'", MapUtils.Table<Project>()))
                    .Not.Nullable();

                HasOne(x => x.ProjectSettings)
                    .PropertyRef(x => x.Project);
            }
        }

        public class ProjectSettingsMap : ClassMap<ProjectSettings>
        {
            public ProjectSettingsMap()
            {
                Id(x => x.Id)
                    .UniqueKey(MapUtils.Col<ProjectSettings>(x => x.Id))
                    .GeneratedBy.HiLo("NHHiLoIdentity", "NextHiValue", "1000",
                        string.Format("[EntityName] = '[{0}]'", MapUtils.Table<ProjectSettings>()));

                References(x => x.Project)
                    .Column(MapUtils.Ref<ProjectSettings, Project>(p => p.Project, p => p.Id))
                    .Unique()
                    .Not.Nullable();
            }
        }

    This results in the following SQL for ProjectSettings:

        CREATE TABLE ProjectSettings
        (
            Id bigint PRIMARY KEY NOT NULL,
            Project_Project_Id bigint NOT NULL UNIQUE,
            /* Foreign keys */
            FOREIGN KEY (Project_Project_Id) REFERENCES Project()
                ON DELETE NO ACTION ON UPDATE NO ACTION
        );

    What I am trying to achieve is to have ON DELETE CASCADE for the foreign key on Project_Project_Id, so that when a project is deleted through a SQL query its settings are deleted too. How can I achieve this?

    EDIT: I know about the Cascade.Delete() option, but it's not what I need. Is there any way to intercept the FK statement generation?
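
    If intercepting NHibernate's FK generation turns out to be awkward, one low-tech fallback is to fix the constraint up after schema export with plain SQL. This sketch assumes SQL Server syntax; the constraint name is illustrative and the existing auto-generated name would have to be looked up first:

        -- Recreate the foreign key with cascading deletes.
        ALTER TABLE ProjectSettings DROP CONSTRAINT FK_ProjectSettings_Project;

        ALTER TABLE ProjectSettings
            ADD CONSTRAINT FK_ProjectSettings_Project
            FOREIGN KEY (Project_Project_Id) REFERENCES Project (Id)
            ON DELETE CASCADE;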

    Read the article

  • Cross-domain data access in JavaScript

    - by vit
    We have an ASP.NET application hosted on our network and exposed to a specific client. This client wants to be able to import data from their own server into our application. The data is retrieved with an HTTP request and is CSV formatted. The problem is that they do not want to expose their server to our network and are requesting the import to be done on the client side (all clients are on the same network as their server). So, what needs to be done is:

    1. They request an import page from our server.
    2. The client script on the page issues a request to their server to get the CSV formatted data.
    3. The data is sent back to our application.

    This is not a challenge when both servers are on the same domain: a simple hidden iframe or something similar will do the trick, but here what I'm getting is a cross-domain "access denied" error. They also refuse to change the data format to return JSON or XML formatted data. What I have tried and learned so far:

    - Hidden iframe -- "access denied".
    - XMLHttpRequest -- behaviour depends on the browser security settings: it may work, may work while nagging the user with security warnings, or may not work at all.
    - Dynamic script tags -- would have worked if they could have returned data in JSON format.
    - IE client data binding -- the same "access denied" error.

    Is there anything else I can try before giving up and saying that it will not be possible without exposing their server to our application, changing their data format or changing their browser security settings? (A DNS trick is not an option, by the way.)
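
    One more avenue, if the client is willing to add response headers on their server (an assumption, since the post predates widespread CORS support): with an Access-Control-Allow-Origin header pointing at our application's origin, a plain XMLHttpRequest can read the CSV cross-domain without changing the data format. A rough browser-side sketch with illustrative URLs and a made-up helper:

        // Requires their server to send, e.g.:  Access-Control-Allow-Origin: https://our-app.example
        var xhr = new XMLHttpRequest();
        xhr.open('GET', 'http://their-server.example/export.csv', true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                postCsvToOurServer(xhr.responseText);   // assumed helper that forwards the CSV to our app
            }
        };
        xhr.send();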

    Read the article

  • Where to prompt for required file location at start of Win Forms application

    - by Murph
    I have an application that uses a file to store its data. I store the location of the file in the app settings, so I have two tests at startup:

    1. Do I have a setting for the file?
    2. Does the file (if I have a setting) exist?

    If I fail either test I want to prompt the user for the file location - the mechanics of that are not the problem; I can read and write the app settings, fire off dialogs and otherwise request the data. If the user refuses to choose a file (or at least a file location) I want to exit the app. My problem is where to do this, i.e. at what point in the flow of code. In an ideal world you start the app, show a splash screen, load the main form and run from there... I'm looking for a general pattern that allows me to slot the test for parameters into the right place so that I can prompt the user for whatever is missing (allowing that I have to worry about the fact that my splash screen is currently topmost for my app). I appreciate that this is a bit vague, so I will update it with code as we go along.
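
    One common place for this kind of check is Main(), before the main form (or splash screen) is ever constructed, so nothing is fighting to stay topmost yet. A sketch under the assumption of a standard WinForms Program.cs and a settings property named DataFilePath (both names are illustrative):

        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);

            string path = Properties.Settings.Default.DataFilePath;   // assumed setting name
            while (string.IsNullOrEmpty(path) || !System.IO.File.Exists(path))
            {
                using (var dialog = new OpenFileDialog { Title = "Select the data file" })
                {
                    if (dialog.ShowDialog() != DialogResult.OK)
                        return;                                       // user refused: exit before any form is shown
                    path = dialog.FileName;
                }
            }

            Properties.Settings.Default.DataFilePath = path;
            Properties.Settings.Default.Save();

            Application.Run(new MainForm());                          // splash/main form only after the checks pass
        }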

    Read the article

  • "Subforms" associated with tree view in VB

    - by knomdeguerre
    I am using VB Express 2008 to demonstrate my ideas for an improved UI for an existing product to my colleagues at work. The current UI has a certain page with ten tabs, allowing the user to define up to ten "things". The available choices for each of the ten "things" are all the same. On each of the ten tabs there is a checkbox to enable that definition. Generally, a user will never use more than 5 or 6 unique definitions; the rest remain disabled. So far, my prototype has a tree view control with one branch to contain this list of definitions, plus Add and Delete buttons. My idea is: there is one sub-branch to start with (corresponding to the first tab in the current UI); if the user wants additional definitions, they click Add and further branches are added to the tree view, up to a maximum of ten. I think I should be able to create a "class" that has a sub-UI (like a sub-form in Access) along with behaviour code, that can be instantiated with each press of the Add button; each instantiation's settings can be set independently and is displayed in the main UI form (in a panel or frame) when selected in the tree view. For example, suppose the user clicks Add to make a total of three definitions: the tree view now has three sub-branches, each of which presents the same sub-UI with settings specific to the selected sub-branch. I'm sure it's possible, but I'm not sure how to do it. I know a comprehensive "answer" might be complicated and long, but I may just need some quick hints to get underway - don't be shy! Thanks in advance!
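
    A rough VB sketch of the "sub-form per branch" idea: each Add click creates a new instance of a UserControl holding the shared sub-UI and parks it in the tree node's Tag, and AfterSelect swaps the selected instance into a panel. DefinitionControl, treeDefinitions, panelEditor and btnAdd are made-up names for illustration:

        Private Sub btnAdd_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnAdd.Click
            Dim parent As TreeNode = treeDefinitions.Nodes(0)
            If parent.Nodes.Count >= 10 Then Return                ' enforce the maximum of ten definitions

            Dim editor As New DefinitionControl()                  ' assumed UserControl containing the sub-UI
            Dim node As New TreeNode("Definition " & (parent.Nodes.Count + 1))
            node.Tag = editor                                      ' each branch keeps its own settings instance
            parent.Nodes.Add(node)
            treeDefinitions.SelectedNode = node
        End Sub

        Private Sub treeDefinitions_AfterSelect(ByVal sender As Object, ByVal e As TreeViewEventArgs) _
                Handles treeDefinitions.AfterSelect
            Dim editor As DefinitionControl = TryCast(e.Node.Tag, DefinitionControl)
            If editor Is Nothing Then Return

            panelEditor.Controls.Clear()                           ' show the selected definition's sub-UI
            editor.Dock = DockStyle.Fill
            panelEditor.Controls.Add(editor)
        End Sub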

    Read the article

  • How do I compile and build the taf2-curb Ruby gem on Windows XP with MinGW?

    - by Laran Evans
    How do I compile and build the taf2-curb Ruby gem on Windows XP with MinGW? I tried this, but I'm kind of fishing, unsuccessfully:

        C:\Documents and Settings\Me>gem install taf2-curb -- --with-curl-include=C:/curl-7.19.5-devel-mingw32/include --with-curl-dir=C:/curl-7.19.5 --with-curl-lib=C:/curl-7.19.5-devel-mingw32/lib --prefix=C:/MinGW --with-curllib
        Bulk updating Gem source index for: http://gems.rubyforge.org
        Updating metadata for 73 gems from http://gems.rubyonrails.org
        ......................................................................... complete
        Bulk updating Gem source index for: http://gems.github.com
        Building native extensions.  This could take a while...
        ERROR:  Error installing taf2-curb:
                ERROR: Failed to build gem native extension.

        C:/Ruby/bin/ruby.exe extconf.rb install taf2-curb -- --with-curl-include=C:/curl-7.19.5-devel-mingw32/include --with-curl-dir=C:/curl-7.19.5 --with-curl-lib=C:/curl-7.19.5-devel-mingw32/lib --prefix=C:/MinGW --with-curllib
        checking for curl-config... no
        checking for main() in true.lib... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
                --with-opt-dir
                --without-opt-dir
                --with-opt-include
                --without-opt-include=${opt-dir}/include
                --with-opt-lib
                --without-opt-lib=${opt-dir}/lib
                --with-make-prog
                --srcdir=.
                --curdir
                --ruby=C:/Ruby/bin/ruby
                --with-curl-dir
                --with-curl-include=${curl-dir}/include
                --with-curl-lib=${curl-dir}/lib
                --with-curllib
        extconf.rb:9: Can't find libcurl or curl/curl.h (RuntimeError)
        Try passing --with-curl-dir or --with-curl-lib and --with-curl-include options to extconf.

        Gem files will remain installed in C:/Ruby/lib/ruby/gems/1.8/gems/taf2-curb-0.4.8.0 for inspection.
        Results logged to C:/Ruby/lib/ruby/gems/1.8/gems/taf2-curb-0.4.8.0/ext/gem_make.out

        C:\Documents and Settings\Me>

    I've installed curl-7.19.5 and curl-7.19.5-devel-mingw from this URL: http://curl.haxx.se/download.html

    Help! And thanks!

    Read the article

  • Travelling Visual Studio developers

    - by Graphain
    Hi, I am about to travel to Europe (I'm Australian, but I imagine this is a similar circumstance for US users and simply flipped for European users). However, there is a slim possibility I will need to do some Visual Studio work while I'm travelling. As I see it I have three options:

    1. Leave a desktop PC on at home and access it remotely via net cafes.
    2. Carry a laptop with me on the trip and upload files as required using public wifi.
    3. Option 2, but instead buy a cheap, light netbook that is miraculously capable of running VS.

    Does anyone have any experience or advice to shed light on any of these options? For reference, this existing post suggests that using VS remotely over short distances is okay, but over longer distances it could be more problematic. I've used VS via RDP to a US server before and it was pretty laggy, but for small changes I could get by. Concerns I have that you may have some experience with:

    - Weight of luggage (ideally I'd like to travel light).
    - Security of the laptop (I imagine it'll be too heavy to carry around all the time, so I'd have to leave it at the hotel/hostel etc. and hope for the best).
    - Security of data (don't want someone stealing RDP access to my home PC).
    - Security of FTP (don't want someone stealing FTP passwords over wireless).

    Read the article

  • Why do I get this error in CXF?

    - by Milan
    I want to make a dynamic web service invoker in JSF with CXF, but when I run this simple code I get an error. The code:

        JaxWsDynamicClientFactory dcf = JaxWsDynamicClientFactory.newInstance();
        Client client = dcf.createClient("http://ws.strikeiron.com/IPLookup2?wsdl");

    The error:

        No Factories configured for this Application. This happens if the faces-initialization
        does not work at all - make sure that you properly include all configuration settings
        necessary for a basic faces application and that all the necessary libs are included.
        Also check the logging output of your web application and your container for any
        exceptions! If you did that and find nothing, the mistake might be due to the fact
        that you use some special web-containers which do not support registering
        context-listeners via TLD files and a context listener is not setup in your web.xml.
        A typical config looks like this: org.apache.myfaces.webapp.StartupServletContextListener

        Caused by: java.lang.IllegalStateException - No Factories configured for this
        Application. [...the same message repeats...]

    Any idea how to solve the problem?
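
    The quoted error is MyFaces complaining that its startup listener never ran (the search-results scrape has stripped the XML out of the message). If the container really does need it declared explicitly, the usual WEB-INF/web.xml entry the message is alluding to looks like this:

        <listener>
            <listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class>
        </listener>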

    Read the article

  • Qt Creator CONFIG (debug, release) switches does NOT work

    - by killdaclick
    Problem: CONFIG(debug,debug|release) and CONFIG(release,debug|release) are both always evaluated, whichever of debug or release is chosen in Qt Creator 2.8.1 for Linux.

    My configuration in Qt Creator (stock defaults for a new project):

        Projects -> Build Settings -> Debug
            Build Steps: qmake build configuration: Debug
            Effective qmake call: qmake2 proj.pro -r -spec linux-gnueabi-oe-g++ CONFIG+=debug

        Projects -> Build Settings -> Release
            Build Steps: qmake build configuration: Release
            Effective qmake call: qmake2 proj.pro -r -spec linux-gnueabi-oe-g++

    My configuration in proj.pro:

        message(Variable CONFIG:)
        message($$CONFIG)
        CONFIG(debug,debug|release) {
            message(Debug build)
        }
        CONFIG(release,debug|release) {
            message(Release build)
        }

    Output on the console for Debug:

        Project MESSAGE: Variable CONFIG:
        Project MESSAGE: lex yacc warn_on debug uic resources warn_on release incremental link_prl no_mocdepend release stl qt_no_framework debug console
        Project MESSAGE: Debug build
        Project MESSAGE: Release build

    Output on the console for Release:

        Project MESSAGE: Variable CONFIG:
        Project MESSAGE: lex yacc warn_on uic resources warn_on release incremental link_prl no_mocdepend release stl qt_no_framework console
        Project MESSAGE: Debug build
        Project MESSAGE: Release build

    Under Windows 7 I didn't experience any problem with this .pro configuration and it worked fine. I got desperate and modified the .pro file:

        CONFIG = test
        message(Variable CONFIG:)
        message($$CONFIG)
        CONFIG(debug,debug|release) {
            message(Debug build)
        }
        CONFIG(release,debug|release) {
            message(Release build)
        }

    and I was surprised by the output:

        Project MESSAGE: Variable CONFIG:
        Project MESSAGE: test
        Project MESSAGE: Debug build
        Project MESSAGE: Release build

    So even if I completely clear the CONFIG variable it still sees a debug and a release configuration. What am I doing wrong?
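
    This doesn't explain why both scopes fire on that setup, but as a sanity check the documented mutually exclusive form uses an else branch, which guarantees that at most one of the two messages can print per qmake pass:

        CONFIG(debug, debug|release) {
            message(Debug build)
        } else {
            message(Release build)
        }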

    Read the article

  • Uploading an iPhone app binary: "The signature was invalid", again, again and again...

    - by user338386
    Hello! I'm going crazy! I'm trying to upload the binary of my first application, but I always get the same error: "The binary you uploaded was invalid. The signature was invalid, or it was not signed with an Apple submission certificate." I did everything, EVERYTHING! I created the certificate request, used it for both the developer and distribution certificates, created the provisioning profile (12 times!), always cleaning my keychain and my Xcode setup by deleting the old certificates and profiles. I rebooted the machine, restarted Xcode, the log is correct, but... I can't upload my app! I checked whether my iPhone is connected (I tried with the iPhone disconnected too). I checked the certificate in both my project settings' "Distribution" configuration (a duplicate of the "Release" configuration) and in my target settings. Reveal in Finder, compress the app, send the zip... I tried with Application Loader and with iTunes Connect online, but nothing! NOTHING! I've spent 8 hours on this and I still can't get my app uploaded! I'm really going crazy! Can anyone help me, pleeease? Thx!

    Read the article

  • php Warning: strtotime() Error

    - by Kavithanbabu
    I have moved my Joomla and WordPress files from an old server to a new server. On the front end and the admin side everything works without errors, but in the database section (phpMyAdmin) it shows warning messages like this:

        Warning: strtotime() [function.strtotime]: It is not safe to rely on the system's timezone settings.
        You are *required* to use the date.timezone setting or the date_default_timezone_set() function.
        In case you used any of those methods and you are still getting this warning, you most likely
        misspelled the timezone identifier. We selected 'Asia/Calcutta' for 'IST/5.0/no DST' instead
        in /usr/share/phpmyadmin/libraries/db_info.inc.php on line 88

        Warning: strftime() [function.strftime]: It is not safe to rely on the system's timezone settings.
        You are *required* to use the date.timezone setting or the date_default_timezone_set() function.
        In case you used any of those methods and you are still getting this warning, you most likely
        misspelled the timezone identifier. We selected 'Asia/Calcutta' for 'IST/5.0/no DST' instead
        in /usr/share/phpmyadmin/libraries/common.lib.php on line 1483

    Can you please suggest how to hide these warning messages? Thanks in advance.
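
    The warning text itself names the fix: declare a default time zone, either in php.ini (date.timezone = "Asia/Calcutta") or at runtime before any date function runs. A minimal runtime example, e.g. near the top of a bootstrap or configuration file:

        <?php
        // Silences the strtotime()/strftime() warnings by declaring the time zone explicitly.
        date_default_timezone_set('Asia/Calcutta');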

    Read the article

  • Looping through Markers with Google Maps API v3 Problem

    - by Oscar Godson
    I'm not sure why this isn't working. I don't have any errors, but what happens is that no matter which marker I click on, it acts as if I clicked the 3rd one (which is the last one out of 4 markers; the array starts at 0, obviously) and shows the number "3", which is correct for that marker - but I'm not clicking that one. Here is most of my code, minus the array of [place-name, coordinates] (var locations, which you will see):

        function initialize() {
            var latlng = new google.maps.LatLng(45.522015, -122.683811);
            var settings = {
                zoom: 15,
                center: latlng,
                disableDefaultUI: true,
                mapTypeId: google.maps.MapTypeId.SATELLITE
            };
            var map = new google.maps.Map(document.getElementById("map_canvas"), settings);
            var infowindow = new Array();
            var marker = new Array();
            for (x in locations) {
                console.log(x);
                infowindow[x] = new google.maps.InfoWindow({content: x});
                marker[x] = new google.maps.Marker({title: locations[x][0], map: map, position: locations[x][1]});
                google.maps.event.addListener(marker[x], 'click', function() { infowindow[x].open(map, marker[x]); });
            }
        }
        initialize()

    The console.log output is (it's correct, and what I expect):

        0
        1
        2
        3

    So, any ideas?
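
    What the output describes is the classic closure-over-the-loop-variable problem: every click listener shares the same x, which has reached the last key by the time any marker is clicked. One common fix is to capture each marker/infowindow pair per iteration, for example:

        for (var x in locations) {
            infowindow[x] = new google.maps.InfoWindow({ content: x });
            marker[x] = new google.maps.Marker({
                title: locations[x][0],
                map: map,
                position: locations[x][1]
            });

            // Immediately-invoked function captures this iteration's marker and infowindow.
            (function (m, iw) {
                google.maps.event.addListener(m, 'click', function () {
                    iw.open(map, m);
                });
            })(marker[x], infowindow[x]);
        }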

    Read the article

  • DataReceived SerialPort event stops being raised after a few seconds

    - by Mario
    Hi, I was hoping someone could help me out with this problem. I have a system (VB .NET) where I must read a person's weight (RS232 sluice) and id (fingerprint - 2 biometric readers, RS232) and compare them against a database. I have 3 serial ports in my app: one for the sluice and the other 2 to receive the id from the fingerprint readers, both of which call the same sub to get the id from the reader. I've been testing with just one reader and it seemed to work fine; I got data from the DataReceived event and joined it together to get the id. The problem comes at this moment: I put a finger on the reader, it sends the id, and if it's OK a message is shown; otherwise the id is written to a textbox. But in between reads, if I let 5 or 10 seconds pass without putting a finger on the reader, it seems like I get no data at all any more - the DataReceived event never gets raised - yet if I keep putting a finger on constantly it works pretty well. This seems really weird to me. I was thinking of some possibilities:

    - Maybe the port gets closed somehow after some time? I never call the Close() method.
    - The fact that both DataReceived event handlers call the same method and delegate.
    - Maybe the connection settings are missing something? I tested with HyperTerminal and the port keeps receiving data even after time without activity, and I use the same configuration in my application; maybe I need to change more settings, like DtrEnable and RtsEnable?

    Please, I need some help with this issue; it's for access control, so it needs to be running 24/7. Thanks in advance!
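
    One thing worth ruling out is the DataReceived handler itself blocking or throwing on its background thread, since anything slow or UI-bound inside it can make the port appear to stop raising events. A defensive sketch (ReaderPort, buffer and ProcessId are illustrative names) that reads whatever is available and hands completed ids to the UI thread without blocking:

        Private Sub ReaderPort_DataReceived(ByVal sender As Object, _
                ByVal e As SerialDataReceivedEventArgs) Handles ReaderPort.DataReceived
            Try
                buffer &= ReaderPort.ReadExisting()          ' assumed module-level String accumulator
                If buffer.EndsWith(vbCr) Then                ' assumed terminator sent by the reader
                    Me.BeginInvoke(New Action(Of String)(AddressOf ProcessId), buffer)
                    buffer = String.Empty
                End If
            Catch ex As Exception
                Trace.WriteLine("DataReceived error: " & ex.Message)   ' log, don't let the callback die silently
            End Try
        End Sub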

    Read the article

  • Adobe Air update error "file version doesn't match" but it's the same!...

    - by baroquedub
    I'm using Claus Wahlers' AIR Remote Updater class (codeazur.com.br/lab/airremoteupdater/). All works fine and an update is triggered if the remote version is newer. The newer file is downloaded and the update starts. However, I then get an "an error has occurred" message: "This application cannot be installed because this installer has been mis-configured." (The same file updates without errors when run manually - "Would you like to replace the currently installed version?" - and choosing 'replace' works fine.) I have enabled AIR Application Installer logging and I can see that both the app id and the pub id match; a mismatch there seems to be a common cause of this problem (forums.adobe.com/thread/243421?tstart=60). The error given in the log file is as follows:

        AIR file version doesn't match
        Requested version: ; AIR file version: 1.0.2

    But if I unzip the new app file and look at META-INF\AIR\application.xml, the version designator shows

        <version>1.0.2</version>

    as requested! The log file also shows me where the newer file is being downloaded and unpacked:

        Unpackaging to C:\Documents and Settings\myusername\Local Settings\Temp\fla893D.tmp

    If I look at the application.xml file in that directory, the version designator also shows

        <version>1.0.2</version>

    I don't get it! The log tells me that the requested file version doesn't match, but it's exactly the same as what's shown in the version designator of the downloaded update package... This is driving me crazy. Can anyone help?
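
    One detail in that log stands out: the requested version is blank ("Requested version: ;"), which suggests that whatever ends up invoking the update step is being handed an empty version string rather than "1.0.2". For reference, the runtime's own updater API takes the version explicitly, and it must match the <version> element in the downloaded AIR file (the path below is illustrative):

        import flash.desktop.Updater;
        import flash.filesystem.File;

        var updater:Updater = new Updater();
        var airFile:File = File.applicationStorageDirectory.resolvePath("updates/MyApp.air"); // illustrative path
        updater.update(airFile, "1.0.2");   // second argument must equal <version> in application.xml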

    Read the article

  • How to find out where a thread lock happened?

    - by SchlaWiener
    One of our company's Windows Forms applications had a strange problem for several months. The app worked very reliably for most of our customers, but on some PCs (mostly ones with a wireless LAN connection) the app sometimes just stopped responding (you click on the UI and Windows asks you whether to wait or kill the app). I wasn't able to track down the problem for a long time, but now I've figured out what happened. The app had this line of code:

        ' don't blame me for this. Wasn't my code :D
        Control.CheckForIllegalCrossThreadCalls = False

    and used some background threads to modify the controls. Now I found a way to reproduce the application-stops-responding bug on my dev machine and tracked it down to a line where I actually used Invoke() to run a task on the main thread:

        Me.Invoke(MyDelegate, arg1, arg2)

    Obviously there was a thread lock somewhere. After removing the Control.CheckForIllegalCrossThreadCalls = False statement and refactoring the whole program to use Invoke() whenever a control is modified from a background thread, the problem is (hopefully) gone. However, I am wondering if there is a way to find such bugs without debugging every line of code (even if I break into the debugger after the app stops responding, I can't tell what happened last, because the IDE doesn't jump to the Invoke() statement). In other words: if my app hangs, how can I figure out which line of code was executed last? Ideally even on the customer's PC. I know VS2010 offers a backwards-debugging feature; maybe that would be a solution, but currently I am using VS2008.
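
    Not an answer to the "which line ran last" question, but on the underlying hang: Invoke() blocks the background thread until the UI thread services the call, so if the UI thread is itself waiting on that background thread the two deadlock. A common defensive pattern is to check InvokeRequired and use BeginInvoke when the caller doesn't need a return value (the names here are illustrative):

        Private Sub UpdateStatus(ByVal text As String)
            If Me.InvokeRequired Then
                ' BeginInvoke queues the call and returns immediately, avoiding a circular wait.
                Me.BeginInvoke(New Action(Of String)(AddressOf UpdateStatus), text)
                Return
            End If
            lblStatus.Text = text   ' assumed label on the form
        End Sub

    As for finding where it stopped: attaching the debugger and using Debug > Break All, then inspecting each thread's call stack in the Threads window, usually shows both sides of such a deadlock.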

    Read the article

  • Tips for Using Multiple Development Systems

    - by Tim Lytle
    When I travel, I don't pack up the desktop I use in the office and take it with me. Maybe I should, but I don't. However, since I'm a contract programmer I like to be able to work wherever I am: I'm mostly thinking of web development here. Version control goes a long way toward keeping sane while working on multiple projects on multiple systems (two or three computers); however, there are still the issues of:

    - IDE settings - different display sizes mean the IDE settings can't be completely synced, if at all.
    - Database - if the database is 'external' (even if it's running on the same system, it's not in version control), how do you maintain the needed syncs of structure?
    - Development stack - some projects need non-standard extensions, libraries, etc. installed.

    That's just an overview of some of the hassle involved with developing on multiple systems. I'll probably end up asking some specific questions, but I thought CW-style tips might reveal some things I wouldn't even think to ask about.

    Update: I guess this should also cover tips to make upgrading or replacing your development system easier (something I've just done). So, one tip per answer please, so the 'top' tips are easy to find. How do you make it easier to develop on multiple systems, or to transfer work after upgrading/replacing a development system?

    Read the article

  • VS2008, OpenCV 2.1: compile error in cxcore.hpp

    - by Long Gu
    Hi, gurus: I installed OpenCV 2.1 and made a new project (proj_A) using VS2008, which I use for my computer vision tasks; it works fine. I copied an old project (proj_B, also made using VS2008) from another PC and compiled it with the ".h" and ".lib" files copied from OpenCV 1.0 (which I did not install onto my PC); it compiles fine. I then redirected the ".h" and ".lib" paths in proj_B to the OpenCV 2.1 folders instead, compiled proj_B, and got these errors from cxcore.hpp:

        class CV_EXPORTS RNG
        {
        public:
            enum { A=4164903690U, UNIFORM=0, NORMAL=1 };   // errors here, line 936

    The errors are:

        c:\opencv2.1\include\opencv\cxcore.hpp(936) : error C2143: syntax error : missing '}' before 'constant'
        c:\opencv2.1\include\opencv\cxcore.hpp(936) : error C2059: syntax error : 'constant'
        c:\opencv2.1\include\opencv\cxcore.hpp(936) : error C2143: syntax error : missing ';' before '}'
        c:\opencv2.1\include\opencv\cxcore.hpp(936) : error C2238: unexpected token(s) preceding ';'

    (There are 400+ similar errors, but I believe the answer should be the same, so I only list one set here.) I compared the settings for proj_A and proj_B and made them identical, with no improvement. proj_A builds fine; proj_B refuses to compile. May I know what's wrong? This is urgent - I need to get it solved ASAP! Thanks a lot!

    Read the article

  • Firefox proxy dilemma

    - by Mike L.
    Any idea why, when using the system proxy settings in Firefox, it cannot accept a proxy in the form user:password@proxyserver:port? IE will allow and connect to a proxy in this format. Not only does Firefox not work, it does not prompt for the password, nor does it attempt to make a connection to the proxy - I basically get a "proxy server not found" error. Does anybody know a way around this? I am working on a proxy-switching program for IE & Firefox, and I would like to use system-wide proxy settings. If I just store the server:port combination, Firefox prompts for the password, as does IE. Then the credentials can be cached and it will not ask again. Maybe my only option is to programmatically cache the user/pass? Does anybody know a way to do this? I am pretty sure IE stores them as HTTP basic authentication passwords and I can add them with AddCredential. After saving a password for a proxy in Firefox, it shows up in saved passwords in a format like "moz-proxy://server:port"; does anybody know how to programmatically add a saved password to Firefox? Thanks

    Read the article

  • How to accept the confirmation dialog automatically in PowerShell for Outlook

    - by user2919845
    I have a script that exports attachments from email in Outlook (see below). It works correctly on one PC, but on another PC there is a problem: Outlook shows a confirmation dialog and wants an answer (Permit / Deny / Help). If I click Permit or Deny manually it works correctly, but I want to automate it. Can you give me some suggestions on how to do this in PowerShell? I have tried to configure Outlook not to show this message, but I didn't succeed. My script:

        # <-- Script --------->
        # script works with the Outlook Inbox folder
        # check if emails have attachments with ".txt" and save those attachments to $filepath

        # path for exported files - attachments
        $filepath = "d:\Exported_files\"

        # create the Outlook object
        $o = New-Object -comobject outlook.application
        $n = $o.GetNamespace("MAPI")

        # $f - the Inbox folder (6 - Inbox)
        $f = $n.GetDefaultFolder(6)   # 6 - Inbox

        # select the newest 10 emails, and from those only the ones with attachments
        $f.Items | select -last 10 | Where {$_.Attachments} | foreach {
            # process only unread mail
            if ($_.unread -eq $True) {
                # mark processed mail as read, so it is not processed again the next day
                $_.unread = $False
                $SenderName = $_.SenderName
                Write-Host "Email from: ", $SenderName
                # process all attachments
                $_.attachments | foreach {
                    $a = $_.filename
                    If ($a.Contains(".txt")) {
                        Write-Host $SenderName, " ", $a
                        # copy *.txt attachments to folder $filepath
                        $_.saveasfile((Join-Path $filepath "$a"))
                    }
                }
            }
        }
        Write-Host "Finish"
        # <------ End Script ---------------------------------->

    Read the article

  • Listening socket

    - by hoodoos
    I've got a strange problem that I've never actually experienced before. Here is the code of the server (the client is Firefox in this case). The way I create the socket:

        _Socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        _Socket.Bind(new IPEndPoint(Settings.IP, Settings.Port));
        _Socket.Listen(1000);
        _Socket.Blocking = false;

    The way I accept connections:

        while (_IsWorking)
        {
            if (listener.Socket.Poll(-1, SelectMode.SelectRead))
            {
                Socket clientSocket = listener.Socket.Accept();
                clientSocket.Blocking = false;
                clientSocket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
            }
        }

    So I'm expecting it to hang on listener.Socket.Poll until a new connection comes in, but after the first one arrives it hangs on Poll forever. I tried polling constantly with a smaller timeout, say 10 microseconds, but then it never enters SelectMode.SelectRead. I guess this may somehow be related to client socket reuse? Maybe I don't shut down the client socket properly and the client (Firefox) decides to reuse an old socket? I disconnect the client socket this way:

        Context.Socket.Shutdown(SocketShutdown.Both);   // Context is just a wrapper around the socket
        Context.Socket.Close();

    What might be causing this problem?

    Read the article

  • jQuery: bind generated elements

    - by superUntitled
    Hello, and thank you for taking the time to look at this. I am trying to code my very first jQuery plugin and have run into my first problem. The plugin is called like this:

        <div id="kneel"></div>
        <script type="text/javascript">
            $("#kneel").zod(1, { });
        </script>

    It takes the first option (an integer) and returns HTML content that is dynamically generated by PHP. The generated content needs to be bound by a variety of functions (such as disabling form buttons, and click events that return Ajax data). The plugin looks like this (I have included the whole plugin in case that matters):

        (function( $ ){
            $.fn.zod = function(id, options) {
                var settings = {
                    'next_button_text': 'Next',
                    'submit_button_text': 'Submit'
                };
                return this.each(function() {
                    if (options) {
                        $.extend(settings, options);
                    }
                    // variables
                    var obj = $(this);

                    /* these functions contain html elements that are
                       generated by the get() function below */

                    // disable some buttons
                    $('div.bario').children('.button').attr('disabled', true);

                    // once an option is selected, enable the button
                    $('input[type="radio"]').live('click', function(e) {
                        $('div.bario').children('.button').attr('disabled', false);
                    })

                    // when a button is clicked, return some data
                    $('.button').bind('click', function(e) {
                        e.preventDefault();
                        $.getJSON('/returnSomeData.php', function(data) {
                            $('.text').html('<p>Hello: ' + data + '</p>');
                        });

                        // generate content, this is the content that needs binding...
                        $.get('http://example.com/script.php?id=' + id, function(data) {
                            $(obj).html(data);
                        });
                    });
                });
            };
        })( jQuery );

    The problem I am having is that the functions created to run on the generated content are not binding to the generated content. How do I bind the content created by the get() function?
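
    The click/disable calls above run before the HTML from $.get() exists, so there is nothing for them to bind to (only the .live() call survives content replacement). Two common ways around this, sketched with the selectors from the post - either wire things up inside the $.get callback, or delegate the events:

        // Option 1: bind only after the generated markup is in the DOM.
        $.get('http://example.com/script.php?id=' + id, function (data) {
            $(obj).html(data);

            $(obj).find('div.bario .button').attr('disabled', true);
            $(obj).find('.button').bind('click', function (e) {
                e.preventDefault();
                // ...ajax calls as in the original plugin...
            });
        });

        // Option 2: delegate from a static ancestor so handlers survive replacement
        // (.live() in jQuery of that era; .on() from jQuery 1.7 onwards).
        $(document).on('click', '#kneel .button', function (e) {
            e.preventDefault();
        });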

    Read the article

  • Using capistrano to deploy from different git branches

    - by Toms Mikoss
    I am using Capistrano to deploy a RoR application. The codebase is in a git repository, and branching is widely used in development. Capistrano uses the deploy.rb file for its settings, one of them being the branch to deploy from. My problem is this: let's say I create a new branch A from master. The deploy file will reference the master branch, so I edit it so that A can be deployed to the test environment. I finish working on the feature and merge branch A into master. Since the deploy.rb file from A is fresher, it gets merged in, and now the deploy.rb in the master branch references A. Time to edit it again. That's a lot of seemingly unnecessary manual editing - the parameter should always match the current branch name. On top of that, it is easy to forget to edit the settings each and every time. What would be the best way to automate this process?

    Edit: it turns out someone has already done exactly what I needed.
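
    One way to avoid the hand-editing is to resolve the branch at deploy time instead of committing it, for example by defaulting to whatever branch is currently checked out and allowing an environment-variable override. A sketch for a Capistrano 2-style deploy.rb:

        # deploy.rb - pick the branch dynamically rather than hard-coding it.
        current_branch = `git rev-parse --abbrev-ref HEAD`.strip

        set :branch, ENV['BRANCH'] || current_branch
        # `cap deploy`                deploys the branch you have checked out
        # `BRANCH=master cap deploy`  forces a specific branch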

    Read the article
