Search Results

Search found 19788 results on 792 pages for 'remote host'.


  • Is it possible to limit outside connections to a subdomain with .htaccess or similar?

    - by digidave0205
    I host a web application. This application serves static HTML pages that are refreshed at various intervals, some as often as every 30 seconds. At this time I have about 300 unique pages that are accessed via 300 unique subdomains. Some clients have at most 50 visitors to their page and it refreshes every 30 seconds, no problem. Other clients have 1000 or more visitors to their page, and those clients are killing my server.

    There was no predefined limit upon signup, but now I have to impose one to remain afloat financially. I would like to define a finite number of connections allowed for each individual subdomain in my hosting account; connections beyond that limit would either be rejected or redirected. I have access to .htaccess and php.ini. Is something of this nature possible? (I have a dedicated/managed server at 1and1.)
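
    For reference, .htaccess on its own cannot count concurrent connections per subdomain; that usually takes an Apache module such as mod_qos or mod_bw, or a small gate in PHP that runs before each page is served. Below is a minimal, hypothetical sketch of the PHP-level idea only: it counts requests per subdomain over a short window and answers 503 once a cap is hit. The file location, window length, and limit are assumptions, not values from the question.

        <?php
        // Hypothetical per-subdomain rate gate, included at the top of each served page.
        // Counts hits for this subdomain in a rolling window using a flat file.
        $subdomain   = $_SERVER['HTTP_HOST'];                 // e.g. client1.example.com
        $window      = 30;                                    // seconds (assumed)
        $limit       = 100;                                   // max requests per window (assumed)
        $counterFile = sys_get_temp_dir() . '/hits_' . md5($subdomain) . '.txt';

        $now  = time();
        $hits = array();

        $fp = fopen($counterFile, 'c+');
        if ($fp && flock($fp, LOCK_EX)) {
            $raw = stream_get_contents($fp);
            if ($raw !== '' && $raw !== false) {
                // Keep only timestamps that are still inside the window.
                foreach (explode("\n", trim($raw)) as $ts) {
                    if ($now - (int)$ts < $window) {
                        $hits[] = (int)$ts;
                    }
                }
            }
            $hits[] = $now;

            ftruncate($fp, 0);
            rewind($fp);
            fwrite($fp, implode("\n", $hits));
            flock($fp, LOCK_UN);
            fclose($fp);
        }

        if (count($hits) > $limit) {
            header('HTTP/1.1 503 Service Unavailable');
            header('Retry-After: ' . $window);
            exit('Too many requests for this page.');
        }
        ?>

    A module-level limiter is gentler on the server because it rejects the connection before PHP ever runs; the sketch above is only the quick, code-only variant.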

    Read the article

  • Git Svn dcommit error - restart the commit

    - by Rob Wilkerson
    Last week, I made a number of changes to my local branch before leaving town for the weekend. This morning I wanted to dcommit all of those changes to the company's Svn repository, but I get a merge conflict in one file:

        Merge conflict during commit: Your file or directory 'build.properties.sample' is probably
        out-of-date: The version resource does not correspond to the resource within the transaction.
        Either the requested version resource is out of date (needs to be updated), or the requested
        version resource is newer than the transaction root (restart the commit).

    I'm not sure exactly why I'm getting this, but before attempting to dcommit, I did a git svn rebase. That "overwrote" my commits. To recover from that, I did a git reset --hard HEAD@{1}. Now my working copy seems to be where I expect it to be, but I have no idea how to get past the merge conflict; there's not actually any conflict to resolve that I can find. Any thoughts would be appreciated.

    EDIT: Just wanted to specify that I am working locally. I have a local branch for the trunk that references svn/trunk (the remote branch). All of my work was done on the local trunk:

        $ git branch
          maint-1.0.x
          master
        * trunk

        $ git branch -r
          svn/maintenance/my-project-1.0.0
          svn/trunk

    Similarly, git log currently shows 10 commits on my local trunk since the last commit with a Svn ID. Hopefully that answers a few questions. Thanks again.

    Read the article

  • Recreating http request with cURL incl. files

    - by Toby
    I consistently get the error 'failed creating formpost data' from the code below. The same thing works perfectly on my local testing server, but on my shared host it throws the error. The sample part is just to simulate building the array with both files and non-file data. Essentially all I'm trying to do here is redirect the same HTTP request to another server, but I'm running into so many troubles.

        $count = count($_FILES['photographs']['tmp_name']);
        $file_posts = array('samplesample' => 'ladeda');
        for ($i = 0; $i < $count; $i++) {
            if (!empty($_FILES['photographs']['name'][$i])) {
                $fn = genRandomString();
                $file_posts[$fn] = "@" . $_FILES['photographs']['tmp_name'][$i];
            }
        }
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "http://myurl/wp-content/plugins/autol/rec.php");
        curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)");
        curl_setopt($ch, CURLOPT_HEADER, TRUE);
        curl_setopt($ch, CURLOPT_POST, TRUE);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $file_posts);
        curl_exec($ch);
        print curl_error($ch);
        curl_close($ch);
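
    For what it's worth, 'failed creating formpost data' usually means cURL could not read one of the @-prefixed files (wrong path, an open_basedir restriction, or the temp file already gone). A hedged sketch that validates each upload first, and uses CURLFile where the PHP version has it (5.5+), might look like the following; the target URL is the one from the question, and the field names are made up for illustration.

        <?php
        // Sketch only: validate each uploaded file before handing it to cURL.
        $file_posts = array('samplesample' => 'ladeda');

        $count = count($_FILES['photographs']['tmp_name']);
        for ($i = 0; $i < $count; $i++) {
            $tmp  = $_FILES['photographs']['tmp_name'][$i];
            $name = $_FILES['photographs']['name'][$i];

            // Skip anything PHP did not actually receive.
            if (empty($name) || !is_uploaded_file($tmp)) {
                continue;
            }

            if (class_exists('CURLFile')) {
                // PHP 5.5+: the @-syntax is deprecated, use CURLFile instead.
                $file_posts['photo' . $i] = new CURLFile($tmp, null, $name);
            } else {
                // Older PHP: the @-syntax needs a readable absolute path.
                $file_posts['photo' . $i] = '@' . realpath($tmp);
            }
        }

        $ch = curl_init('http://myurl/wp-content/plugins/autol/rec.php');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $file_posts);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        if ($response === false) {
            echo 'Curl error: ' . curl_error($ch);
        }
        curl_close($ch);
        ?>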

    Read the article

  • Curl download image not working

    - by mark
    I would like to check whether a remote image is no older than 2 days and then download it. The image is never downloaded. What is wrong here?

        $ch = curl_init($file_source); // the file we are downloading
        curl_setopt($ch, CURLOPT_TIMEOUT, 15);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_FILETIME, true);
        curl_setopt($ch, CURLOPT_HEADER, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, false);
        curl_exec($ch);
        $headers = curl_getinfo($ch);
        $last_modified = $headers['filetime'];
        if ($last_modified != -1) {
            if ($last_modified > time() - 86400*2) {
                $ch2 = curl_init($file_source);
                $wh = fopen($file_target, 'wb') or errorIMG('003');
                curl_setopt($ch2, CURLOPT_FILE, $wh);
                curl_setopt($ch2, CURLOPT_TIMEOUT, 25);
                curl_setopt($ch2, CURLOPT_FOLLOWLOCATION, true);
                curl_setopt($ch2, CURLOPT_HEADER, true);
                curl_setopt($ch2, CURLOPT_RETURNTRANSFER, true);
                curl_exec($ch2);
                curl_close($ch2);
                fclose($wh);
            }
        }
        curl_close($ch);
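
    One likely culprit in the snippet above is that CURLOPT_HEADER writes the response headers into the saved file, and setting CURLOPT_RETURNTRANSFER after CURLOPT_FILE can redirect the output away from the file again. A hedged sketch of the "check Last-Modified first, then download" flow, with the same assumed $file_source/$file_target variables, could look like this:

        <?php
        // Sketch: ask only for the headers first, then download if the image is fresh enough.
        $maxAge = 86400 * 2; // two days

        $ch = curl_init($file_source);
        curl_setopt($ch, CURLOPT_NOBODY, true);          // headers only, no body
        curl_setopt($ch, CURLOPT_FILETIME, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 15);
        curl_exec($ch);
        $last_modified = curl_getinfo($ch, CURLINFO_FILETIME);
        curl_close($ch);

        if ($last_modified != -1 && $last_modified > time() - $maxAge) {
            $wh  = fopen($file_target, 'wb');
            $ch2 = curl_init($file_source);
            curl_setopt($ch2, CURLOPT_FILE, $wh);        // body goes straight into the file
            curl_setopt($ch2, CURLOPT_HEADER, false);    // keep headers out of the file
            curl_setopt($ch2, CURLOPT_FOLLOWLOCATION, true);
            curl_setopt($ch2, CURLOPT_TIMEOUT, 25);
            curl_exec($ch2);
            curl_close($ch2);
            fclose($wh);
        }
        ?>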

    Read the article

  • $this->url() to another subdomain

    - by Supertino7
    Hello, I created a subdomain for my application:

        host_www.type = "Zend_Controller_Router_Route_Hostname"
        host_www.route = "www.mywebsite.com"
        host_www.defaults.module = "produits"
        host_www.defaults.controller = "produits"
        host_www.defaults.action = "index"

        fiche_boutique.route = "ficheboutique/:boutique"
        fiche_boutique.defaults.controller = "boutique"
        fiche_boutique.defaults.action = "fiche-boutique"
        fiche_boutique.defaults.module = "default"
        fiche_boutique.chain = "host_www"

        host_produits.type = "Zend_Controller_Router_Route_Hostname"
        host_produits.route = "produits.mywebsite.com"
        host_produits.defaults.module = "produits"
        host_produits.defaults.controller = "produits"
        host_produits.defaults.action = "index"

        fiche_produit.type = "Zend_Controller_Router_Route_Regex"
        fiche_produit.route = "([-\w]+).htm"
        fiche_produit.reverse = "%s.htm"
        fiche_produit.map.1 = "q"
        fiche_produit.defaults.module = "produits"
        fiche_produit.defaults.controller = "produits"
        fiche_produit.defaults.action = "voir-produit"
        fiche_produit.chain = "host"

    I don't know if the syntax in this Zend config ini file is correct, in particular for route chaining. Once I'm on this subdomain, URLs constructed with $this->url(), like this:

        <a href="<?= $this->url(array('boutique' => 1234), 'fiche_boutique', true) ?>">
            Visit this store
        </a>

    still point to the subdomain produits.mywebsite.com, where I want it to point to www.mywebsite.com. For the moment, I do this:

        <a href="http://www.mywebsite.com<?= $this->url(array('boutique' => 1234), 'fiche_boutique', true) ?>">
            Visit this store
        </a>

    But it's not flexible at all. Is there a solution, a parameter to add, or is my config file wrong? Thanks in advance for your help.

    Read the article

  • How can I automate new system provisioning with scripts under Mac OS X 10.6?

    - by deeviate
    I've been working on this for days but simply cannot find the correct references to make it work. The idea is to have a script that will baseline newly purchased Macs that come into the company with basic stuff like setting autologin to off, creating a new admin user (for remote admins to access for support), setting a password to unlock the screensaver, etc.

    Sample baseline list that admins have to work through on each new machine:

    - Click the Login Options button
    - Set Automatic Login: OFF
    - Check: Show the Restart, Sleep, and Shutdown buttons
    - Uncheck: Show input menu in login window
    - Uncheck: Show password hints
    - Uncheck: Use voice over in the login window
    - Check: Show fast user switching menu as Short Name

    (Note: this is only part of a long list to do on each machine.)

    I've managed to find some references to make some parts work. For example, autologin can be unset with:

        defaults write /Library/Preferences/.GlobalPreferences com.apple.userspref.DisableAutoLogin -bool TRUE

    and I've kind of found ways to muscle in a new user creation (including prompts) with AppleScript and shell commands. But generally it's tough finding ways to do somewhat simple things like turning on the password prompt to get out of the screensaver, or allowing fast user switching. References are either too limited or just nowhere to be seen (e.g. I can unset autologin via the CLI, but the very next setting in System Preferences, "show restart, sleep and shutdown buttons", lives somewhere else and I can't find any command line to set it).

    Does anyone have any ideas on a list, document, reference or anything showing where each setting on the system resides, so that I can be pointed in the right direction? Or maybe sample scripts for the above example... My thanks for reading thus far, and a huge thank you to whoever has any info on the above.

    Read the article

  • How do I retrieve twitter xml for Flash site via php properly

    - by daidai
    I am using TwitterScript to retrieve Twitter data for use inside a Flash site. Due to Twitter's crossdomain policy, I need to set up a PHP proxy... Firstly I made a simple one:

        <?php
        $url = $_GET['url'];
        readfile($url);
        ?>

    but I then get this error:

        URL file-access is disabled in the server configuration

    which is only resolved by getting my host to turn allow_url_fopen on, which I don't want to do. Then I found this:

        <?php
        function get_content($url) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_HEADER, 0);
            ob_start();
            curl_exec($ch);
            curl_close($ch);
            $string = ob_get_contents();
            ob_end_clean();
            return $string;
        }

        #usage:
        $url = $_GET['url'];
        $content = get_content($url);
        var_dump($content);
        ?>

    which solves that problem, but the data, now the correct XML, looks like:

        string(39950) "<?xml version="1.0" encoding="UTF-8"?>
        <statuses type="array">
        <status>
        ...
        </statuses>"

    How do I get the XML data out of that string?
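
    A hedged note: the string(39950) "..." wrapper is just var_dump()'s formatting; if the goal is to hand the raw XML through to Flash, echoing the body with an XML content type should be enough. A minimal sketch, reusing the get_content() helper from the question:

        <?php
        // Sketch: pass the fetched XML straight through instead of dumping it.
        $url     = $_GET['url'];
        $content = get_content($url);   // helper defined in the question

        header('Content-Type: text/xml; charset=utf-8');
        echo $content;                  // raw XML, no var_dump() wrapper
        ?>

    Ideally the proxy would also whitelist which URLs it is willing to fetch, so it cannot be used as an open relay.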

    Read the article

  • Dom Traversal to Automate Keyboard Focus - Spatial Navigation

    - by Steve
    I'm going to start with a little background that will hopefully help my question make more sense. I am developing an application for a television. The concept is simple and basically works by overlaying a browser over the video plane of the TV. Being a TV, there is no mouse or additional pointing device; all interaction is done through a remote control. Therefore, the user needs to be able to visually tell which element they are currently focused upon. To indicate that an element is focused, I currently append a colored transparent image over the element.

    Now, when a user hits the arrow keys, I need to respond by focusing on the correct element according to the key pressed. So, if the down arrow is pressed I need to focus on the next focusable element in the DOM tree (which may be a child or sibling), and if they hit the up arrow, I need to move to the previous element. This would essentially simulate spatial navigation within a browser.

    I am currently setting an attribute (focusable=true) on any DOM elements that should be able to receive focus. What I would like to do is determine the previous or next focusable element (i.e. attribute focusable=true) and apply focus to it. I was hoping to traverse the DOM tree to determine the next and previous focusable elements, but I am not sure how to do this in jQuery, or in general. I was leaning towards the jQuery tree traversal methods like next(), prev(), etc. What approach would you take to solve this type of issue? Thanks

    Read the article

  • Ways to access a 32bit DLL from a 64bit exe

    - by bufferz
    I have a project that must be compiled and run in 64-bit mode. Unfortunately, I am required to call a DLL that is only available in 32-bit mode, so there's no way I can house everything in a single Visual Studio project. I am working to find the best way to wrap the 32-bit DLL in its own exe/service and issue remote (although on the same machine) calls to that exe/service from my 64-bit app. My OS is Win7 Pro 64-bit.

    The required calls to this 32-bit process are several dozen per second, but low data volume. This is a realtime image analysis application, so response time is critical despite the low volume; lots of sending/receiving of single primitives.

    Ideally, I would host a WCF service to house this DLL, but in a 64-bit OS one cannot force the service to run as x86! Source. That is really unfortunate, since I timed function calls to the WCF service at only 4 ms on my machine. I have experimented with named pipes in .NET and found them to be 40-50 times slower than WCF (unusable for me). Any other options or suggestions for the best way to approach my puzzle?

    Read the article

  • Sample twitter App

    - by Jack
    I am now running my code on a web hosting service, http://xtreemhost.com/:

        <?php
        function updateTwitter($status) {
            $username = 'xxxxxx';
            $password = 'xxxx';
            $url = 'http://twitter.com/statuses/update.xml';
            $postargs = 'status=' . urlencode($status);
            $responseInfo = array();

            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_POST, true);
            // Give CURL the arguments in the POST
            curl_setopt($ch, CURLOPT_POSTFIELDS, $postargs);
            // Set the username and password in the CURL call
            curl_setopt($ch, CURLOPT_USERPWD, $username . ':' . $password);
            // Set some cur flags (not too important)

            $response = curl_exec($ch);
            if ($response === false) {
                echo 'Curl error: ' . curl_error($ch);
            } else {
                echo 'Operation completed without any errors<br/>';
            }

            // Get information about the response
            $responseInfo = curl_getinfo($ch);
            // Close the CURL connection
            curl_close($ch);

            // Make sure we received a response from Twitter
            if (intval($responseInfo['http_code']) == 200) {
                // Display the response from Twitter
                echo $response;
            } else {
                // Something went wrong
                echo "Error: " . $responseInfo['http_code'];
            }
            curl_close($ch);
        }

        updateTwitter("Just finished a sweet tutorial on http://brandontreb.com");
        ?>

    I now get the following error:

        Curl error: Couldn't resolve host 'api.twitter.com'
        Error: 0

    Can somebody please help me solve this problem?
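
    For reference, "Couldn't resolve host" on a free or shared host is usually DNS or outbound connections being blocked by the provider rather than a bug in the code. A small, assumed diagnostic script you could drop on the same host to check that:

        <?php
        // Sketch: check whether this host can resolve and reach Twitter at all.
        $host = 'api.twitter.com';

        $ip = gethostbyname($host);
        if ($ip === $host) {
            // gethostbyname() returns the name unchanged when the lookup fails.
            echo "DNS lookup failed for $host - the hosting provider is probably blocking it.\n";
        } else {
            echo "$host resolves to $ip\n";
            // Try a plain TCP connection as well.
            $fp = @fsockopen($host, 80, $errno, $errstr, 5);
            if ($fp) {
                echo "Outbound connection on port 80 works.\n";
                fclose($fp);
            } else {
                echo "Outbound connection blocked: $errstr ($errno)\n";
            }
        }
        ?>

    Separately, Twitter later retired basic-auth access to statuses/update.xml, so even with working DNS this particular call would need OAuth today.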

    Read the article

  • How can I initialize an ActiveX control from a URL?

    - by Peter Ruderman
    I have an MFC ActiveX control embedded in a web page. Some of the parameters for this control are very large. I don't know what these values will be at compile time, but I do know that once retrieved, they will almost certainly never change. Currently, I embed the parameters like so:

        <object name="MyActiveX">
            <param name="param" value="<%= GetData() %>" />
        </object>

    I want to do something like this:

        <object name="MyActiveX">
            <param name="param" value="content/data" valuetype="ref" />
        </object>

    The idea is that the browser would retrieve the resource from the web server and pass it on to the control; the browser's own caching would then take care of the unnecessary downloads. Unfortunately, ref parameters don't work like this. The browser just passes the URL along to the control (which strikes me as utterly useless, but I digress).

    So, is there some way I can make this work? Alternatively, is there an easy way in MFC to instruct the control's host container to retrieve a URI-identified resource? Any better ideas?

    Read the article

  • Send SOAP via curl

    - by danrichardson
    Hi. This has been bugging me for days: I'm trying to send a SOAP post via cURL, but I just keep getting a 'couldn't connect to host' error and I really can't see why. I have an ASP version which works fine with the same URL and data, so I think it's just a PHP/cURL thing...? I currently have the following code (the CURLOPT_POSTFIELDS data is a valid SOAP envelope string):

        $soap_do = curl_init();
        curl_setopt($soap_do, CURLOPT_URL, "https://xxx.yyy.com:517/zzz.asmx");
        curl_setopt($soap_do, CURLOPT_CONNECTTIMEOUT, 10);
        curl_setopt($soap_do, CURLOPT_TIMEOUT, 10);
        curl_setopt($soap_do, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($soap_do, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($soap_do, CURLOPT_SSL_VERIFYHOST, false);
        curl_setopt($soap_do, CURLOPT_POST, true);
        curl_setopt($soap_do, CURLOPT_POSTFIELDS, '<soap:Envelope>...</soap:Envelope>');
        curl_setopt($soap_do, CURLOPT_HTTPHEADER, array(
            'Content-Type: text/xml; charset=utf-8',
            'Content-Length: ' . strlen('<soap:Envelope>...</soap:Envelope>')
        ));

        if (curl_exec($soap_do) === false) {
            $err = 'Curl error: ' . curl_error($soap_do);
            curl_close($soap_do);
            return $err;
        } else {
            curl_close($soap_do);
            return 'Operation completed without any errors';
        }

    So any ideas why it just errors all the time? The ASP version works fine! That code is:

        Set xmlhttp = server.Createobject("MSXML2.ServerXMLHTTP")
        xmlhttp.Open "POST", "https://xxx.yyy.com:517/zzz.asmx"
        xmlhttp.setRequestHeader "Content-Type", "text/xml; charset=utf-8"
        xmlhttp.Send('<soap:Envelope>...</soap:Envelope>')
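
    When the request matches a working ASP version, 'couldn't connect to host' often comes down to the PHP box itself being unable to reach that host and port (firewall rules, blocked outbound port 517). A hedged sketch that captures cURL's verbose log, so you can see exactly which stage (DNS, TCP connect, SSL) fails, might look like this; the URL and envelope placeholder are the ones from the question.

        <?php
        // Sketch: same request, but with cURL's verbose output captured for diagnosis.
        $url  = 'https://xxx.yyy.com:517/zzz.asmx';
        $body = '<soap:Envelope>...</soap:Envelope>';

        $verboseLog = fopen('php://temp', 'w+');

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml; charset=utf-8'));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
        curl_setopt($ch, CURLOPT_VERBOSE, true);
        curl_setopt($ch, CURLOPT_STDERR, $verboseLog);   // send the verbose log here

        $response = curl_exec($ch);
        if ($response === false) {
            rewind($verboseLog);
            echo 'Curl error: ' . curl_error($ch) . "\n";
            echo stream_get_contents($verboseLog);       // shows the DNS/connect/SSL stage reached
        }
        curl_close($ch);
        fclose($verboseLog);
        ?>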

    Read the article

  • How to add clear option to this whiteboard?

    - by swift
    I have to add a clear-screen option to my whiteboard application. The usual procedure is to draw a filled rect the size of the image, but in my app I have transparent panels added one above the other, i.e. as layers; if I follow the usual procedure, the drawing on the underlying panel won't be visible. Please tell me any logic to do this.

        public void createFrame() {
            JFrame frame = new JFrame();
            JLayeredPane layerpane = frame.getLayeredPane();

            board = new Whiteboard(client); // board is a transparent panel
            // transparent image:
            board.image = new BufferedImage(590, 690, BufferedImage.TYPE_INT_ARGB);
            board.setBounds(74, 23, 590, 690);
            board.setImage(image);

            virtualboard.setImage(image); // virtualboard is a transparent panel
            virtualboard.setBounds(74, 23, 590, 690);

            JPanel background = new JPanel();
            background.setBackground(Color.white);
            background.setBounds(74, 25, 590, 685);

            layerpane.add(board, new Integer(5));
            layerpane.add(virtualboard, new Integer(4)); // panel where the remote user draws
            layerpane.add(background, new Integer(3));
            layerpane.add(board.colourButtons(), new Integer(2));
            layerpane.add(board.shapeButtons(), new Integer(1));
            layerpane.add(board.createEmptyPanel(), new Integer(0));
        }

    Read the article

  • Parsing Windows Event Logs, is it possible?

    - by xceph
    Hello, I am doing a little research into the feasibility of a project I have in mind. It involves doing a little forensic work on images of hard drives, and I have been looking for information on how to analyze saved Windows event log files. I do not require the ability to monitor current events; I simply want to be able to view events which have been created, and record the time and application/process which created those events. However, I do not have much experience in the inner workings of Windows system specifics, and am wondering if this is possible. The plan is to create images of a hard drive and then do the analysis on a second machine. Ideally this would be done in either Java or Python, as they are my most proficient languages.

    The main concerns I have are as follows:

    - Is this information encrypted in any way?
    - Are there any existing APIs for parsing this data directly?
    - Is there information available regarding the format in which these logs are stored, and how does it differ between Windows versions?

    This must be possible by analyzing the drive itself, as ideally the installation of Windows on the drive would not be running (it would be a mounted image on another system). The closest thing I could find in my searches is http://www.j-interop.org/ but that seems to be aimed at remote clients. Ideally nothing would have to be installed on the imaged drive. The other solution which kept coming up is the JNI library, but that also seems to be more in the area of monitoring a running system. Any help at all is greatly appreciated. :)

    Read the article

  • [C++] Trouble declaring and recognizing global functions

    - by Sarah
    I've created some mathematical functions that will be used in main() and by member functions in multiple host classes. I was thinking it would be easiest to make these math functions global in scope, but I'm not sure how to do this. I've currently put all the functions in a file called Rdraws.cpp, with the prototypes in Rdraws.h. Even with all the #includes and externs, I'm getting a "symbol not found" error at the first function call in main(). Here's what I have:

        // Rdraws.cpp
        #include <cstdlib>
        using namespace std;
        #include <cmath>
        #include "Rdraws.h"
        #include "rng.h"

        extern RNG rgen; // this is the PRNG used in the simulation; global scope

        void rmultinom( double p_trans[], int numTrials, int numTrans, int numEachTrans[] )
        {
            // function 1 def
        }

        void rmultinom( const double p_trans[], const int numTrials, int numTrans, int numEachTrans[] )
        {
            // function 2 def
        }

        int rbinom( int nTrials, double pLeaving )
        {
            // function 3 def
        }

        // Rdraws.h
        #ifndef RDRAWS
        #define RDRAWS

        void rmultinom( double[], int, int, int[] );
        void rmultinom( const double[], const int, int, int[] );
        int rbinom( int, double );

        #endif

        // main.cpp
        ...
        #include "Rdraws.h"
        ...
        extern void rmultinom(double p_trans[], int numTrials, int numTrans, int numEachTrans[]);
        extern void rmultinom(const double p_trans[], const int numTrials, int numTrans, int numEachTrans[]);
        extern int rbinom( int n, double p );
        ...
        int main()
        {
            ...
        }

    I'm pretty new to programming. If there's a dramatically smarter way to do this, I'd love to know.

    Read the article

  • Nginx Rails app can't deploy

    - by user3596718
    I have an issue with my Rails application running with Passenger and nginx, hosted on Ubuntu 12.04. In the nginx.conf file below, my "example.com" (regular HTML) and "redmine.example.com" (Rails app) are working perfectly, but my "crete.example.com" (another Rails app) is showing "502 Bad Gateway". I have both apps hosted in /var/data with the same permissions and ownership, and I have also tried different ports; I can't think of anything else to try.

        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name example.com;
                root /opt/nginx/html;
            }

            server {
                server_name redmine.example.com;
                root /var/data/redmine/public;
                passenger_enabled on;
                location ~ ^/<SUBURI>(/.*|$) {
                    alias /var/data/redmine/public$1;
                    passenger_base_uri /redmine;
                    passenger_app_root /var/data/redmine;
                    passenger_document_root /var/data/redmine/public;
                    passenger_enabled on;
                }
            }

            server {
                server_name crete.example.com;
                root /var/data/crete/public;
                passenger_enabled on;
                location ~ ^/<SUBURI>(/.*|$) {
                    alias /var/data/crete/public$1;
                    passenger_base_uri /crete;
                    passenger_app_root /var/data/crete;
                    passenger_document_root /var/data/crete/public;
                    passenger_enabled on;
                }
            }
        }

    These are my Ruby and Rails versions:

        ruby 2.0.0p451 (2014-02-24 revision 45167) [x86_64-linux]
        Rails 4.1.0

    My nginx error.log:

        2014/05/02 12:29:50 [error] 3343#0: *4 upstream prematurely closed connection while reading
        response header from upstream, client: xxx.xx.xx.xx, server: crete.example.com,
        request: "GET / HTTP/1.1", upstream: "passenger:/tmp/passenger.1.0.3 323/generation-0/request:",
        host: "crete.example.com"

    If there is any other conf file you might need to solve this, don't hesitate to ask.

    Read the article

  • Can't get Dialog hosting a WebView to layout properly

    - by user246114
    Hi, I'm trying to host a WebView on a Dialog, without a title bar, but I'm getting odd layout results. Example test:

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            requestWindowFeature(Window.FEATURE_NO_TITLE);

            WindowManager wm = (WindowManager) getContext().getSystemService(Context.WINDOW_SERVICE);
            Display display = wm.getDefaultDisplay();

            LinearLayout ll = new LinearLayout(getContext());
            ll.setOrientation(LinearLayout.VERTICAL);
            ll.setLayoutParams(new LayoutParams(
                LinearLayout.LayoutParams.FILL_PARENT,
                LinearLayout.LayoutParams.FILL_PARENT));
            ll.setMinimumWidth(display.getWidth() - 10);
            ll.setMinimumHeight(display.getHeight() - 10);

            WebView wv = new WebView(getContext());
            wv.setLayoutParams(new LayoutParams(
                LinearLayout.LayoutParams.FILL_PARENT,
                LinearLayout.LayoutParams.FILL_PARENT));
            wv.getSettings().setJavaScriptEnabled(true);
            ll.addView(mWebView);

            setContentView(ll, new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.FILL_PARENT,
                LinearLayout.LayoutParams.FILL_PARENT));
        }

    The dialog is inflated at startup, but the WebView is not visible. If I rotate the device, it becomes visible and fills the whole parent layout as requested. What is the right way to do this? I simply want a dialog which occupies most of the device screen and has a WebView on it which fills the entire space. Thanks

    Read the article

  • Good Hosting Providers With Zend Framework Support

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got like 20 domains hosted in my account, none of them are high volume, and they work just well enough - until today. This isn't meant to be a condemnation of ixwh, though; their prices are very low for what they do offer and most things work just fine, most of the time.

    I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money-making websites; they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, and multiple domains - not much. Who should I look at, and who should I look out for?

    Also, I put this on SO instead of SF because of the Zend Framework specific requirement. If I'm wrong, do as you wish.

    Read the article

  • When is a Web Service constructor called? [Java Netbeans 6.7.1 & Tomcat 6.0.18]

    - by Shaitan00
    I am migrating a Java RMI application to a Java Web Service (school assignment) and I've encountered an issue. Currently my Java server creates an instance of the remote object; this object has a constructor that takes a parameter (int ID) which tells it which database to load into memory - works like a charm.

    Now, migrating this to Web Services is causing me a problem: first I needed to add a default constructor because it wouldn't deploy without one, and then while doing some reading all these discussions about "stateless web services" kept coming up. For example, if I "start" my web service with parameter(0), it would load from database 0 and all requests from clients would be served from that data. I want this to happen only when I start the web service, and NOT every time a client connects. Loading from the DB is expensive and takes time, so I want to do it once, so that clients just deal with the data in memory when they connect.

    This is how it works with my Java RMI version - but can this also work with Web Services? Any advice would be much appreciated. Thanks,

    Read the article

  • .NET: Calling GetInterface method of Assembly obj with a generic interface argument

    - by Khnle
    I have the following interface:

        public interface PluginInterface<T> where T : MyData
        {
            List<T> GetTableData();
        }

    In a separate assembly, I have a class that implements this interface. In fact, all classes that implement this interface are in separate assemblies. The reason is to architect my app as a plugin host, where plugins can be added in the future as long as they implement the above interface and the assembly DLLs are copied to the appropriate folder. My app discovers the plugins by first loading the assembly and then performing the following:

        List<PluginInterface<MyData>> Plugins = new List<PluginInterface<MyData>>();
        string FileName = ...; // name of the DLL file that contains classes that implement the interface
        Assembly Asm = Assembly.LoadFile(Filename);
        foreach (Type AsmType in Asm.GetTypes())
        {
            //Type type = AsmType.GetInterface("PluginInterface", true);
            // Type type = AsmType.GetInterface("PluginInterface<T>", true);
            if (type != null)
            {
                PluginInterface<MyData> Plugin = (PluginInterface<MyData>)Activator.CreateInstance(AsmType);
                Plugins.Add(Plugin);
            }
        }

    The trouble is that neither of the commented lines where I get the type (Type type = ...) seems to work; both return null. I have the feeling that the generic somehow contributes to the trouble. Do you know why?

    Read the article

  • PHP: exif_imagetype() not working?

    - by Karem
    I have this extension checker:

        $upload_name = "file";
        $max_file_size_in_bytes = 8388608;
        $extension_whitelist = array("jpg", "gif", "png", "jpeg");

        /* checking extensions */
        $path_info = pathinfo($_FILES[$upload_name]['name']);
        $file_extension = $path_info["extension"];
        $is_valid_extension = false;
        foreach ($extension_whitelist as $extension) {
            if (strcasecmp($file_extension, $extension) == 0) {
                $is_valid_extension = true;
                break;
            }
        }
        if (!$is_valid_extension) {
            echo "{";
            echo "error: 'ext not allowed!'\n";
            echo "}";
            exit(0);
        }

    And then I added this:

        if (exif_imagetype($_FILES[$upload_name]['name']) != IMAGETYPE_GIF OR
            exif_imagetype($_FILES[$upload_name]['name']) != IMAGETYPE_JPEG OR
            exif_imagetype($_FILES[$upload_name]['name']) != IMAGETYPE_PNG) {
            echo "{";
            echo "error: 'This is no photo..'\n";
            echo "}";
            exit(0);
        }

    As soon as I added this to my image-upload function, the function stopped working. I don't get any errors, not even the one I made myself ("This is no photo"). What could be wrong? I just checked with my host: they do support the exif_imagetype() function.
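
    Two things stand out in that check, for what it's worth: exif_imagetype() is pointed at $_FILES[...]['name'] (the client-side filename) rather than ['tmp_name'] (the actual uploaded file on the server), and the OR-chain is always true, since no file can be all three types at once. A hedged rewrite, reusing $upload_name from the snippet above, might look like this:

        <?php
        // Sketch: check the uploaded temp file, and accept any one of the allowed types.
        $allowed_types = array(IMAGETYPE_GIF, IMAGETYPE_JPEG, IMAGETYPE_PNG);

        $type = exif_imagetype($_FILES[$upload_name]['tmp_name']);

        if ($type === false || !in_array($type, $allowed_types, true)) {
            echo "{";
            echo "error: 'This is no photo..'\n";
            echo "}";
            exit(0);
        }
        ?>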

    Read the article

  • IIS7 URL Rewriting: How not to drop HTTPS protocol from rewritten URL?

    - by Scott Mitchell
    I'm working on a website that's using IIS 7's URL rewriting feature to do a permanent redirect from example.com to www.example.com, as well as rewrites from similar domain names to the "main" one, such as from www.examples.com to www.example.com. This rewrite rule - shown below - has worked well for some time now. However, we recently added HTTPS support and noticed that if users visit one of the URLs to be rewritten to www.example.com, then HTTPS is dropped. For instance, if a user visits https://example.com they get redirected to http://www.example.com, whereas we would like them to be sent to https://www.example.com.

    Here is the rewrite rule of interest (in Web.config):

        <rule name="Canonical Host Name" stopProcessing="true">
            <match url="(.*)" />
            <conditions logicalGrouping="MatchAny">
                <add input="{HTTP_HOST}" pattern="^example\.com$" />
                <add input="{HTTP_HOST}" pattern="^(www\.)?example\.net$" />
                <add input="{HTTP_HOST}" pattern="^(www\.)?example\.info$" />
                <add input="{HTTP_HOST}" pattern="^(www\.)?examples\.com$" />
            </conditions>
            <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
        </rule>

    As you can see, the action element's url attribute points directly to http://, so I get why https://example.com is redirected to http://www.example.com. My question is, how do I fix this? I tried (naively) to just drop the http:// part from the url attribute, but that didn't work. Thanks!

    Read the article

  • Perl, Net::Traceroute::PurePerl return value

    - by John R
    This is a subroutine that I copied from CPAN. It works fine as it is when I run it from the command line. I have a similar function based on Net::Traceroute that also works fine AND allows me to return the string with a SOAP call. The problem comes when I try to return the ~string(?) from the function below with a SOAP call.

        sub tr {
            use Net::Traceroute::PurePerl;

            my $t = new Net::Traceroute::PurePerl(
                backend       => 'PurePerl',   # this optional
                host          => 'www.whatever.com',
                debug         => 0,
                max_ttl       => 30,
                query_timeout => 2,
                packetlen     => 40,
                protocol      => 'udp',        # Or icmp
            );

            $t->traceroute;
            $t->pretty_print;
            return $t;
            #print $t;
        }

    The output looks like a string, except the last part looks like this:

        28 * * *
        29 * * *
        30 * * *
        Net::Traceroute::PurePerl=HASH(0x11fa6bf0)

    I don't know what is different about Net::Traceroute::PurePerl that won't allow me to return the value with SOAP, since the Net::Traceroute version does allow me to return it with SOAP.

    Read the article

  • find and replace values in csv using PHP

    - by peirix
    I'd think there was a question on this already, but I can't find one. Maybe the solution is too easy... Anyway, I have a CSV and want to let the user change the values based on a name. I've already sorted out creating new name+value pairs using the fopen('a') mode, using jQuery to send the AJAX call with newValue and newName. But say the content looks like this:

        host|http:www.stackoverflow.com
        folder|/questions/
        folder2|/users/

    And now I want to change the folder value. So I'll send in folder as oldName and /tags/ as newValue. What's the best way to overwrite the value? The order in the list doesn't matter, and the name will always be on the left, followed by a | (pipe), the value, and then a newline.

    My first thought was to read the list, store it in an array, search all the [0]'s for oldName, then change the [1] that belongs to it, and then write it back to the file. But I feel there is a better way around this? Any ideas? Maybe regex?
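
    For reference, a regex probably isn't needed: since the file is small, reading it into an array keyed by name, changing the one entry, and writing the whole thing back is about as simple as it gets. A minimal sketch, with an assumed filename and assumed $oldName/$newValue fields coming from the AJAX call:

        <?php
        // Sketch: rewrite one name|value pair in a small pipe-delimited file.
        $file     = 'settings.csv';          // assumed filename
        $oldName  = $_POST['oldName'];       // e.g. "folder"
        $newValue = $_POST['newValue'];      // e.g. "/tags/"

        $pairs = array();
        foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
            list($name, $value) = explode('|', $line, 2);
            $pairs[$name] = $value;
        }

        if (isset($pairs[$oldName])) {
            $pairs[$oldName] = $newValue;    // overwrite the existing value
        }

        $out = '';
        foreach ($pairs as $name => $value) {
            $out .= $name . '|' . $value . "\n";
        }
        file_put_contents($file, $out, LOCK_EX);
        ?>

    This keeps the existing append-with-fopen('a') path for new pairs separate from updates, and the order of the lines is preserved apart from any duplicates collapsing into one.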

    Read the article

  • Using a CNAME with Shared Windows Azure Website

    - by user1679021
    I've been following the instructions on the Azure site to add a CNAME that points to my Azure website. I have had some problems getting it to work, and there seems to be some contradictory information in some of the posts. I have my website running in "Shared" mode, which according to the Azure instructions supports custom domains, and indeed it seems to allow me to manage domains. But some posts seem to indicate that I have to run in reserved mode. Can anyone confirm this?

    Also, some posts seem to indicate that I need to add the CNAME in the Azure management portal, but I cannot find where this is. Any help appreciated.

    I don't really understand A records and CNAMEs that well. My DNS provider allows me to add both. Do I need to change both? Currently my A record points the "root" to the IP address that Azure gives me, and the CNAME points www.mydomain to the Azure website host mysite.azurewebsites.net. I have left them for a while to propagate and nothing seems to happen.

    Read the article
