Search Results

Search found 29 results on 2 pages for 'florin gogianu'.

Page 1/2

  • FileWatcher fires change event after file deletion

    - by florin
    Hi all, I'm using a FileSystemWatcher to trigger processing of files as soon as they are added to a folder. After a file is processed, it is deleted. My problem is that after the file is deleted I get another file change event, so close to the deletion that in some cases File.Exists still reports the file as present. But of course, some milliseconds later, when the handler tries to process the file, it no longer exists. The watcher is set to notify on NotifyFilters.FileName | NotifyFilters.LastAccess | NotifyFilters.LastWrite | NotifyFilters.Size | NotifyFilters.Attributes. Thanks, florin
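
    One way to sidestep the spurious event is to remember which paths the handler itself deleted and ignore the trailing event for them. A minimal sketch, assuming a hypothetical inbox folder and .NET 4+ for ConcurrentDictionary (on older runtimes a lock plus a HashSet would do the same job); the processing logic is a placeholder:

        using System;
        using System.Collections.Concurrent;
        using System.IO;

        class InboxWatcher
        {
            // Paths this process deleted itself; the trailing Changed event for them is ignored.
            static readonly ConcurrentDictionary<string, byte> recentlyDeleted =
                new ConcurrentDictionary<string, byte>();

            static void Main()
            {
                FileSystemWatcher watcher = new FileSystemWatcher(@"C:\inbox");
                // Dropping LastAccess and Attributes also cuts down delete-time noise.
                watcher.NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite | NotifyFilters.Size;
                watcher.Created += delegate(object s, FileSystemEventArgs e) { Process(e.FullPath); };
                watcher.Changed += delegate(object s, FileSystemEventArgs e)
                {
                    byte ignored;
                    if (recentlyDeleted.TryRemove(e.FullPath, out ignored))
                        return; // spurious event raised by our own delete
                    Process(e.FullPath);
                };
                watcher.EnableRaisingEvents = true;
                Console.ReadLine();
            }

            static void Process(string path)
            {
                // ... handle the file ...
                recentlyDeleted[path] = 0; // mark before deleting
                File.Delete(path);
            }
        }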

    Read the article

  • WordPress 3.5 Multisite and nginx siteurl issues

    - by Florin Gogianu
    I'm setting up multisite on localhost in subdirectories. The problem is that when I try to access the dashboard of a site I just created (localhost/wptest/site/wp-admin) I get "This webpage has a redirect loop", and when I try to access the actual site (localhost/wptest/site) the page loads but without assets such as CSS. When I access the network dashboard, or the primary site dashboard on localhost/wptest, everything is just fine. Also, when I edit the permalink of the second site in the network dashboard to be localhost/site, it also runs fine. How do I make it work with the default permalink structure, localhost/wptest/site? The WordPress files are in /usr/share/html/wptest. The wp-config.php is as follows:

        define('WP_ALLOW_MULTISITE', true);
        define('MULTISITE', true);
        define('SUBDOMAIN_INSTALL', false);
        define('DOMAIN_CURRENT_SITE', 'localhost');
        define('PATH_CURRENT_SITE', '/wptest/');
        define('SITE_ID_CURRENT_SITE', 1);
        define('BLOG_ID_CURRENT_SITE', 1);

    And the server block / virtual host is like this:

        server {
            ##DM - uncomment following line for domain mapping
            listen 80 default_server;
            #server_name example.com *.example.com ;
            ##DM - uncomment following line for domain mapping
            #server_name_in_redirect off;
            access_log /var/log/nginx/example.com.access.log;
            error_log /var/log/nginx/example.com.error.log;
            root /usr/share/nginx/html/wptest;
            index index.html index.htm index.php;
            if (!-e $request_filename) {
                rewrite /wp-admin$ $scheme://$host$uri/ permanent;
                rewrite ^(/[^/]+)?(/wp-.*) $2 last;
                rewrite ^(/[^/]+)?(/.*\.php) $2 last;
            }
            location / {
                try_files $uri $uri/ /index.php?$args;
            }
            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
            location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                access_log off;
                log_not_found off;
                expires max;
            }
            location = /robots.txt { access_log off; log_not_found off; }
            location ~ /\. { deny all; access_log off; log_not_found off; }
        }

    And finally, here's an error log entry:

        2013/06/29 08:05:37 [error] 4056#0: *52 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 127.0.0.1, server: example.com, request: "GET /nginx HTTP/1.1", host: "localhost"
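
    One plausible culprit, offered as a sketch rather than a confirmed fix: the rewrite rules above are written for an install at the web root, so they rewrite /wptest/site/wp-admin/... to /wp-admin/..., dropping the /wptest/ prefix, and the try_files fallback to /index.php likewise escapes the subdirectory. A hedged variant that keeps everything anchored under /wptest/ (untested; the base path is taken from the config above):

        location /wptest/ {
            try_files $uri $uri/ /wptest/index.php?$args;
        }

        # Map the virtual site slug back onto the real files:
        # /wptest/site/wp-admin/... -> /wptest/wp-admin/...
        rewrite ^/wptest/([_0-9a-zA-Z-]+/)?(wp-(admin|content|includes).*) /wptest/$2 last;
        rewrite ^/wptest/([_0-9a-zA-Z-]+/)?(.*\.php)$ /wptest/$2 last;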

    Read the article

  • TransactionScope won't work with DB2 provider

    - by Florin
    Hi everyone, I've been trying to use TransactionScope with a DB2 database (DB2 .NET provider v9.0.0.2 and C# 2.0), which SHOULD be supported according to IBM. I have tried all the advice I could find on the IBM forums (such as here) to no avail. I have enabled XA transactions on my XP SP2 machine, and tried also from a Win 2003 Server machine, but I consistently get the infamous error:

        ERROR [58005] [IBM][DB2/NT] SQL0998N Error occurred during transaction or heuristic processing. Reason Code = "16". Subcode = "2-80004005". SQLSTATE=58005

    The Windows event log says:

        The XA Transaction Manager attempted to load the XA resource manager DLL. The call to LOADLIBRARY for the XA resource manager DLL failed: DLL=C:\APPS\IBM\DB2v95fp2\SQLLIB\BIN\DB2APP.DLL File=d:\comxp_sp2\com\com1x\dtc\dtc\xatm\src\xarmconn.cpp Line=2467.

    I also granted the NETWORK SERVICE user full rights to the folder and DLL. Here's the MSDTC startup message:

        MS DTC started with the following settings: Security Configuration (OFF = 0 and ON = 1): Network Administration of Transactions = 0, Network Clients = 0, Inbound Distributed Transactions using Native MSDTC Protocol = 0, Outbound Distributed Transactions using Native MSDTC Protocol = 0, Transaction Internet Protocol (TIP) = 0, XA Transactions = 1

    Any help would be much appreciated! Thanks, Florin
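
    For reference, the failing pattern presumably looks something like the following, written C# 2.0 style; the connection string and SQL are hypothetical placeholders. Opening the connection inside the scope is what enlists it in the ambient transaction and promotes it to XA via MSDTC, which is where the DLL load failure above bites:

        using System.Transactions;
        using IBM.Data.DB2;

        using (TransactionScope scope = new TransactionScope())
        {
            using (DB2Connection conn = new DB2Connection("Database=SAMPLE;..."))
            {
                conn.Open(); // enlists in the ambient transaction -> XA/MSDTC
                DB2Command cmd = conn.CreateCommand();
                cmd.CommandText = "UPDATE ..."; // placeholder
                cmd.ExecuteNonQuery();
            }
            scope.Complete();
        }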

    Read the article

  • The best choice of Linux file system and software that can be accessed from Windows

    - by Florin
    I currently have an Ubuntu and Windows 7 dual boot, and I want to delete my Windows 7 and format all my partitions to use a Linux file system. But I want to leave a door open in case I have any problems with Linux: to be able to access my Linux file system from Windows. I know that there are programs that can give you read-write access to an ext2/3/4 FS (I have tested none). I need advice in choosing the right FS: what are the differences between ext2/3/4, and what is the best software to do that?

    Read the article

  • The best Linux file system, and software to read/write it from Windows

    - by Florin
    I currently have an Ubuntu and Windows 7 dual boot, and I want to delete my Windows 7 and format all my partitions to use a Linux file system. But I want to leave a door open in case I have any problems with Linux: to be able to access my Linux file system from Windows. I know that there are programs that can give you read-write access to an ext2/3/4 FS (I have tested none). I need advice in choosing the right FS: what are the differences between ext2/3/4, and what is the best software to do that?

    Read the article

  • Experience vs. versatility

    - by Florin Bombeanu
    Let's say a .NET programmer works at a company which provides software on demand, not as a product. The programmer works in WPF for a period of time and invests lots of time in it, getting very good at WPF, Windows Forms, and desktop development in general. But now the company has to provide a web application, so the developer has to learn MVC or Web Forms. He/she is not experienced in web development, so he/she starts investing time in this new technology, and in time gets good at it. But this time the company has to provide a SharePoint solution, and so on. What is more important: being very, very good at a certain technology, or being as versatile as possible, knowing less about each technology but covering a greater area of expertise? Should the programmer keep studying and working in WPF until he/she reaches guru level, or is it a good thing that they had to learn other technologies as well? I agree with those of you who will say that when learning different technologies you also learn things which are useful no matter what technology you're programming in. But eventually, when the programmer wants to change jobs, will it matter more that he/she knows some WPF, MVC or SharePoint than the fact that he/she is insanely good at one of them? I would think the second is more important, since most companies are looking for a developer for a certain technology; I don't think there are many companies looking for technical know-it-alls. What do you think?

    Read the article

  • Broadcom BCM4313 wireless slow and high-latency

    - by Florin Andrei
    Ubuntu 12.10 64-bit on a Dell Latitude E6330 laptop. Wireless is pretty slow. It gets connected quickly enough, but then it acts like a dial-up connection. My ssh sessions over WiFi are slow and laggy. Even browsing is slow; pages load like it's 1998. This does not depend on the access point: it's the same both at home and at work, and other systems work fine on the same access points. I had an older Dell laptop before, with different WiFi hardware, and it was much faster on the same wireless access points. Is this a known issue with this hardware? If so, any solutions?
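
    A commonly suggested remedy for the BCM4313, offered as a hedged sketch (package names are for Ubuntu-family releases of that era, and results vary by kernel), is to switch from the in-kernel brcmsmac driver to Broadcom's proprietary STA driver:

        # install the proprietary Broadcom STA (wl) driver
        sudo apt-get install bcmwl-kernel-source

        # keep the in-kernel driver from grabbing the card
        echo "blacklist brcmsmac" | sudo tee /etc/modprobe.d/blacklist-brcmsmac.conf
        sudo modprobe -r brcmsmac && sudo modprobe wl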

    Read the article

  • How to access photos from my Genius G-Shot P7545 camera?

    - by Florin
    I have a Genius G-Shot P7545 camera. In Windows I had just to plug the camera into the USB port and access it like a USB stick. I tried to do that in Ubuntu 10.10 with no result. How can I access the photos? With these commands I get:

        $ lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 015: ID 0784:1692 Vivitar, Inc.
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

        $ sudo fdisk -l
        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xcf5acf5a

           Device Boot    Start      End      Blocks  Id  System
        /dev/sda1   *         1     2550    20478976  83  Linux
        /dev/sda3          2550    19458   135810048   f  W95 Ext'd (LBA)
        /dev/sda5          2550    19458   135809024  83  Linux

        Unable to read /dev/sdb
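
    Since the camera does enumerate (the Vivitar entry in lsusb) but exposes no block device, it may be speaking PTP rather than USB mass storage. A hedged sketch of how one might try to pull the photos with gphoto2, assuming the camera is PTP-capable:

        sudo apt-get install gphoto2
        gphoto2 --auto-detect     # should list the camera if PTP works
        gphoto2 --get-all-files   # download the photos into the current directory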

    Read the article

  • Canonical links for huge websites

    - by Florin
    Let's say I have 5 products that are identical except for the product code, the color specification, and the product image; the title, meta and description are identical (by the way, the color is in a select form). Based on many factors, I made 4 of the products link via canonical to the one that is the master. If the master becomes inactive or goes out of stock, one product from the other 4 will become the new master and the rest will become canonical to it. The question is: when a product switches from canonical to master, will the site suffer a penalty from Google, or will it work just fine? What will Google think about this strategy?
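
    For concreteness, the setup described would look something like this on each of the four variant pages (the URL and product code are hypothetical); promoting a variant means removing its tag and repointing the other pages' href at the new master:

        <!-- on each non-master variant page -->
        <link rel="canonical" href="http://example.com/product/ABC-123" />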

    Read the article

  • Choosing what logwatch is reporting on, on CentOS 5.4

    - by florin
    I have two CentOS 5.4 servers that I set up within weeks of each other. One is an e-mail server (let's label it EM) and the other is a web and FTP server (labeled WF). Logwatch came pre-configured and I have not altered its setup in any way, yet the log messages are quite different between the two: server EM reports ssh status while WF does not, and with ntpd the situation is reversed. I know I could start tweaking some knobs in /etc/logwatch and such, but why are the results from the default configuration so different?

    Read the article

  • How to start vim without executing /etc/vimrc?

    - by florin
    On my Linux server at work, the admins did not install cscope, so I installed it from source in my home directory and added it to the $PATH. The trouble is, /etc/vimrc has a reference to /usr/bin/cscope, which does not exist, and every time I start vim it complains about that and I have to press Enter for the message to go away. Interestingly, if I remove cscope from my $PATH, I don't get that behavior; so it is possible that vim tests whether cscope exists somewhere and only then executes the cscope configuration, but then it gets the path wrong! So my question is: can I set something up in my .vimrc so it does not source the global /etc/vimrc? I don't want to move cscope out of PATH, as I don't want to type the full directory name every time I run it from the command line.
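
    One approach that may address this directly, as a sketch: vim's -u flag reads only the given vimrc and skips most other initializations, including the system vimrc (see :help -u):

        # start vim with only your own vimrc, skipping /etc/vimrc
        vim -u ~/.vimrc

        # make it the default via an alias in ~/.bashrc
        alias vim='vim -u ~/.vimrc'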

    Read the article

  • Monitoring / metric collection for system collectives that change a lot in time (a.k.a. cloud)

    - by Florin Andrei
    When your server fleet doesn't change much over time, as with bare-metal hosting, classic monitoring and metric-collection solutions (Nagios, Munin) work well. But if the number of systems varies a lot over time, and may in fact vary rapidly, classic software is more difficult to set up and use. E.g., trying to make Nagios (monitoring) keep up with a rapidly evolving cloud infrastructure can be cumbersome, and the same goes for Munin (metric collection). It's not just the configuration: the way the information is conveyed to the user, or displayed, is also inadequate for the cloud. What are some possible alternatives that work well with the cloud? The goals are to collect and display metrics (analogous to Munin), and to generate alerts when certain metrics go out of bounds or when certain services are unavailable (analogous to Nagios), all in a cloud-friendly manner. Some cloud providers offer monitoring / metric collection as a service, but not all of them, and if you use more than one provider you don't want to become too dependent on just one vendor, so provider-independent solutions are required. EDIT: I am asking this question in a general fashion, not limited to any given cloud infrastructure (like OpenStack), but for the general case of using arbitrary cloud providers.

    Read the article

  • Configure Windows Routes for VPN

    - by Florin Sabau
    I have a Virtual PC/VMware machine that runs Windows Server 2003. This virtual machine uses an IPSec VPN client program to connect to a remote network. I configured the virtual machine with 2 NICs: NAT, to be used by the VPN client to access the remote network; and host-only, to be able to access the virtual machine from the host. The reason I have this setup is that I want to be able to access the remote network from the host machine. I could have installed the VPN client on the host machine, but the host runs Windows 7 and the client doesn't support it. The problem: although the virtual machine is normally reachable (ping + HTTP access), as soon as the VPN client is started, neither of the NIC addresses is reachable anymore. I'm wondering if it is a routing problem that needs to be addressed. How does the VPN client connection affect routing and the server's ability to respond to client requests from the host?
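
    If it is indeed the client rewriting the route table (many IPSec clients install a catch-all route and drop the rest), one hedged experiment is to re-add a specific route for the host-only subnet after the tunnel comes up. The addresses below are hypothetical: 192.168.56.0/24 stands in for a typical host-only network, and 192.168.56.10 for the VM's own address on that NIC:

        rem run inside the VM after the VPN connects; -p makes the route persistent
        route -p add 192.168.56.0 mask 255.255.255.0 192.168.56.10 metric 1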

    Read the article

  • ClearType problem in Windows 7

    - by Florin Sabau
    I tried to tune ClearType in Windows 7 x64 using the ClearType Text Tuner. I can choose whatever options I want on the first 3 pages, but on the last page, whatever I choose is reverted as soon as I click Finish. The next time I run the tuner, I can see that the second option is selected, not the option that I wanted (the last one). Has anybody else seen this odd behavior?

    Read the article

  • Is there an apache module to slow down site scans?

    - by florin
    I administer a few web servers. Each night, random hosts from the Internet probe them for various vulnerabilities in php, phpadmin, horde, mysqladmin, etc. Is there a way (an Apache module?) to slow down the rate of attack? For SSH, I have a rate-limiting rule on the firewall which does not allow more than three connections per minute. But I don't want to rate-limit all HTTP access, only the requests that return 404s. Is there such an Apache module?
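
    mod_evasive rate-limits by request volume but does not key on 404s specifically; a closer match to the requirement is fail2ban watching the error log and banning hosts that pile up hits on nonexistent scripts. A hedged sketch using the stock apache-noscript jail (the log path assumes a CentOS-style layout):

        [apache-noscript]
        enabled  = true
        filter   = apache-noscript
        action   = iptables-multiport[name=noscript, port="http,https"]
        logpath  = /var/log/httpd/error_log
        maxretry = 6
        bantime  = 600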

    Read the article

  • Would an array of SSD drives be able to successfully substitute for the system memory?

    - by Florin Mircea
    I watched a few videos trying to answer this. One video (youtube.com/watch?v=eULFf6F5Ri8) shows a bunch of guys stacking 24 SSDs, reaching a peak of around 2 GB/s read/write. That's under the limit of the worst DDR3 in this list (memorybenchmark.net/write_ddr3_amd.html), which shows DDR3 write performance varying from 2.78 to 6.55 GB per second, but that video is over 3 years old. Another video (youtube.com/watch?v=27GmBzQWwP0) shows a more optimistic situation, but for PCI-E SSD drives: 5 drives peaking at around 4 GB/s. And a third video shows that stacking up more than 3 SSDs doesn't realistically offer substantial added performance. This, and the fact that in all benchmarks the drives do quite poorly with small files (5 KB file read/write averaging from 10 MB/s to around 30-40 MB/s), as opposed to how native memory handles such files, seems to indicate a definite NO to this question. Also, the write life cycle is indeed limited and the drives might wear out quickly, as kindly pointed out by paddy. However, I wanted to get more opinions on this. Would it be possible to at least obtain current memory performance with SSDs in RAID 0? And if so, in what circumstances? I am assuming this configuration would be used with a Windows OS whose memory pagefile is resident on that stack of SSDs, thus making it very fast to work with.

    Read the article

  • Precompile asp.net webpart dll

    - by mathias florin
    Hi, I have built a few custom web parts for WSS 3 using the Visual Studio 2010 Web Application template. When I compile the application, Visual Studio creates the assembly file in the bin directory, which I later copy over to the production server (another machine) running WSS 3. The compiled web part DLL is copied into the bin folder inside the virtual directory of WSS. I would like to precompile the web part DLL using aspnet_compiler on my development machine, to reduce the delay when a page is first requested. I used the following command to precompile the entire web application:

        aspnet_compiler -v / -p PATH_TO_WEB_APPLICATION C:\WebApp -errorstack

    The compilation runs fine without any errors, and I end up with a couple of .compiled files and also a Web_App_xxxxx.dll file inside the C:\WebApp\bin folder. From here onwards I am a bit lost how to proceed. Could you advise me: to which folder do I need to copy the compiled files on the production server? Do they need to go into the bin folder on the server, or rather inside the temporary ASP.NET folder (%SystemRoot%\Microsoft.NET\Framework\versionNumber\Temporary ASP.NET Files)? Can I precompile the web application on a development machine without the IIS metabase? Cheers, Mathias

    Read the article

  • Add properties to stdClass object from another object

    - by Florin
    I would like to be able to do the following:

        $obj = new stdClass;
        $obj->status = "success";

        $obj2 = new stdClass;
        $obj2->message = "OK";

    How can I extend $obj so that it contains the properties of $obj2, e.g.:

        $obj->status  // "success"
        $obj->message // "OK"

    I know I could use an array, add all properties to the array and then cast that back to an object, but is there a more elegant way, something like this:

        extend($obj, $obj2); // adds all properties from $obj2 to $obj

    Thanks!
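
    A minimal sketch of such a helper, assuming plain stdClass instances with public properties only:

        function extend($target, $source) {
            foreach (get_object_vars($source) as $name => $value) {
                $target->$name = $value;
            }
            return $target;
        }

        // or, for flat objects, via array casts:
        $merged = (object) array_merge((array) $obj, (array) $obj2);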

    Read the article

  • Calling a method on an uninitialized object (null pointer)

    - by Florin
    What is the normal behavior in Objective-C if you call a method on an object (pointer) that is nil (maybe because someone forgot to initialize it)? Shouldn't it generate some kind of error (segmentation fault, null pointer exception...)? If this is normal behavior, is there a way of changing this behavior (by configuring the compiler) so that the program raises some kind of error / exception at runtime? To make it more clear what I am talking about, here's an example. Having this class:

        @interface Person : NSObject {
            NSString *name;
        }
        @property (nonatomic, retain) NSString *name;
        - (void)sayHi;
        @end

    with this implementation:

        @implementation Person
        @synthesize name;

        - (void)dealloc {
            [name release];
            [super dealloc];
        }

        - (void)sayHi {
            NSLog(@"Hello");
            NSLog(@"My name is %@.", name);
        }
        @end

    Somewhere in the program I do this:

        Person *person = nil;
        //person = [[Person alloc] init]; // let's say I comment this line
        person.name = @"Mike";            // shouldn't I get an error here?
        [person sayHi];                   // and here
        [person release];                 // and here
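
    For what it's worth, messaging nil is defined behavior in Objective-C (the call is a no-op and returns zero/nil), which is why the example above runs silently. A hedged sketch of failing fast instead, using Foundation's assertion macros:

        Person *person = nil;
        // NSAssert works inside Objective-C methods; NSCAssert in plain C functions
        NSCAssert(person != nil, @"person was never initialized");
        [person sayHi]; // without the assert, this silently does nothing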

    Read the article

  • FB.logout not working in IE8

    - by Florin
    I've integrated Facebook login with my application, and I want to log the user out of Facebook when he logs out of my application. So I did the following:

        <a href="<c:url value='/security_logout'/>" onclick="FB.logout();">Logout</a>

    This works in Firefox and Chrome, but not in IE8: the user is logged out of the application but is not logged out of Facebook. Anyone else experiencing this?
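
    One hedged guess at the cause: FB.logout is asynchronous, and IE8 may cancel its round trip when the link navigates away immediately. A sketch that defers navigation until the callback fires (the URL mirrors the one above):

        <a href="#" onclick="logoutThenLeave(); return false;">Logout</a>
        <script>
        function logoutThenLeave() {
            FB.logout(function (response) {
                // navigate only after Facebook confirms the logout
                window.location = '/security_logout';
            });
        }
        </script>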

    Read the article

  • Background PNG image with jQuery - hover IE6 problems

    - by florin
    Hi to all, and please help. I have this menu: http://health-fitness-news.info/menu/. The links in the list have PNG background images. All browsers work fine except IE6. I found a script which resolves this problem in IE6, but it doesn't work on mouse HOVER: when the mouse is over a link, the background image loses its transparency. What should I do to fix that?
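
    A common IE6 workaround, offered as a hedged sketch (the selector and image path are placeholders for the menu above): drop the normal background and re-declare the AlphaImageLoader filter for the hover state as well, since IE6 discards the filter when the :hover rule swaps backgrounds:

        /* IE6-only stylesheet or conditional comment */
        ul.menu a, ul.menu a:hover {
            background: none;
            filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='images/menu-bg.png', sizingMethod='crop');
        }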

    Read the article

  • php extract data from mysql

    - by florin
    I have this MySQL table with the following rows:

        id_cont    suma_lun    month    year
        ------------------------------------
        FL28       2133        March    2012
        FL28       2144        April    2012
        FL28       2155        May      2012
        FL28       2166        June     2012

    How can I extract suma_lun, month and year for each id_cont, so that I get an output like this:

        ID:     Month:    Monthly Sum:    Year:
        ---------------------------------------
        FL28    March     2133            2012
                April     2144            2012
                May       2155            2012
                June      2166            2012

    This is my current code:

        $link = mysql_connect(DB_HOST, DB_USER, DB_PASSWORD);
        if (!$link) die('Could not connect to database: ' . mysql_error());
        mysql_select_db(DB_DATABASE, $link);

        $sql = "SELECT * FROM test WHERE id_cont = '$cur'";
        $result = mysql_query($sql);
        while ($row = mysql_fetch_array($result)) {
            $a = $row["id_cont"];
            $b = $row["suma_lun"];
            $c = $row["month"];
            $d = $row["year"];
        }

    I echo the data in a table. Thanks!
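
    A sketch of a loop that produces the grouped layout above, keeping the question's mysql_* API (note that the original loop only overwrites $a..$d on each iteration without printing anything): print id_cont only when it changes from the previous row.

        $last = null;
        while ($row = mysql_fetch_assoc($result)) {
            $id = ($row['id_cont'] !== $last) ? $row['id_cont'] : '';
            printf("%-8s %-8s %-12s %s\n",
                   $id, $row['month'], $row['suma_lun'], $row['year']);
            $last = $row['id_cont'];
        }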

    Read the article

  • Bypass OpenID. Please give us a simple login form.

    - by Florin
    Can I kindly ask that we be allowed to log in without the OpenID nonsense? This system is so popular that Stack Overflow is the only place where I use it. If it is the policy of Stack Overflow to prevent people from logging in, they've succeeded. I am a passive reader. For some reason, I really don't like the idea of having one ID for all sites. To me, this system is dead in the water; unless used within organizations, I will never use it. Of course, until the government decides to rein us all in. Will you give them a hand? Until then, can we simply have a login form as in 1995? Thank you for your consideration.

    Read the article
