Search Results

Search found 14544 results on 582 pages for 'ssh config'.

  • Timer in a Windows service - not really working?

    - by marc_s
    I have a Windows NT Service in C# which basically wakes up every x seconds, checks to see if any mail notifications need to be sent out, and then goes back to sleep. It looks something like this (the Timer class is from the System.Threading namespace): public partial class MyService : ServiceBase { private Timer _timer; private int _timeIntervalBetweenRuns = 10000; public MyService() { InitializeComponent(); } protected override void OnStart(string[] args) { // when NT Service starts - create timer to wake up every 10 seconds _timer = new Timer(OnTimer, null, _timeIntervalBetweenRuns, Timeout.Infinite); } protected override void OnStop() { // on stop - stop timer by freeing it _timer = null; } private void OnTimer(object state) { // when the timer fires, e.g. when 10 seconds are over // stop the timer from firing again by freeing it _timer = null; // check for mail and send out notifications, if required - works just fine MailHandler handler = new MailHandler(); handler.CheckAndSendMail(); // once done, re-enable the timer by creating it from scratch _timer = new Timer(OnTimer, null, _timeIntervalBetweenRuns, _timeIntervalBetweenRuns); } } Sending the mail and all works just fine, and the service also wakes up every 10 seconds (in reality, this is a setting from a config file - simplified for this example). However, at times, the service seems to wake up too quickly.... 2010-04-09 22:50:16.390 2010-04-09 22:50:26.460 2010-04-09 22:50:36.483 2010-04-09 22:50:46.500 2010-04-09 22:50:46.537 ** why again after just 37 milliseconds...... ?? 2010-04-09 22:50:56.507 It works fine up to 22:50:46.500 - why does it log another entry just 37 milliseconds later?? Here, it seems it's totally out of whack.... it seems to wake up twice or even three times every time 10 seconds are over.... 2010-04-09 22:51:16.527 2010-04-09 22:51:26.537 2010-04-09 22:51:26.537 2010-04-09 22:51:36.543 2010-04-09 22:51:36.543 2010-04-09 22:51:46.553 2010-04-09 22:51:46.553 2010-04-09 22:51:56.577 2010-04-09 22:51:56.577 2010-04-09 22:52:06.590 2010-04-09 22:52:06.590 2010-04-09 22:52:06.600 2010-04-09 22:52:06.600 Any ideas why?? It's not a huge problem, but I'm concerned it might start to put too much load on the server if the interval I configure (10 seconds, 30 seconds - whatever) is ignored more and more the longer the service runs. Have I missed something very fundamental in my service code?? Am I ending up with multiple timers, or something?? I can't seem to really figure it out..... Have I picked the wrong timer (System.Threading.Timer)? There are at least three Timer classes in .NET - why?? :-)
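
    The failure mode the log suggests can be reproduced outside .NET. Below is a minimal Python sketch (a hypothetical stand-in class, not the System.Threading.Timer API): dropping the reference never cancels the underlying timer, and rescheduling with a finite repeat period multiplies the live timers.

        import threading, time

        fires = []

        class RepeatingTimer:
            """Illustrative stand-in for a timer created with a finite repeat period."""
            def __init__(self, period, callback):
                self.period = period
                self.callback = callback
                self.cancelled = False
                self._schedule()

            def _schedule(self):
                t = threading.Timer(self.period, self._fire)
                t.daemon = True
                t.start()

            def _fire(self):
                if self.cancelled:
                    return
                self.callback()
                self._schedule()  # a finite period means "fire again"

            def cancel(self):
                self.cancelled = True

        timer = None

        def on_timer():
            global timer
            fires.append(time.monotonic())
            timer = None                           # does NOT stop the old timer
            timer = RepeatingTimer(0.5, on_timer)  # ...and adds another repeating one

        timer = RepeatingTimer(0.5, on_timer)
        time.sleep(3)
        print(len(fires), "fires in 3s")  # grows far beyond ~6 as timers multiply

    Cancelling the old timer before rescheduling (Dispose() in .NET), or only ever scheduling one-shot timers (Timeout.Infinite as the period), keeps exactly one timer alive at a time.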

  • How to retain canvas state and use it in onDraw() method

    - by marqss
    I want to make a measuring tape component for my app. It should look something like this, with values from 0cm to 1000cm: Initially I created a long bitmap image with the repeated tape background. I drew that image to the canvas in the onDraw() method of my TapeView (an extended ImageView). Then I drew a set of numbers with drawText() on top of the canvas. public TapeView(Context context, AttributeSet attrs){ ImageView imageView = new ImageView(mContext); LayoutParams params = new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.FILL_PARENT); imageView.setLayoutParams(params); mBitmap = createTapeBitmap(); imageView.setImageBitmap(mBitmap); this.addView(imageView); } private Bitmap createTapeBitmap(){ Bitmap mBitmap = Bitmap.createBitmap(5000, 100, Config.ARGB_8888); //size of the tape Bitmap tape = BitmapFactory.decodeResource(getResources(),R.drawable.tape);//the image size is 100x100px Bitmap scaledTape = Bitmap.createScaledBitmap(tape, 100, 100, false); Canvas c = new Canvas(mBitmap); Paint paint = new Paint(); paint.setColor(Color.WHITE); paint.setFakeBoldText(true); paint.setAntiAlias(true); paint.setTextSize(30); for(int i=0; i<=500; i++){ //draw background image c.drawBitmap(scaledTape,(i * 200), 0, null); //draw number in the middle of that background String text = String.valueOf(i); int textWidth = (int) paint.measureText(text); int position = (i * 100) + 100 - (textWidth / 2); c.drawText(text, position, 20, paint); } return mBitmap; } Finally I added this view to a HorizontalScrollView. At the beginning everything worked beautifully, but I realised that the app uses a lot of memory and sometimes crashed with an OutOfMemory exception. That was no surprise, because the size of the bitmap image was ~4 MB! In order to increase the performance, instead of creating the bitmap I use a Drawable (with the yellow tape strip) and set the tile mode to REPEAT: setTileModeX(TileMode.REPEAT); The view now is very light, but I cannot figure out how to add the numbers. There are too many of them to redraw each time the onDraw() method is called. Is there any way that I can draw these numbers on a canvas and then save that canvas so it can be reused in the onDraw() method?
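
    A common pattern here is to stop caching the labels at all and, on each pass, draw only the handful that intersect the visible window. A Python sketch of just the arithmetic (illustrative names; in the real view this would live in onDraw() and call canvas.drawText()):

        TILE_PX = 100  # width of one repeated tape tile, matching the 100px drawable

        def visible_labels(scroll_x: int, viewport_w: int):
            """Yield (centimetre value, x position) for labels inside the viewport."""
            first = max(0, scroll_x // TILE_PX)
            last = min(1000, (scroll_x + viewport_w) // TILE_PX + 1)
            for i in range(first, last + 1):
                yield i, i * TILE_PX + TILE_PX // 2  # label centred in its tile

        # Only a handful of labels are ever visible, so redrawing each frame is cheap.
        print(list(visible_labels(scroll_x=950, viewport_w=480)))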

  • Why does Module::Build's testcover give me "use of uninitialized value" warnings?

    - by Kurt W. Leucht
    I'm kinda new to Module::Build, so maybe I did something wrong. Am I the only one who gets warnings when I change my dispatch from "test" to "testcover"? Is there a bug in Devel::Cover? Is there a bug in Module::Build? I probably just did something wrong. I'm using ActiveState Perl v5.10.0 with Module::Build version 0.31012 and Devel::Cover 0.64 and Eclipse 3.4.1 with EPIC 0.6.34 for my IDE. UPDATE: I upgraded to Module::Build 0.34 and the warnings are still output. *UPDATE: Looks like a bug in B::Deparse. Hope it gets fixed someday.* Here's my unit test build file: use strict; use warnings; use Module::Build; my $build = Module::Build->resume ( properties => { config_dir => '_build', }, ); $build->dispatch('test'); When I run this unit test build file, I get the following output: t\MyLib1.......ok t\MyLib2.......ok t\MyLib3.......ok All tests successful. Files=3, Tests=24, 0 wallclock secs ( 0.00 cusr + 0.00 csys = 0.00 CPU) But when I change the dispatch line to 'testcover' I get the following output which always includes a bunch of "use of uninitialized value in bitwise and" warning messages: Deleting database D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db t\MyLib1.......ok Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. t\MyLib2.......ok Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. t\MyLib3.......ok Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. All tests successful. Files=3, Tests=24, 0 wallclock secs ( 0.00 cusr + 0.00 csys = 0.00 CPU) Reading database from D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db ---------------------------- ------ ------ ------ ------ ------ ------ ------ File stmt bran cond sub pod time total ---------------------------- ------ ------ ------ ------ ------ ------ ------ .../lib/ActivePerl/Config.pm 0.0 0.0 0.0 0.0 0.0 n/a 0.0 ...l/lib/ActiveState/Path.pm 0.0 0.0 0.0 0.0 100.0 n/a 4.8 <SNIP> blib/lib/<SNIP>/MyLib2.pm 100.0 90.0 n/a 100.0 100.0 0.0 98.5 blib/lib/<SNIP>/MyLib3.pm 100.0 90.9 100.0 100.0 100.0 0.6 98.0 Total 14.4 6.7 3.8 18.3 20.0 100.0 11.6 ---------------------------- ------ ------ ------ ------ ------ ------ ------ Writing HTML output to D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db/coverage.html ... done.

  • ASA 5505 Vlan question

    - by Wayne
    I am setting up a Cisco ASA 5505 with the base license. I can communicate from inside-outside, outside-inside, and inside-home, which is my desired traffic security. I can get http, ssh, and other access from inside-home, but I can't ping from inside-home (a 192.168.110.0 host to 192.168.7.1 or a 192.168.7.0 host). Can someone explain? My config is listed below:
        interface Vlan1
        nameif inside
        security-level 100
        ip address 192.168.110.254 255.255.255.0
        !
        interface Vlan2
        nameif outside
        security-level 0
        pppoe client vpdn group birdie
        ip address removedIP 255.255.255.255 pppoe
        !
        interface Vlan3
        no forward interface Vlan1
        nameif home
        security-level 50
        ip address 192.168.7.1 255.255.255.0
        !
        interface Ethernet0/0
        switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
        !
        interface Ethernet0/3
        !
        interface Ethernet0/4
        switchport access vlan 3
        !
        interface Ethernet0/5
        shutdown
        !
        interface Ethernet0/6
        shutdown
        !
        interface Ethernet0/7
        shutdown
        !
        ftp mode passive
        clock timezone EST -5
        clock summer-time EDT recurring
        access-list Outside-In extended permit icmp any any
        access-list Outside-In extended permit tcp any any eq www
        access-list Outside-In extended permit tcp any any eq https
        access-list Outside-In extended permit tcp any any eq 5969
        access-list inside_nat0_outbound extended permit ip any 192.168.111.0 255.255.255.224
        access-list standardUser_splitTunnelAcl1 extended permit ip 192.168.111.0 255.255.255.0 any
        access-list standardUser_splitTunnelAcl1 extended permit ip 192.168.110.0 255.255.255.0 any
        access-list inside_in extended permit icmp any any
        access-list inside_in extended permit ip any any
        access-list home_in extended permit icmp any any
        access-list home_in extended permit ip any any
        pager lines 24
        logging enable
        logging asdm informational
        mtu inside 1492
        mtu outside 1492
        mtu home 1500
        ip local pool vpnuser 192.168.111.5-192.168.111.20
        icmp unreachable rate-limit 1 burst-size 1
        asdm image disk0:/asdm-524.bin
        no asdm history enable
        arp timeout 14400
        nat-control
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 0.0.0.0 0.0.0.0
        nat (home) 1 192.168.7.0 255.255.255.0
        static (inside,outside) tcp interface https 192.168.110.6 https netmask 255.255.255.255
        static (inside,outside) tcp interface www 192.168.110.6 www netmask 255.255.255.255
        static (inside,outside) tcp interface 5969 192.168.110.12 5969 netmask 255.255.255.255
        static (inside,home) 192.168.110.0 192.168.110.0 netmask 255.255.255.0
        access-group inside_in in interface inside
        access-group Outside-In in interface outside
        access-group home_in in interface home
        route outside 0.0.0.0 0.0.0.0 RemovedIP 1

  • Entity Framework 4 CTP 5 POCO - Many-to-many configuration, insertion, and update?

    - by Saxman
    I really need someone to help me fully understand how to do a many-to-many relationship with Entity Framework 4 CTP 5, POCO. I need to understand 3 concepts: How to configure my model to indicate that some tables are many-to-many. How to properly do an insert. How to properly do an update. Here are my current models: public class MusicSheet { [Key] public int ID { get; set; } public string Title { get; set; } public string Key { get; set; } public virtual ICollection<Author> Authors { get; set; } public virtual ICollection<Tag> Tags { get; set; } } public class Author { [Key] public int ID { get; set; } public string Name { get; set; } public string Bio { get; set; } public virtual ICollection<MusicSheet> MusicSheets { get; set; } } public class Tag { [Key] public int ID { get; set; } public string TagName { get; set; } public virtual ICollection<MusicSheet> MusicSheets { get; set; } } As you can see, the MusicSheet can have many Authors or Tags, and an Author or Tag can have multiple MusicSheets. Again, my questions are: What to do on the EntityTypeConfiguration to set the relationship between them, as well as map to a table/object that represents the many-to-many relationship. How to insert a new music sheet (where it might have multiple authors or multiple tags). How to update a music sheet. For example, I might set TagA, TagB on MusicSheet1, but later I need to change the tags to TagA and TagC. It seems like I need to first check to see if the tags already exist; if not, insert the new tag and then associate it with the music sheet (so that I don't re-insert TagA?). Or is this something already handled by the framework? Thank you very much. I really hope to fully understand it rather than just doing it without fully understanding what's going on. Especially on #3.
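
    Independent of the ORM, the usual shape of the insert/update answer is a "get or create" lookup per tag before attaching it to the sheet; the ORM then maintains the join-table rows itself. A Python/SQLAlchemy analogy (illustrative only, not EF 4 code):

        from sqlalchemy import Column, ForeignKey, Integer, String, Table, create_engine
        from sqlalchemy.orm import Session, declarative_base, relationship

        Base = declarative_base()

        # the join table behind the many-to-many relationship
        sheet_tags = Table(
            "sheet_tags", Base.metadata,
            Column("sheet_id", ForeignKey("sheets.id"), primary_key=True),
            Column("tag_id", ForeignKey("tags.id"), primary_key=True),
        )

        class Tag(Base):
            __tablename__ = "tags"
            id = Column(Integer, primary_key=True)
            name = Column(String, unique=True)

        class MusicSheet(Base):
            __tablename__ = "sheets"
            id = Column(Integer, primary_key=True)
            title = Column(String)
            tags = relationship(Tag, secondary=sheet_tags)

        def get_or_create_tag(session, name):
            """Reuse an existing tag row, or create it on first use."""
            tag = session.query(Tag).filter_by(name=name).one_or_none()
            return tag if tag is not None else Tag(name=name)

        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)
        with Session(engine) as session:
            sheet = MusicSheet(title="Sheet 1")
            sheet.tags = [get_or_create_tag(session, n) for n in ("TagA", "TagB")]
            session.add(sheet)
            session.commit()
            # retagging rewrites join rows only; TagA is reused, not re-inserted
            sheet.tags = [get_or_create_tag(session, n) for n in ("TagA", "TagC")]
            session.commit()

    Reassigning the collection is what expresses "retag this sheet": the ORM diffs the old and new sets and rewrites only the association rows, never the Tag rows themselves.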

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem is with respect to the writing speed of the computers (10 32-bit machines) and the PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multi-processing. I have rented about 10 computers for a day and am trying to write with them in order to speed up my data handling. As far as the PostgreSQL table is concerned, the overall record size is 140 million, and I have 5 primary/foreign key referring tables. I am not using joins, as they are not scalable. So for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple: select * from x.train where tr_id=1 (primary key & indexed) select q_t from x.qt where q_id=2 (non-primary key but indexed) (similarly, five queries). Each computer writes two HDF5 files, and hence the total count comes to around 20 files. Some calculations and statistics: Total number of records : 14,37,00,000 Total number of records per file : 143700000/20 = 71,85,000 The total number of records in each file : 71,85,000 * 5 = 3,59,25,000 Current PostgreSQL database config : My current machine : 8 GB RAM with an i7 2nd-generation processor. I made changes to the following in the PostgreSQL configuration file : shared_buffers : 2 GB effective_cache_size : 4 GB Note on current performance: I have run it for about ten hours and the performance is as follows: The total number of records written for each file is about 6,21,000 * 5 = 31,05,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and if it processes at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this. Questions: 1. Should I use symmetric multi-processing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable? 2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process? 3. Should I use multithreading? In that case, any links or pointers would be of great help. Thanks, Sree Aurovindh V
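
    On the HDF5 side, one lever that often matters more than extra machines is batching: fetch rows from PostgreSQL with fetchmany() in large chunks and append them to PyTables in blocks, with expectedrows set so the chunk shape is sensible. A rough sketch (the table description and sizes are made up for illustration):

        import numpy as np
        import tables

        class Record(tables.IsDescription):
            tr_id = tables.Int64Col()
            q_t = tables.Float64Col()

        h5 = tables.open_file("train.h5", mode="w")
        table = h5.create_table("/", "train", Record,
                                expectedrows=35_925_000)  # guides chunk sizing

        def write_batch(rows):
            """rows: list of (tr_id, q_t) tuples, e.g. from cursor.fetchmany(100000)."""
            arr = np.array(rows, dtype=[("tr_id", "i8"), ("q_t", "f8")])
            table.append(arr)     # one bulk append instead of thousands of row writes

        write_batch([(1, 0.5), (2, 0.7)])   # stand-in for the real fetchmany() loop
        table.flush()
        h5.close()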

  • Asp.net login problem.

    - by Catarrunas
    Hello, I'm building an ASP.NET web site with the 2.0 framework. I've been "fighting" with web.config; I've changed it quite a few times. So, to start from scratch, this is what I have: <?xml version="1.0" encoding="utf-16"?> <configuration> <connectionStrings> <remove name="LocalSqlServer"/> <add name="ABC" connectionString="Database=jsilvaqqc.mdf; Data Source=213.175.208.3;Initial Catalog=jsilvaqqc;User ID=jsilva;Password=joao123#;" providerName="System.Data.SqlClient"/> <add name="LocalSqlServer" connectionString="Database=jsilvaqqc.mdf; Data Source=213.175.208.3;Initial Catalog=jsilvaqqc;User ID=jsilva;Password=joao123#;" providerName="System.Data.SqlClient"/> </connectionStrings> <location path="Members"> <system.web> <authorization> <allow users="*"/> <deny users="?"/> </authorization> </system.web> </location> <system.web> <compilation debug="true"/> </system.web></configuration> It works fine on my machine. I've created the users for the login and the role to access the "Members" folder. But at my hosting company it doesn't work. I have the aspnet database from my computer in that database, "jsilvaqqc.mdf". When I try to log on, a box pops up requiring authentication. But I've already given that in the login form. Do I need the ASP.NET "authentication" tag? Why don't I need it on my machine if I access the same database? Thanks for your help.

  • Anybody Know of any Tools to help Analysing .NET Trace Log Files?

    - by peter
    I am developing a C# .NET application. In the app.config file I add trace logging as shown: <?xml version="1.0" encoding="UTF-8" ?> <configuration> <system.diagnostics> <trace autoflush="true" /> <sources> <source name="System.Net.Sockets" maxdatasize="1024"> <listeners> <add name="MyTraceFile"/> </listeners> </source> </sources> <sharedListeners> <add name="MyTraceFile" type="System.Diagnostics.TextWriterTraceListener" initializeData="System.Net.trace.log" /> </sharedListeners> <switches> <add name="System.Net" value="Verbose" /> </switches> </system.diagnostics> </configuration> Are there any good tools around to analyse the log file that is output? The output looks like this: System.Net.Sockets Verbose: 0 : [5900] Data from Socket#8764489::Send DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 00000000 : 4D 49 4D 45 2D 56 65 72-73 69 6F 6E 3A 20 31 2E : MIME-Version: 1. DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 00000060 : 65 3A 20 37 20 41 70 72-20 32 30 31 30 20 31 35 : e: 7 Apr 2010 15 DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 00000070 : 3A 32 32 3A 34 30 20 2B-31 32 30 30 0D 0A 53 75 : :22:40 +1200..Su DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 00000080 : 62 6A 65 63 74 3A 20 5B-45 72 72 6F 72 5D 20 45 : bject: [Error] E DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 00000090 : 78 63 65 70 74 69 6F 6E-20 69 6E 20 53 79 6E 63 : xception in Sync DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 000000A0 : 53 65 72 76 69 63 65 20-28 32 30 30 38 2E 30 2E : Service (2008.0. DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 000000B0 : 33 30 34 2E 31 32 33 34-32 29 0D 0A 43 6F 6E 74 : 304.12342)..Cont DateTime=2010-04-07T03:22:40.1067012Z If there is anything that can take the output shown above (my output is a text file 100 MB in size), group together packets, and help with finding particular issues, I would like to hear about it. Thanks.
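
    Short of a ready-made viewer, the log's regular shape invites post-processing. A hedged Python sketch that groups the dump lines by the bracketed thread id and reassembles the readable right-hand column (the regex is written against the sample above and may need adjusting for real files):

        import re
        from collections import defaultdict

        LINE = re.compile(
            r"^System\.Net\.Sockets Verbose: 0 : \[(\d+)\] "
            r"[0-9A-F]{8} : [0-9A-F -]+ : (.*)$"
        )

        def reassemble(path):
            """Group hex-dump lines by thread id; join the ASCII column per thread."""
            streams = defaultdict(list)
            with open(path, encoding="utf-8", errors="replace") as f:
                for line in f:
                    m = LINE.match(line)
                    if m:
                        thread, text = m.groups()
                        streams[thread].append(text)
            return {t: "".join(parts) for t, parts in streams.items()}

        for thread, payload in reassemble("System.Net.trace.log").items():
            print(thread, payload[:80])   # e.g. 5900 MIME-Version: 1.e: 7 Apr ...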

  • Application error with MyFaces 1.2: java.lang.IllegalStateException: No Factories configured for this Application.

    - by IgorB
    For my app I'm using Tomcat 6.0.x and the Mojarra 1.2_04 JSF implementation. It works fine; I would just like to switch now to the MyFaces 1.2_10 implementation of JSF. During the deployment of my app I get the following error: ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/myApp]] StandardWrapper.Throwable java.lang.IllegalStateException: No Factories configured for this Application. This happens if the faces-initialization does not work at all - make sure that you properly include all configuration settings necessary for a basic faces application and that all the necessary libs are included. Also check the logging output of your web application and your container for any exceptions! If you did that and find nothing, the mistake might be due to the fact that you use some special web-containers which do not support registering context-listeners via TLD files and a context listener is not setup in your web.xml. A typical config looks like this; <listener> <listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class> </listener> at javax.faces.FactoryFinder.getFactory(FactoryFinder.java:106) at javax.faces.webapp.FacesServlet.init(FacesServlet.java:137) at org.apache.myfaces.webapp.MyFacesServlet.init(MyFacesServlet.java:113) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1172) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:992) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4058) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4371) ... Here is part of my web.xml configuration: <servlet> <servlet-name>Faces Servlet</servlet-name> <!-- <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> --> <servlet-class>org.apache.myfaces.webapp.MyFacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> ... <listener> <listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class> </listener> Has anyone experienced a similar error, and what should I do in order to fix it? Thanks!

  • Returning mySQL error with jQuery & AJAX

    - by kel
    I've got a form inserting data into MySQL. It works, but I'm trying to add error handling in case something happens. If I break the INSERT statements, MySQL dies, but I'm still getting a success message on the front end. What am I doing wrong? AJAX function postData(){ var employeeName = jQuery('#employeeName').val(); var hireDate = jQuery('#hireDate').val(); var position = jQuery('#position').val(); var location = jQuery('#location').val(); var interveiwer = jQuery('#interviewersID').val(); var q01 = jQuery('#q01').val(); var q02 = jQuery('#q02').val(); var q03 = jQuery('#q03').val(); var q04 = jQuery('#q04').val(); var q05 = jQuery('#q05').val(); var summary = jQuery('#summary').val(); jQuery.ajax({ type: 'POST', url: 'queryDay.php', data: 'employeeName='+ employeeName +'&hireDate='+ hireDate +'&position='+ position +'&location='+ location +'&interveiwer='+ interveiwer +'&q01='+ q01 +'&q02='+ q02 +'&q03='+ q03 +'&q04='+ q04 +'&q05='+ q05 +'&summary='+ summary, success: function(){ jQuery('#formSubmitted').show(); }, error: function(jqXHR, textStatus, errorThrown){ jQuery('#returnError').html(errorThrown); jQuery('#formError').show(); } }); }; PHP require_once 'config.php'; $employeeName = $_POST['employeeName']; $hireDate = $_POST['hireDate']; $position = $_POST['position']; $location = $_POST['location']; $interviewerID = $_POST['interveiwer']; $q01 = $_POST['q01']; $q02 = $_POST['q02']; $q03 = $_POST['q03']; $q04 = $_POST['q04']; $q05 = $_POST['q05']; $summary = $_POST['summary']; mysql_query("INSERT INTO employee (name, hiredate, position, location) VALUES ('$employeeName', '$hireDate', '$position', '$location')") or die (mysql_error()); $employeeID = mysql_insert_id(); mysql_query("INSERT INTO day (employee, interviewer, datetaken, q01, q02, q03, q04, q05, summary) VALUES ('$employeeID', '$interviewerID', NOW(), '$q01', '$q02', '$q03', '$q04', '$q05', '$summary')") or die (mysql_error());
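
    jQuery's error callback only fires on a non-2xx status, and die(mysql_error()) prints the error while still returning 200, so the success handler runs. The server-side fix, sketched here in Python/WSGI rather than PHP (the PHP equivalent is to send header('HTTP/1.1 500 Internal Server Error') before echoing the message), is to set a failure status when the insert fails:

        from wsgiref.simple_server import make_server

        def insert_employee(environ):
            """Hypothetical DB insert; raises on failure, like the broken INSERT."""
            raise RuntimeError("Duplicate entry 'x' for key 'PRIMARY'")

        def app(environ, start_response):
            try:
                insert_employee(environ)
            except Exception as exc:
                # A non-2xx status is what makes jQuery call the error: handler.
                start_response("500 Internal Server Error",
                               [("Content-Type", "text/plain")])
                return [str(exc).encode()]
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"saved"]

        make_server("", 8000, app).serve_forever()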

  • Moving a Drupal site between Linux servers: best practice to avoid file-ownership problems

    - by zero
    I want to port over a Drupal Commons 6x24 site from a local LAMP stack to a production web server. Both systems run OpenSuse Linux. How do I do this, and what are the most important steps? How should I handle file ownership? It's important for me to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems, due to a very strict webserver admin. For the long history of looking for fixes and solutions, see for example this thread, and, even more interesting, this very long and impressive thread here. All the troubles I run into have to do with file ownership and permissions. This is my current setup. Note: this was just a quickly hacked installation - quick and dirty. My interest is in the general options I have for porting a Drupal site from Linux to Linux. linux-vi17:/srv/www/htdocs/com624 # ls -l insgesamt 224 -rwxrwxrwx 1 root www 45285 19. Jan 00:54 CHANGELOG.txt -rwxrwxrwx 1 root www 925 19. Jan 00:54 COPYRIGHT.txt -rwxrwxrwx 1 root www 206 19. Jan 00:54 cron.php drwxrwxrwx 2 root www 4096 19. Jan 00:54 includes -rwxrwxrwx 1 root www 923 19. Jan 00:54 index.php -rwxrwxrwx 1 root www 1244 19. Jan 00:54 INSTALL.mysql.txt -rwxrwxrwx 1 root www 1011 19. Jan 00:54 INSTALL.pgsql.txt -rwxrwxrwx 1 root www 47073 19. Jan 00:54 install.php -rwxrwxrwx 1 root www 15572 19. Jan 00:54 INSTALL.txt -rwxrwxrwx 1 root www 14940 19. Jan 00:54 LICENSE.txt -rwxrwxrwx 1 root www 1858 19. Jan 00:54 MAINTAINERS.txt drwxrwxrwx 3 root www 4096 19. Jan 00:54 misc drwxrwxrwx 35 root www 4096 19. Jan 00:54 modules drwxrwxrwx 4 root www 4096 19. Jan 00:54 profiles -rwxrwxrwx 1 root www 1470 19. Jan 00:54 robots.txt drwxrwxrwx 2 root www 4096 19. Jan 00:54 scripts drwxrwxrwx 4 root www 4096 19. Jan 00:54 sites drwxrwxrwx 7 root www 4096 19. Jan 00:54 themes -rwxrwxrwx 1 root www 26250 19. Jan 00:54 update.php -rwxrwxrwx 1 root www 4864 19. Jan 00:54 UPGRADE.txt -rwxrwxrwx 1 root www 294 19. Jan 00:54 xmlrpc.php linux-vi17:/srv/www/htdocs/com624 # Thanks to BetaRide's answer, here is a quick overview of the drush rsync functionality (http://drush.ws/): core-rsync Rsync the Drupal tree to/from another server using ssh. Examples: drush rsync @dev @stage Rsync Drupal root from dev to stage (one of which must be local). drush rsync ./ @stage:%files/img Rsync all files in the current directory to the 'img' directory in the file storage folder on stage. Arguments: source May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php. destination May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php. Options: --mode The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az. --RSYNC-FLAG Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation. --exclude-conf Excludes settings.php from being rsynced. Default. --include-conf Allow settings.php to be rsynced --exclude-files Exclude the files directory. --exclude-sites Exclude all directories in "sites/" except for "sites/all". --exclude-other-sites Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site" --exclude-paths List of paths to exclude, seperated by : (Unix-based systems) or ; (Windows). --include-paths List of paths to include, seperated by : (Unix-based systems) or ; (Windows). Topics: docs-aliases Site aliases overview with examples Aliases: rsync
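
    For the ownership question itself, the usual routine after rsyncing is to normalize owner and modes in one sweep rather than chasing individual files. A hedged Python sketch (user/group names and modes are assumptions to adapt to the host's policy; run as root):

        import os
        import shutil

        DOCROOT = "/srv/www/htdocs/com624"   # adjust to the real vhost path
        OWNER, GROUP = "wwwrun", "www"       # typical OpenSuse Apache user/group

        for dirpath, dirnames, filenames in os.walk(DOCROOT):
            shutil.chown(dirpath, user=OWNER, group=GROUP)
            os.chmod(dirpath, 0o755)              # rwxr-xr-x for directories
            for name in filenames:
                path = os.path.join(dirpath, name)
                shutil.chown(path, user=OWNER, group=GROUP)
                os.chmod(path, 0o644)             # rw-r--r-- for files

    Drupal's sites/*/files subtree is the one place that genuinely needs to stay writable by the web server account; everything else can remain read-only to it.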

  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and FluentNHibernate to work with my latest little project. (I'm working on a bug tracking application.) I created a nice data access layer using the Repository pattern. I decided that my entities are nothing special, and also that with the current maturity of ORMs, I don't want to hand-craft the database. So, I chose to use FluentNHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create". It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all tables in the database, and everything I need for it. It even mapped my many-to-many relationships correctly. However, there are a few small glitches: All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it that it should allow null only for those types for which null makes sense in .NET (e.g. non-nullable value types shouldn't allow null). All of the nvarchar and varbinary columns it created have a default length of 255. I would prefer to have them at max instead. Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length for some others, will it correctly work with them?) EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be called "ID", not "Id", but it seems to me that the PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception saying that there is no primary key mapped. Any thoughts on this?

  • Cannot Call WordPress Plugin Files Under wp-content

    - by Volomike
    I have a client who has many blog customers. Each of these WordPress blogs calls a plugin that provides a product link. The way that link is composed looks like this: {website}/wp-content/plugins/prodx/product?id=432320 This works fine on all blogs except two. On those two, when you try to call the URL, you get a 404. So, I disabled all plugins except prodx and reverted the theme to default (Kubrick), thinking perhaps a plugin intercept via the add_action() API was doing this, such as intercepting URLs and redirecting them. However, this did not help. So, I upgraded WordPress to the latest version. Again, didn't fix. So, I checked permissions, comparing with a blog that worked just fine. Again, didn't fix. So I replaced the .htaccess, using one from a working blog. Again, didn't fix. So I replaced all the files using some from a working blog that was identical to this one, and then restored the wp-config.php file back so that it talked to the right blog database. Again, didn't fix. Again I checked permissions meticulously, comparing to a perfectly working blog. Again, didn't fix. So, I created a test.php that looks like so: <?php print_r($_GET); echo "hello world"; I then copied it into another plugin folder and used my browser to get to it -- again, 404. So I copied it into the root of wp-content/plugins and tried to call it there -- again, 404. So I copied it into wp-content -- again, 404. Last, I copied it into the root of the WordPress blog website, and this time, it worked! It doesn't make sense. I started to think that perhaps something was going on with /etc/httpd/conf/httpd.conf for this customer, but the only thing I saw different in there for this customer was that the IP address was different from the one for the customer's blog that worked. Each customer gets their own IP in this environment my client has built. My client's sysop is baffled too. What do you think is going on? Is there something wrong in the WP database for this customer? Is there something wrong in httpd.conf?

  • PHP - Cannot modify header information...

    - by Scott W.
    Hi, I am going crazy with this error: Cannot modify header information - headers already sent by... Please note that I know about the gazillion results on Google and on Stack Overflow. My problem is the way I've constructed my pages. To keep HTML separate from PHP, I use include files. So, for example, my pages look something like this: <?php require_once('web.config.php'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Login</title> <link rel="shortcut icon" href="images/favicon.gif"/> <link rel="shortcut icon" href="images/favicon.ico"/> <link rel="stylesheet" type="text/css" href="<?php echo SITE_STYLE; ?>"/> </head> <body> <div id="page_effect" style="display:none;"> <?php require_once('./controls/login/login.control.php'); ?> </div> </body> </html> So, by the time my PHP file is included, output has already started and the headers are already sent. Part of the include file looks like this: // redirect to destination if($user_redirect != 'default') { $destination_url = $row['DestinationUrl']; header('Location:'.$user_redirect); } elseif($user_redirect == 'default' && isset($_GET['ReturnURL'])) { $destination_url = $_GET['ReturnURL']; header('Location:'.$destination_url); } else { header('Location:'.SITE_URL.'login.php'); } But I can't figure out how to work around this. I can't move the header redirect before the output, so having output buffering on is the only thing I can do. Naturally it works fine that way - but having to rely on that just stinks. It would be nice if PHP had an alternative way to redirect, or had additional parameters to tell it to clear the buffer.
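
    Independent of output buffering, the structural fix is to decide on the redirect before any byte of the body is emitted; in PHP terms, require the control code and call header() above the DOCTYPE, then exit. A neutral Python sketch of the compute-headers-first pattern (illustrative names only):

        SITE_URL = "http://example.com/"   # stand-in for the SITE_URL constant

        def handle_login(user_redirect, return_url):
            """Decide the whole response first; emit headers before any body."""
            if user_redirect != "default":
                location = user_redirect
            elif return_url:
                location = return_url
            else:
                location = SITE_URL + "login.php"
            headers = [("Location", location)]
            body = b""                      # a redirect needs no body at all
            return "302 Found", headers, body

        print(handle_login("default", "/members/"))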

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves - things like networking, storage and monitoring. As part of the big move, to replace our leased direct attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks and DRBD. It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system. Recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucketloads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday. Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly. Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary looking entry: Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170 Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors Nov 15 06:49:45 umbilo smartd[2827]: Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error Nov 15 06:49:45 umbilo smartd[2827]: # 1 Short offline Completed: read failure 90% 6576 3421766910 Nov 15 06:49:45 umbilo smartd[2827]: # 2 Short offline Completed: read failure 90% 6087 3421766910 Nov 15 06:49:45 umbilo smartd[2827]: # 3 Short offline Completed: read failure 10% 5901 656821791 Nov 15 06:49:45 umbilo smartd[2827]: # 4 Short offline Completed: read failure 90% 5818 651637856 Nov 15 06:49:45 umbilo smartd[2827]: So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is. But we also see that disk 8's SMART Read Errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that: Disk 8 is experiencing an odd hardware fault that results in intermittently long operation times. Somehow this fault condition on the disk is locking up the entire array. Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array. The Question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?
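
    While waiting for a definitive answer on the controller behaviour, pending/uncorrectable sector counts like the ones in that syslog can be polled and alerted on before a limping disk degrades the whole array. A hedged Python sketch (assumes smartctl from smartmontools and 3ware-style device addressing, as in the log above; the attribute parsing is approximate):

        import subprocess

        WATCHED = ("Current_Pending_Sector", "Offline_Uncorrectable")

        def smart_counters(device, port):
            """Return {attribute_name: raw_value} for the watched counters."""
            out = subprocess.run(
                ["smartctl", "-A", "-d", f"3ware,{port}", device],
                capture_output=True, text=True, check=False,
            ).stdout
            counts = {}
            for line in out.splitlines():
                fields = line.split()
                if len(fields) >= 10 and fields[1] in WATCHED:
                    counts[fields[1]] = int(fields[9])   # RAW_VALUE column
            return counts

        for port in range(12):  # one entry per drive on the controller
            bad = {k: v for k, v in smart_counters("/dev/twa0", port).items() if v}
            if bad:
                print(f"disk {port}: {bad}")  # hook to email/Cacti thresholds instead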

  • web service filling gridview awfully slow, as is paging/sorting

    - by nat
    Hi, I am making a page which calls a web service to fill a gridview. This is returning a lot of data and is horribly slow. I ran svcutil.exe on the WSDL page and it generated the class and config, so I have a load of strongly typed objects coming back from each request to the many service functions. I am then using LINQ to loop around the objects, grabbing the necessary information as I go, but for each row in the grid I need to loop around an object and grab another list of objects (from the same request) and loop around each of them - a one-to-many parent/child relationship. All of this then gets dropped into a custom datatable a row at a time. Hope that makes sense... I'm not sure there is any way to speed up the initial load, but surely I should be able to page/sort a lot faster than it currently does; at the moment it appears to take as long to page/sort as it does to load initially. I thought that if, when the page first loaded, I put the datasource of the grid in the session, I could whip it out of the session to deal with paging/sorting and the like. Basically it is doing the below: protected void Page_Load(object sender, EventArgs e) { //init the datatable //grab the filter vars (if there are any) WebServiceObj WS = WSClient.Method(args); //fill the datatable (around and around we go) foreach (ParentObject po in WS.ReturnedObj) { var COs = from ChildObject c in WS.AnotherReturnedObj where c.whatever.equals(...) ...etc foreach(ChildObject c in COs){ myDataTable.Rows.Add(tlo.this, tlo.that, c.thisthing, c.thatthing, etc......); } } grdListing.DataSource = myDataTable; Session["dt"] = myDataTable; grdListing.DataBind(); } protected void Listing_PageIndexChanging(object sender, GridViewPageEventArgs e) { grdListing.PageIndex = e.NewPageIndex; grdListing.DataSource = Session["dt"] as DataTable; grdListing.DataBind(); } protected void Listing_Sorting(object sender, GridViewSortEventArgs e) { DataTable dt = Session["dt"] as DataTable; DataView dv = new DataView(dt); string sortDirection = " ASC"; if (e.SortDirection == SortDirection.Descending) sortDirection = " DESC"; dv.Sort = e.SortExpression + sortDirection; grdListing.DataSource = dv.ToTable(); grdListing.DataBind(); } Am I doing this totally wrongly? Or is the slowness just coming from the amount of data being bound in/returned from the web service? There are maybe 15 columns(ish) and a whole load of rows, with more being added to the data the web service queries all the time. Any suggestions/tips happily received. Thanks
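
    The flattening can also be timed separately from the service call: if the call dominates, session caching cannot speed up the first load, but paging and sorting over cached rows should then be near-instant. A language-neutral Python sketch of the transform-once, slice-per-page pattern (field names are illustrative):

        from operator import itemgetter

        def flatten(parents, children):
            """Join parent and child objects once, into plain row dicts."""
            by_parent = {}
            for c in children:                   # index children up front, so the
                by_parent.setdefault(c["parent_id"], []).append(c)
            return [                             # join is one pass, not a query per row
                {"parent": p["name"], "child": c["name"]}
                for p in parents
                for c in by_parent.get(p["id"], [])
            ]

        def page(rows, index, size=20, sort_key=None, descending=False):
            """Sort/slice the cached rows; no web-service call, no re-flattening."""
            if sort_key:
                rows = sorted(rows, key=itemgetter(sort_key), reverse=descending)
            return rows[index * size:(index + 1) * size]

        cache = flatten([{"id": 1, "name": "P1"}],
                        [{"parent_id": 1, "name": "C1"},
                         {"parent_id": 1, "name": "C2"}])   # do once; keep in session
        print(page(cache, 0, sort_key="child", descending=True))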

  • Slow MySQL query....only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs quicker than a second, and other times takes 1 to 10 minutes to run. Here's the entry from the slow query log: # Query_time: 543 Lock_time: 0 Rows_sent: 0 Rows_examined: 124948974 use statsdb; SELECT count(distinct Visits.visitorid) as 'uniques' FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime>=1275721200 and visittime<=1275807599 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9 AND Visits.visitorid NOT IN (SELECT Visits.visitorid FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime<1275721200 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9); It's basically counting unique visitors, and it's doing that by counting the visitors for today and then subtracting those that have been here before. If you know of a better way to do this, let me know. I just don't understand why sometimes it can be so quick, and other times takes so long - even with the same exact query under the same server load. Here's the EXPLAIN on this query. As you can see, it's using the indexes I've set up: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY Visits range visittime_visitorid,visitorid visittime_visitorid 4 NULL 82500 Using where; Using index 1 PRIMARY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where 2 DEPENDENT SUBQUERY Visits ref visittime_visitorid,visitorid visitorid 8 func 1 Using where 2 DEPENDENT SUBQUERY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time, since 90% of the time the old query returned much quicker. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods. Could the quick behavior be due to the query being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time. Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing? In case you want server info: Accessing via PHP 4.4.4 MySQL 4.1.22 All tables are InnoDB We run optimize table on all tables weekly The sum of both the tables used in the query is 500 MB MySQL config: key_buffer = 350M max_allowed_packet = 16M thread_stack = 128K sort_buffer = 14M read_buffer = 1M bulk_insert_buffer_size = 400M set-variable = max_connections=150 query_cache_limit = 1048576 query_cache_size = 50777216 query_cache_type = 1 tmp_table_size = 203554432 table_cache = 120 thread_cache_size = 4 wait_timeout = 28800 skip-external-locking innodb_file_per_table innodb_buffer_pool_size = 3512M innodb_log_file_size=100M innodb_log_buffer_size=4M
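
    One commonly suggested rewrite replaces the dependent NOT IN subquery with a LEFT JOIN anti-join, which this era of MySQL tends to plan more stably. A hedged sketch of the shape (Python DB-API wrapper; the SQL is unvalidated against the real schema, and the column qualifications are guesses):

        import MySQLdb  # any DB-API driver would do; purely illustrative here

        UNIQUES_SQL = """
            SELECT COUNT(DISTINCT v.visitorid) AS uniques
            FROM Visits v
            JOIN Visitors r ON r.visitorid = v.visitorid AND r.segmentid = %s
            LEFT JOIN Visits prior
                 ON prior.visitorid = v.visitorid
                AND prior.candidateid IN (32)
                AND prior.visittime < %s
                AND (prior.omit = 0 OR prior.omit >= %s)
            WHERE v.candidateid IN (32)
              AND v.visittime BETWEEN %s AND %s
              AND (v.omit = 0 OR v.omit >= %s)
              AND prior.visitorid IS NULL   -- i.e. never seen before this window
        """

        def count_uniques(conn, start, end, segment=9):
            cur = conn.cursor()
            cur.execute(UNIQUES_SQL, (segment, start, end, start, end, end))
            return cur.fetchone()[0]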

  • Trapping MySQL Warnings on Calls Wrapped in Classes -- Python

    - by chernevik
    I can't get Python's try/except blocks to catch MySQL warnings when the execution statements are wrapped in classes. I have a class that has a MySQL connection object as an attribute, a MySQL cursor object as another, and a method that runs queries through that cursor object. The cursor is itself wrapped in a class. These seem to run queries properly, but the MySQL warnings they generate are not caught as exceptions in a try/except block. Why don't the try/except blocks catch the warnings? How would I revise the classes or method calls to catch the warnings? Also, I've looked through the prominent sources and can't find a discussion that helps me understand this. I'd appreciate any reference that explains this. Please see the code below. Apologies for the verbosity; I'm a newbie. #!/usr/bin/python import MySQLdb import sys import copy sys.path.append('../../config') import credentials as c # local module with dbase connection credentials #============================================================================= # CLASSES #------------------------------------------------------------------------ class dbMySQL_Connection: def __init__(self, db_server, db_user, db_passwd): self.conn = MySQLdb.connect(db_server, db_user, db_passwd) def getCursor(self, dict_flag=True): self.dbMySQL_Cursor = dbMySQL_Cursor(self.conn, dict_flag) return self.dbMySQL_Cursor def runQuery(self, qryStr, dict_flag=True): qry_res = runQueryNoCursor(qryStr=qryStr, \ conn=self, \ dict_flag=dict_flag) return qry_res #------------------------------------------------------------------------ class dbMySQL_Cursor: def __init__(self, conn, dict_flag=True): if dict_flag: dbMySQL_Cursor = conn.cursor(MySQLdb.cursors.DictCursor) else: dbMySQL_Cursor = conn.cursor() self.dbMySQL_Cursor = dbMySQL_Cursor def closeCursor(self): self.dbMySQL_Cursor.close() #============================================================================= # QUERY FUNCTIONS #------------------------------------------------------------------------------ def runQueryNoCursor(qryStr, conn, dict_flag=True): dbMySQL_Cursor = conn.getCursor(dict_flag) qry_res =runQueryFnc(qryStr, dbMySQL_Cursor.dbMySQL_Cursor) dbMySQL_Cursor.closeCursor() return qry_res #------------------------------------------------------------------------------ def runQueryFnc(qryStr, dbMySQL_Cursor): qry_res = {} qry_res['rows'] = dbMySQL_Cursor.execute(qryStr) qry_res['result'] = copy.deepcopy(dbMySQL_Cursor.fetchall()) qry_res['messages'] = copy.deepcopy(dbMySQL_Cursor.messages) qry_res['query_str'] = qryStr return qry_res #============================================================================= # USAGES qry = 'DROP DATABASE IF EXISTS database_of_armaments' dbConn = dbMySQL_Connection(**c.creds) def dbConnRunQuery(): # Does not trap an exception; warning displayed to standard error. try: dbConn.runQuery(qry) except: print "dbConn.runQuery() caught an exception." def dbConnCursorExecute(): # Does not trap an exception; warning displayed to standard error. dbConn.getCursor() # try/except block does catches error without this try: dbConn.dbMySQL_Cursor.dbMySQL_Cursor.execute(qry) except Exception, e: print "dbConn.dbMySQL_Cursor.execute() caught an exception." print repr(e) def funcRunQueryNoCursor(): # Does not trap an exception; no warning displayed try: res = runQueryNoCursor(qry, dbConn) print 'Try worked. %s' % res except Exception, e: print "funcRunQueryNoCursor() caught an exception." print repr(e) #============================================================================= if __name__ == '__main__': print '\n' print 'EXAMPLE -- dbConnRunQuery()' dbConnRunQuery() print '\n' print 'EXAMPLE -- dbConnCursorExecute()' dbConnCursorExecute() print '\n' print 'EXAMPLE -- funcRunQueryNoCursor()' funcRunQueryNoCursor() print '\n'
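
    One thing worth checking before suspecting the wrapper classes: MySQLdb reports conditions like "database doesn't exist; can't drop" through Python's warnings machinery, and a warning is not an exception unless it is promoted. A minimal sketch of the promotion, which works however deeply the cursor is wrapped:

        import warnings

        def run_query_strict(cursor, sql):
            """Execute with every MySQL warning promoted to a catchable exception."""
            with warnings.catch_warnings():
                warnings.simplefilter("error")   # all warnings now raise
                try:
                    return cursor.execute(sql)
                except Warning as w:
                    print("caught MySQL warning:", w)
                    return None

        # usage inside the question's wrapper, e.g. in runQueryFnc():
        #   qry_res['rows'] = run_query_strict(dbMySQL_Cursor, qryStr)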

  • New to asp.net. Need help debugging this email form.

    - by Roeland
    Hey guys, First of all, I am a PHP developer and most of .NET is alien to me, which is why I am posting here! I just migrated a site over from one web host to another. The whole site is written in .NET. None of the site is database driven, so most of it works, except for the contact form. The output on the site simply states that there was an error: "There has been an error - please try to submit the contact form again, if you continue to experience problems, please notify our webmaster." This is just a simple message it pops out if it gets to the "catch" part of the email function. I went into web.config and changed the parameters: <emailaddresses> <add name="System" value="[email protected]"/> <add name="Contact" value="[email protected]"/> <add name="Info" value="[email protected]"/> </emailaddresses> <general> <add name="WebSiteDomain" value="hoyespharmacy.com"/> </general> Then the .cs file for the contact page contains the mail function EmailFormData(): private void EmailFormData() { try { StringBuilder body = new StringBuilder(); body.Append("Name" + ": " + txtName.Text + "\n\r"); body.Append("Phone" + ": " + txtPhone.Text + "\n\r"); body.Append("Email" + ": " + txtEmail.Text + "\n\r"); body.Append("Fax" + ": " + txtEmail.Text + "\n\r"); body.Append("Subject" + ": " + ddlSubject.SelectedValue + "\n\r"); body.Append("Message" + ": " + txtMessage.Text); MailMessage mail = new MailMessage(); mail.IsBodyHtml = false; mail.To.Add(new MailAddress(Settings.GetEmailAddress("System"))); mail.Subject = "Contact Us Form Submission"; mail.From = new MailAddress(Settings.GetEmailAddress("System"), Settings.WebSiteDomain); mail.Body = body.ToString(); SmtpClient smtpcl = new SmtpClient(); smtpcl.Send(mail); } catch { Utilities.RedirectPermanently(Request.Url.AbsolutePath + "?messageSent=false"); } } How do I see what the actual error is? I figure I can do something with the "catch" part of the function... Any pointers? Thanks!
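
    The immediate debugging step is to stop discarding the exception object: catch it into a variable and persist its full text. The idea, sketched in Python (the C# analogue is catch (Exception ex) plus writing ex.ToString() to a file or the event log; names below are stand-ins):

        import logging
        import traceback

        logging.basicConfig(filename="mailform.log", level=logging.ERROR)

        def email_form_data(send_mail):
            try:
                send_mail()              # the SMTP call that currently fails silently
            except Exception:
                # Record type, message and stack trace, not just a generic error flag.
                logging.error("mail send failed:\n%s", traceback.format_exc())
                return "?messageSent=false"
            return "?messageSent=true"

        def broken_send():
            raise ConnectionError("SMTP host not configured")  # hypothetical failure

        print(email_form_data(broken_send))   # then read mailform.log for the details

    After a host move, the logged detail frequently points at mail settings that did not survive the migration, such as a missing <system.net><mailSettings> block in web.config.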

  • Greasemonkey is getting an empty document.body on select Google pages.

    - by Brock Adams
    Hi, I have a Greasemonkey script that processes Google search results. But it's failing in a few instances, when XPath searches (and the document body) appear to be empty. Running the code in Firebug's console works every time. It only fails in a Greasemonkey script. Greasemonkey sees an empty document.body. I've boiled the problem down to a test Greasemonkey script, below. I'm using Firefox 3.5.9 and Greasemonkey 0.8.20100408.6 (but earlier versions had the same problem). Problem: Greasemonkey sees an empty document.body. Recipe to Duplicate: Install the Greasemonkey script. Open a new tab or window. Navigate to Google.com (http://www.google.com/). Search on a simple term like "cats". Check Firefox's Error console (Ctrl-shift-J) or Firebug's console. The script will report that the document body is empty. Hit refresh. The script will show a good result (document body found). Note that the failure only reliably appears on Google results obtained this way, and on a new tab/window. Turn JavaScript off globally (javascript.enabled set to false in about:config). Repeat steps 2 through 5. Only now will the Greasemonkey script work. It seems that Google's JavaScript is killing the DOM tree for Greasemonkey, somehow. I've tried a time-delayed retest and even a programmatic refresh; the script still fails to see the document body. Test Script: // // ==UserScript== // @name TROUBLESHOOTING 2 snippets // @namespace http://www.google.com/ // @description For code that has funky misfires and defies standard debugging. // @include http://*/* // ==/UserScript== // function LocalMain (sTitle) { var sUserMessage = ''; //var sRawHtml = unsafeWindow.document.body.innerHTML; //-- unsafeWindow makes no difference. var sRawHtml = document.body.innerHTML; if (sRawHtml) { sRawHtml = sRawHtml.replace (/^\s\s*/, ''). substr (0, 60); sUserMessage = sTitle + ', Doc body = ' + sRawHtml + ' ...'; } else { sUserMessage = sTitle + ', Document body seems empty!'; } if (typeof (console) != "undefined") { console.log (sUserMessage); } else { if (typeof (GM_log) != "undefined") GM_log (sUserMessage); else if (!sRawHtml) alert (sUserMessage); } } LocalMain ('Preload'); window.addEventListener ("load", function() {LocalMain ('After load');}, false);

  • JPA entity design / cannot delete entity

    - by timaschew
    I thought what I want is simple, but I cannot find any solution for my problem. I'm using playframework 1.2.3, and it's using Hibernate as the JPA provider. So I think playframework has nothing to do with the problem. I have some classes (I omit the non-relevant fields): public class User { ... } public class Task { public DataContainer dataContainer; } public class DataContainer { public Session session; public User user; } public class Session { ... } So I have an association from Task to DataContainer and from DataContainer to Session, and the DataContainer belongs to a User. The DataContainers can all have the same User, but the Session has to be different for each instance. And the DataContainer of each Task also has to be different. A DataContainer can have a Session or not (it's optional). I use only unidirectional associations; that should be sufficient. In other words: Every Task must have one DataContainer. Every DataContainer must have one/the same User and can have one Session. To create a DB schema I use JPA annotations: @Entity public class User extends Model { ... } @Entity public class Task extends Model { @OneToOne(optional = false, cascade = CascadeType.ALL) public DataContainer dataContainer; } @Entity public class DataContainer extends Model { @OneToOne(optional = true, cascade = CascadeType.ALL) public Session session; @ManyToOne(optional = false, cascade = CascadeType.ALL) public User user; } @Entity public class Session extends Model { ... } BTW: Model is a Play class and provides the primary id as a long type. When I create an object for each entity and "connect" them (I mean the associations), it works fine. But when I try to delete a Session, I get a constraint violation exception, because a DataContainer still refers to the Session I want to delete. I want the Session field of the DataContainer to be set to null; that is, the foreign key (session_id) should be unset in the database. That would be okay, because it's optional. I think I have multiple problems. Am I using the right annotation, @OneToOne? I found some additional annotations and attributes on the internet: @JoinColumn, and a mappedBy attribute for the inverse relationship. But I don't have them, because it's not bidirectional. Or is a bidirectional association essential? Another try was to use @OnDelete(action = OnDeleteAction.CASCADE), and the constraint changed from NO ACTION on update or delete to: ADD CONSTRAINT fk4745c17e6a46a56 FOREIGN KEY (session_id) REFERENCES annotation_session (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE; But in this case, when I delete a session, the DataContainer and User are deleted. That's wrong for me. EDIT: I'm using PostgreSQL 9, the JDBC stuff is included in Play, and my only db config is db=postgres://app:app@localhost:5432/app
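
    At the database level, the wished-for behaviour is ON DELETE SET NULL on the optional foreign key; at the ORM level, the equivalent is to null out the reference and save before deleting the Session. A Python/SQLAlchemy analogy of the schema and the pre-delete step (not Play/Hibernate code):

        from sqlalchemy import Column, ForeignKey, Integer, create_engine
        from sqlalchemy.orm import declarative_base, relationship, Session as DbSession

        Base = declarative_base()

        class AppSession(Base):
            __tablename__ = "session"
            id = Column(Integer, primary_key=True)

        class DataContainer(Base):
            __tablename__ = "data_container"
            id = Column(Integer, primary_key=True)
            # optional reference; ondelete="SET NULL" is the DDL-level fix
            session_id = Column(Integer,
                                ForeignKey("session.id", ondelete="SET NULL"),
                                nullable=True)
            session = relationship(AppSession)

        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)
        with DbSession(engine) as db:
            s = AppSession()
            dc = DataContainer(session=s)
            db.add(dc)
            db.commit()
            dc.session = None   # JPA equivalent: dataContainer.session = null; save
            db.delete(s)        # nothing references the row any more
            db.commit()

    Separately, cascade = CascadeType.ALL on a shared target such as user is what propagates deletes to it; restricting that cascade (e.g. to PERSIST/MERGE) is usually what is wanted for shared parents.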

  • How to use Facebook graph API to retrieve fan photos uploaded to wall of fan page?

    - by Joe
    I am creating an external photo gallery using PHP and the Facebook graph API. It pulls thumbnails as well as the large image from albums on our Facebook fan page. Everything works perfectly, except that I'm only able to retrieve photos that an ADMIN posts to our page (graph.facebook.com/myalbumid/photos).

    Is there a way to use the graph API to load publicly uploaded photos from fans? I want to retrieve the pictures from the "Photos from" album, but getting the ID for the graph query is not like other albums... it looks like this: http://www.facebook.com/media/set/?set=o.116860675007039

    Another note: the only way I've come close to retrieving this data is by using the "feed" option, i.e.: graph.facebook.com/pageid/feed

    EDIT: This is about as far as I could get. It works, but has the issues stated below. Maybe someone could expand on this, or provide a better solution. (Using the FB PHP SDK.)

        <?php
        require_once ('config.php');

        // get all photos for the album
        $photos = $facebook->api("/YourID/tagged");

        $maxitem = 10;
        $count = 0;

        foreach ($photos['data'] as $photo) {
            if ($photo['type'] == "photo"):
                echo "<img src='{$photo['picture']}' />", "<br />";
            endif;
            $count += 1;
            if ($count >= $maxitem) break;
        }
        ?>

    Issues with this:

    1) Since I don't know a method for graph-querying specific "types" of tags, I had to run a conditional statement to display only photos.

    2) You cannot effectively use "?limit=#" with this, because, as I said, the "tagged" query contains all types (photo, video, and status). So if you are building a photo gallery and wish to avoid fetching the entire feed by using ?limit, you will lose images.

    3) The only content that shows up in the "tagged" query is from people that are not admins of the page. This isn't the end of the world, but I don't understand why Facebook wouldn't allow yourself to be shown in this data as long as you posted it "as yourself" and not as the page.
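    One way to soften issue 2 (a sketch only: it assumes the Graph API's standard paging object with a paging.next URL, and YOUR_PAGE_ID and the page size of 25 are placeholders) is to keep following the paging links until enough photo-type entries have been collected, rather than trusting ?limit:

        <?php
        require_once ('config.php');

        $maxitem = 10;
        $photos  = array();

        // First page of tagged posts; 'limit' here is only a page size.
        $result = $facebook->api('/YOUR_PAGE_ID/tagged', array('limit' => 25));

        while (count($photos) < $maxitem && !empty($result['data'])) {
            foreach ($result['data'] as $entry) {
                if ($entry['type'] == 'photo' && isset($entry['picture'])) {
                    $photos[] = $entry['picture'];
                    if (count($photos) >= $maxitem) break;
                }
            }
            // Follow the "next" paging link, if Facebook supplied one.
            // (file_get_contents on a URL requires allow_url_fopen.)
            if (count($photos) < $maxitem && isset($result['paging']['next'])) {
                $result = json_decode(file_get_contents($result['paging']['next']), true);
            } else {
                break;
            }
        }

        foreach ($photos as $src) {
            echo "<img src='{$src}' /><br />";
        }
        ?>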

  • Drupal (CMS) or CodeIgniter (MVC) for creating a new web application?

    - by ajsie
    I'm going to create a new web application that is very customized. It will contain images that are fully searchable, in a very, very customized way. When you click on the pictures you can add comments, and so on. It requires users to be registered, but the registration/login process will be highly customized too.

    At the moment I'm using CodeIgniter for this. But I've read a lot of posts about CMSes like Drupal, and it sounds like I could let one handle the basic stuff, maybe the design and other front-end work. I have no experience with a CMS; in fact, I just started to use an MVC framework like CI and was impressed by how much easier it makes starting development.

    So I wonder: if I'm going to create this kind of application, could I use Drupal and then add the usual stuff, as I was going to do with CodeIgniter, like controllers, views, models, config files, my own libraries, and so on? How does it work on a system like Drupal? How do you write PHP with it, as with any MVC framework? It sounds like it has a lot of modules; I just wonder whether I can use it as an MVC framework but keep the benefit of having all this basic stuff and design ready to use. In that case it sounds like the best "library" for building a web application from scratch. Or is it difficult to create a customized app with it?

    I guess it has modules like images and users, but then how could I customize them, so that every image has tags and country information on it, or so that every user can subscribe to changes to an image and email will be sent to subscribers, and so on? I guess it's easy to install a module; the question is how to customize it. Maybe I don't need all those table columns. Maybe I want to add or remove business logic.

    What are the pros and cons of using Drupal for this? Is it even the right way to go? Can you make a Stack Overflow with Drupal? Facebook? Twitter? YouTube? (Assuming that you know PHP, of course.) Share your thoughts, because I'm totally new to creating a web application! Thanks.

  • Custom SessionListener, name is not bound in this context, javax.naming.NameNotFoundException

    - by mehmet6parmak
    Hi, I am trying to implement HttpSessionListener so that users of this listener can register implementations of the ISessionEvent interface for session events. The code is below:

        public class MySessionListener implements HttpSessionListener {

            @Resource
            ISessionEvent sessionEvent;

            public ISessionEvent getSessionEvent() {
                return sessionEvent;
            }

            public void setSessionEvent(ISessionEvent sessionEvent) {
                this.sessionEvent = sessionEvent;
            }

            @Override
            public void sessionCreated(HttpSessionEvent arg0) {
                sessionEvent.SessionCreated(arg0.getSession());
            }

            @Override
            public void sessionDestroyed(HttpSessionEvent arg0) {
                sessionEvent.SessionDestroyed(arg0.getSession());
            }
        }

    When a user implements ISessionEvent and adds it as a bean, the SessionCreated and SessionDestroyed functions of the implementation will be called when these events occur. You may ask why I don't just write the code inside the listener's methods; I don't, I'm just experimenting. When I try the code above I get the following error message:

        javax.naming.NameNotFoundException: Name com.mehmet6parmak.sessionlistener.MySessionListener is not bound in this Context
            at org.apache.naming.NamingContext.lookup(NamingContext.java:770)
            at org.apache.naming.NamingContext.lookup(NamingContext.java:153)
            at org.apache.catalina.util.DefaultAnnotationProcessor.lookupFieldResource(DefaultAnnotationProcessor.java:278)
            at org.apache.catalina.util.DefaultAnnotationProcessor.processAnnotations(DefaultAnnotationProcessor.java:187)
            at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4082)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:4630)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
            at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:445)
            at org.apache.catalina.core.StandardService.start(StandardService.java:519)
            at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)

    The @Resource annotation causes the error, but I could not resolve it. Thanks all...

    Interface and implementation:

        @Resource
        public interface ISessionEvent {
            public void SessionCreated(HttpSession session);
            public void SessionDestroyed(HttpSession session);
        }

        @Resource
        public class SessionEvent implements ISessionEvent {

            @Override
            public void SessionDestroyed(HttpSession session) {
                System.out.println("From Session Event Callback(Destroy):" + session.getId());
            }

            @Override
            public void SessionCreated(HttpSession session) {
                System.out.println("From Session Event Callback(Create):" + session.getId());
            }
        }

    Bean definition:

        <context:annotation-config/>
        <context:component-scan base-package="com.mehmet6parmak">
        </context:component-scan>
        <bean id="sessionEvent" autowire="byName" class="com.mehmet6parmak.sessionlistener.SessionEvent"></bean>

    Solution: Using the method described in the link works.
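    For what it's worth, the usual explanation matches this trace: the servlet container, not Spring, instantiates an HttpSessionListener, so Tomcat tries to satisfy @Resource through JNDI and fails with NameNotFoundException. A sketch of the lookup-based workaround (it assumes Spring's root context is registered via ContextLoaderListener and that the bean is named sessionEvent, as in the definition above):

        import javax.servlet.http.HttpSessionEvent;
        import javax.servlet.http.HttpSessionListener;
        import org.springframework.web.context.WebApplicationContext;
        import org.springframework.web.context.support.WebApplicationContextUtils;

        public class MySessionListener implements HttpSessionListener {

            @Override
            public void sessionCreated(HttpSessionEvent event) {
                getSessionEvent(event).SessionCreated(event.getSession());
            }

            @Override
            public void sessionDestroyed(HttpSessionEvent event) {
                getSessionEvent(event).SessionDestroyed(event.getSession());
            }

            // Look the bean up at event time; the servlet container creates
            // this listener, so Spring never gets a chance to inject into it.
            private ISessionEvent getSessionEvent(HttpSessionEvent event) {
                WebApplicationContext ctx = WebApplicationContextUtils
                        .getWebApplicationContext(event.getSession().getServletContext());
                return (ISessionEvent) ctx.getBean("sessionEvent");
            }
        }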
