Search Results

Search found 3431 results on 138 pages for 'happy clicker'.


  • Comments show up in database, but only show up on my index page after a refresh.

    - by Truong
    Hi, I have AJAX, PHP, jQuery, and MySQL in this very simple website I'm trying to make. All it does is take text from a textarea, send it to the database, and use AJAX/jQuery to display that data on the index page. For some reason, though, when I press submit the data goes to the database, but I have to refresh the page myself to see that data on the page. I'm assuming the problem is in my AJAX/jQuery code or even some mistake in the index. Also, when I type text into the textarea and press submit, the text remains in the textarea until I refresh the page. Sorry if this is such a noob question; I'm trying to learn. Thanks so much.

    Here is the AJAX:

        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script>
        <script type="text/javascript">
        $(function() {
            $(".submit").click(function() {
                var comment = $("#comment").val();
                var post_id = $("#post").val();
                var dataString = '&comment=' + comment
                if (comment == '') {
                    alert('Fill something in please!');
                } else {
                    $("#flash").show();
                    $("#flash").fadeIn(400).html('<img src="noworries.jpg" /> ');
                    $.ajax({
                        type: "POST",
                        url: "commentajax.php",
                        data: dataString,
                        cache: false,
                        success: function(html) {
                            $("ol#update").append(html);
                            $("ol#update li:last").fadeIn("slow");
                            $("#flash").hide();
                        }
                    });
                }
                return false;
            });
        });
        </script>

    Here is the index/form area:

        <body>
        <div id="container"><img src="banner.jpg" width="890" height="150" alt="title" /></div>
        <id="update" class="timeline">
        <div id="flash"></div>
        <div id="container">
            <form action="#" method="post">
                <textarea name="comment" id="comment" cols="35" rows="4"></textarea><br />
                <input name="submit" type="submit" class="submit" id="submit" value=" Submit Comment " /><br />
            </form>
        </div>
        <id="update" class="timeline">
        <?php
        include('config.php');
        // $post_id value comes from the POSTS table
        $prefix = "I'm happy when";
        $sql = mysql_query("select * from comments order by com_id desc");
        while ($row = mysql_fetch_array($sql)) {
            $comment = $row['com_dis'];
        ?>
        <!-- Displaying comments -->
        <div id="container">
        <class="box">
        <?php echo "$prefix $comment"; ?>
        </div>
        <?php } ?>

    Here is my commentajax.php:

        <?php
        include('config.php');
        if ($_POST) {
            $comment = $_POST['comment'];
            $comment = mysql_real_escape_string($comment);
            mysql_query("INSERT INTO comments(com_id,com_dis) VALUES ('NULL', '$comment')");
        }
        ?>
        <li class="box"><br />
        <?php echo $comment; ?>
        </li>

    I'm sorry for so much code, but I just started learning this four days ago and this is probably one of the last bugs before the website is functional.

    Read the article

  • Load-balancing between a Procurve switch and a server

    - by vlad
    Hello, I've been searching around the web for this problem I've been having. It's similar in a way to this question: How exactly & specifically does layer 3 LACP destination address hashing work?

    My setup is as follows: I have a central switch, a Procurve 2510G-24, image version Y.11.16. It's the center of a star topology; there are four switches connected to it via a single gigabit link. Those switches service the users. On the central switch, I have a server with two gigabit interfaces that I want to bond together in order to achieve higher throughput, and two other servers that have single gigabit connections to the switch. The topology looks as follows:

        sw1   sw2   sw3   sw4
         |     |     |     |
        ---------------------
        |        sw0        |
        ---------------------
          ||     |     |
         srv1   srv2  srv3

    The servers were running FreeBSD 8.1. On srv1 I set up a lagg interface using the lacp protocol, and on the switch I set up a trunk for the two ports using lacp as well. The switch showed that the server was a lacp partner, I could ping the server from another computer, and the server could ping other computers. If I unplugged one of the cables, the connection would keep working, so everything looked fine. Until I tested throughput: there was only one link used between srv1 and sw0.

    All testing was conducted with iperf, and load distribution was checked with systat -ifstat. I was looking to test the load balancing for both receive and send operations, as I want this server to be a file server. There were therefore two scenarios:

        iperf -s on srv1 and iperf -c on the other servers
        iperf -s on the other servers and iperf -c on srv1, connected to all the other servers

    Every time only one link was used. If one cable was unplugged, the connections would keep going. However, once the cable was plugged back in, the load was not distributed. Each and every server is able to fill the gigabit link: in one-to-one test scenarios, iperf was reporting around 940Mbps. The CPU usage was around 20%, which means that the servers could withstand a doubling of the throughput.

    srv1 is a Dell PowerEdge SC1425 with onboard Intel 82541GI NICs (em driver on FreeBSD). After troubleshooting a previous problem with vlan tagging on top of a lagg interface, it turned out that the em driver could not support this. So I figured that maybe something else was wrong with the em drivers and/or the lagg stack, so I started up BackTrack 4r2 on this same server. srv1 now uses Linux kernel 2.6.35.8. I set up a bonding interface bond0; the kernel module was loaded with option mode=4 in order to get lacp. The switch was happy with the link, I could ping to and from the server, and I could even put vlans on top of the bonding interface. However, only half the problem was solved: if I used srv1 as a client to the other servers, iperf was reporting around 940Mbps for each connection, and bwm-ng showed, of course, a nice distribution of the load between the two NICs; if I ran the iperf server on srv1 and tried to connect with the other servers, there was no load balancing.

    I thought that maybe I was out of luck and the hashes for the two MAC addresses of the clients were the same, so I brought in two new servers and tested with the four of them at the same time, and still nothing changed. I tried disabling and re-enabling one of the links, and all that happened was the traffic switched from one link to the other and back to the first again. I also tried setting the trunk to "plain trunk mode" on the switch, and experimented with other bonding modes (roundrobin, xor, alb, tlb), but I never saw any traffic distribution.

    One interesting thing, though: one of the four switches is a Cisco 2950, image version 12.1(22)EA7. It has 48 10/100 ports and 2 gigabit uplinks. I have a server (call it srv4) with a 4-channel trunk connected to it (4x100), FreeBSD 8.0 release. The switch is connected to sw0 via gigabit. If I set up an iperf server on one of the servers connected to sw0 and a client on srv4, ALL 4 links are used, and iperf reports around 330Mbps. systat -ifstat shows all four interfaces are used.

    The Cisco port-channel uses src-mac to balance the load. The HP should use both the source and the destination according to the manual, so it should work as well. Could this mean there is some bug in the HP firmware? Am I doing something wrong?

    Read the article

  • How to pre-load all deployed assemblies for an AppDomain

    - by Andras Zoltan
    Given an App Domain, there are many different locations that Fusion (the .NET assembly loader) will probe for a given assembly. Obviously, we take this functionality for granted and, since the probing appears to be embedded within the .NET runtime (the Assembly._nLoad internal method seems to be the entry point when reflect-loading, and I assume that implicit loading is probably covered by the same underlying algorithm), as developers we don't seem to be able to gain access to those search paths.

    My problem is that I have a component that does a lot of dynamic type resolution, and which needs to be able to ensure that all user-deployed assemblies for a given AppDomain are pre-loaded before it starts its work. Yes, it slows down startup, but the benefits we get from this component totally outweigh this. The basic loading algorithm I've already written is as follows. It deep-scans a set of folders for any .dll (.exes are being excluded at the moment), and uses Assembly.LoadFrom to load the dll if its AssemblyName cannot be found in the set of assemblies already loaded into the AppDomain (this is implemented inefficiently, but it can be optimized later):

        void PreLoad(IEnumerable<string> paths)
        {
            foreach (string p in paths)
            {
                PreLoad(p);
            }
        }

        void PreLoad(string p)
        {
            //all try/catch blocks are elided for brevity
            string[] files = null;
            files = Directory.GetFiles(p, "*.dll", SearchOption.AllDirectories);
            AssemblyName a = null;
            foreach (var s in files)
            {
                a = AssemblyName.GetAssemblyName(s);
                if (!AppDomain.CurrentDomain.GetAssemblies().Any(
                    assembly => AssemblyName.ReferenceMatchesDefinition(
                        assembly.GetName(), a)))
                    Assembly.LoadFrom(s);
            }
        }

    LoadFrom is used because I've found that using Load() can lead to duplicate assemblies being loaded by Fusion if, when it probes for one, it doesn't find it loaded from where it expects to find it. So, with this in place, all I now have to do is to get a list, in precedence order (highest to lowest), of the search paths that Fusion is going to be using when it searches for an assembly. Then I can simply iterate through them. The GAC is irrelevant for this, and I'm not interested in any environment-driven fixed paths that Fusion might use, only those paths that can be gleaned from the AppDomain which contain assemblies expressly deployed for the app.

    My first iteration of this simply used AppDomain.BaseDirectory. This works for services, forms apps and console apps. It doesn't work for an ASP.NET website, however, since there are at least two main locations: the AppDomain.DynamicDirectory (where ASP.NET places its dynamically generated page classes and any assemblies that the Aspx page code references), and then the site's Bin folder, which can be discovered from the AppDomain.SetupInformation.PrivateBinPath property.

    So I now have working code for the most basic types of apps (SQL Server-hosted AppDomains are another story, since the filesystem is virtualised), but I came across an interesting issue a couple of days ago where this code simply doesn't work: the NUnit test runner. This uses both shadow copying (so my algorithm would need to be discovering and loading the assemblies from the shadow-copy drop folder, not from the bin folder) and it sets up the PrivateBinPath as being relative to the base directory. And of course there are loads of other hosting scenarios that I probably haven't considered, but which must be valid, because otherwise Fusion would choke on loading the assemblies.

    I want to stop feeling around and introducing hack upon hack to accommodate these new scenarios as they crop up. What I want is, given an AppDomain and its setup information, the ability to produce the list of folders that I should scan in order to pick up all the DLLs that are going to be loaded, regardless of how the AppDomain is set up. If Fusion can see them all as the same, then so should my code. Of course, I might have to alter the algorithm if .NET changes its internals; that's just a cross I'll have to bear. Equally, I'm happy to consider SQL Server and any other similar environments as edge cases that remain unsupported for now. Any ideas!?
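
    For reference, here is a minimal sketch (not from the original post) of the kind of probe-path enumeration being asked for, built only from documented AppDomain/AppDomainSetup properties (BaseDirectory, the ';'-separated PrivateBinPath, DynamicDirectory and the shadow-copy cache). It illustrates the idea; it does not claim to reproduce Fusion's exact precedence rules, and the NUnit case would still need the original bin folders scanned rather than the cache subfolders.

        using System;
        using System.Collections.Generic;
        using System.IO;

        static class ProbePaths
        {
            // Best-effort list of directories an AppDomain is likely to load
            // user-deployed assemblies from. Not an exact model of Fusion probing.
            public static IEnumerable<string> Guess(AppDomain domain)
            {
                var setup = domain.SetupInformation;
                var dirs = new List<string> { domain.BaseDirectory };

                // PrivateBinPath is a ';'-separated list, usually relative to the base.
                if (!string.IsNullOrEmpty(setup.PrivateBinPath))
                {
                    foreach (var bin in setup.PrivateBinPath.Split(';'))
                    {
                        if (bin.Length == 0) continue;
                        dirs.Add(Path.IsPathRooted(bin)
                            ? bin
                            : Path.Combine(domain.BaseDirectory, bin));
                    }
                }

                // ASP.NET-style dynamically generated assemblies.
                if (!string.IsNullOrEmpty(setup.DynamicDirectory))
                    dirs.Add(setup.DynamicDirectory);

                // Shadow copying (e.g. NUnit): copies are loaded from the cache,
                // but the originals still live in the directories above.
                if (string.Equals(setup.ShadowCopyFiles, "true", StringComparison.OrdinalIgnoreCase)
                    && !string.IsNullOrEmpty(setup.CachePath))
                    dirs.Add(setup.CachePath);

                foreach (var d in dirs)
                    if (Directory.Exists(d))
                        yield return d;
            }
        }

    Treat this strictly as a starting point; it is exactly the sort of per-host special-casing the question is trying to avoid.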

    Read the article

  • Vertical text inside table headers using a JavaScript-based SVG library

    - by Oleg
    I use jqGrid with many columns containing Boolean information, which are displayed as checkboxes inside the table (see http://www.ok-soft-gmbh.com/VerticalHeaders/TestFixedO.htm as an example). To display the information more compactly I use vertical column headers. This works very well in jqGrid in all browsers (see my discussion with Tony Tomov in the jqGrid forum http://www.trirand.com/blog/?page_id=393/feature-request/headers-with-vertical-orientation/), but in IE the vertical text does not look nice enough, and users have asked me why the text is displayed so strangely. So I am thinking about using a JavaScript-based SVG library like SVG Web ( http://code.google.com/p/svgweb/ ) or Raphaël ( http://raphaeljs.com/ ). SVG is very powerful, and finding a good example for this is not easy. I only need to display vertical text (rotated -90 degrees, reading from bottom to top), preferably without working in absolute-position mode.

    So, once more, my question: I need a way to display vertical text (-90 degree rotation) inside a <td> element of a table header. I want to use a JavaScript-based SVG library like SVG Web or Raphaël, and the solution must work in IE6. Does anybody have a good reference to an example which could help me do this? If somebody posts a whole solution to the problem I would be happy.

    To be exact, here is my current solution. I define:

        .rotate {
            -webkit-transform: rotate(-90deg);  /* Safari, Chrome */
            -moz-transform: rotate(-90deg);     /* Firefox */
            -o-transform: rotate(-90deg);       /* Opera starting with 10.50 */
            /* Internet Explorer: */
            filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=3);
        }

    and define the RotateCheckboxColumnHeaders function:

        var RotateCheckboxColumnHeaders = function (grid, headerHeight) {
            // we use grid as context (if one has more than one table on the page)
            var trHead = $("thead:first tr", grid.hdiv);
            var cm = grid.getGridParam("colModel");
            $("thead:first tr th").height(headerHeight);
            headerHeight = $("thead:first tr th").height();

            for (var iCol = 0; iCol < cm.length; iCol++) {
                var cmi = cm[iCol];
                if (cmi.formatter === 'checkbox') {
                    // we must set the width of the column header div BEFORE adding the class "rotate"
                    // to prevent text cutting based on the current column width
                    var headDiv = $("th:eq(" + iCol + ") div", trHead);
                    headDiv.width(headerHeight).addClass("rotate");
                    if (!$.browser.msie) {
                        if ($.browser.mozilla) {
                            headDiv.css("left", (cmi.width - headerHeight) / 2 + 3).css("bottom", 7);
                        } else {
                            headDiv.css("left", (cmi.width - headerHeight) / 2);
                        }
                    } else {
                        var ieVer = jQuery.browser.version.substr(0, 3);
                        // Internet Explorer
                        if (ieVer !== "6.0" && ieVer !== "7.0") {
                            headDiv.css("left", cmi.width / 2 - 4).css("bottom", headerHeight / 2);
                            $("span", headDiv).css("left", 0);
                        } else {
                            headDiv.css("left", 3);
                        }
                    }
                }
            }
        };

    Then I include a call like RotateCheckboxColumnHeaders(grid, 110); after creating the jqGrid.

    Read the article

  • Access violation when accessing a COM object from .Net

    - by Groo
    Dear Sirs, I am sorry if the post is too long, but I would be happy if someone would at least read the section titles and point me in the right direction. I have been having this problem for a couple of days, but was unable to find the answer on the net. These are the things I have found out so far.

    1. "Access violation" exception crashes my managed application

    My C# WinForms app sometimes closes with an "Access violation" exception ("Attempted to read or write protected memory"), right at the moment of selecting a TabPage in a Windows Forms TabControl. From the stack trace (try/catch around Application.Run) I can see that the exception happens at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg), called inside UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData).

        -- Message:
        Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

        -- Stack trace:
        at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
        at System.Windows.Forms.Application.ComponentManager
            .System.Windows.Forms.UnsafeNativeMethods
            .IMsoComponentManager.FPushMessageLoop
            (Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
        at System.Windows.Forms.Application.ThreadContext
            .RunMessageLoopInner(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.ThreadContext
            .RunMessageLoop(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.Run(ApplicationContext context)
        at MyApp.Program.Main()

    2. The faulting module seems to be a COM object (ChartFX Client Server 6.2)

    Using WinDbg (with SoS loaded), I caught it on the unmanaged side, inside ChartFX.ClientServer.Core.dll (that's a COM charting component we are using):

        (ca84.c98c): Access violation - code c0000005 (first chance)
        First chance exceptions are reported before any exception handling.
        This exception may be expected and handled.
        eax=00000000 ebx=06e67c38 ecx=06e67c38 edx=000018c6 esi=06e7df30 edi=317a9e80
        eip=31666110 esp=0015e040 ebp=0015e08c iopl=0   nv up ei pl zr na pe nc
        cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000   efl=00010246
        ChartFX_ClientServer_Core!Ordinal5507+0x97b7:
        31666110 8a404d  mov al,byte ptr [eax+4Dh]  ds:0023:0000004d=??

    [edit:] I also wasn't able to get the unmanaged stack details from WinDbg (it said "Stack unwind info not available"):

        0:000> kP
        ChildEBP RetAddr
        WARNING: Stack unwind information not available. Following frames may be wrong.
        0015e08c 3166288b ChartFX_ClientServer_Core!Ordinal5507+0x97b7
        0015e394 3165a921 ChartFX_ClientServer_Core!Ordinal5507+0x5f32
        0015e480 31678685 ChartFX_ClientServer_Core!Ordinal5496+0x26a
        0015e568 3167bef4 ChartFX_ClientServer_Core!Ordinal5492+0x975
        0015e668 316a356b ChartFX_ClientServer_Core!Ordinal5492+0x41e4
        0015e77c 31709496 ChartFX_ClientServer_Core!Ordinal443+0x5745
        0015e7d0 31707f70 ChartFX_ClientServer_Core!Ordinal2584+0x3cdc
        0015e7f8 3170817d ChartFX_ClientServer_Core!Ordinal2584+0x27b6
        0015e81c 3162fd76 ChartFX_ClientServer_Core!Ordinal2584+0x29c3
        0015e86c 7719f8d2 ChartFX_ClientServer_Core!Ordinal899+0x6b6
        0015e898 7719f794 USER32!GetMessageW+0x93
        0015e910 771a06f6 USER32!GetWindowLongW+0x115
        0015e940 771a069c USER32!CallWindowProcW+0x75
        0015e960 747fcef4 USER32!CallWindowProcW+0x1b
        0015e97c 747fd073 comctl32!Ordinal377+0x5c
        0015e9e0 747fd027 comctl32!DefSubclassProc+0x92
        0015ea04 747fd4e6 comctl32!DefSubclassProc+0x46
        0015ea20 747fd073 comctl32!DefSubclassProc+0x505
        0015ea84 747fd118 comctl32!DefSubclassProc+0x92
        0015eae4 7719f8d2 comctl32!DefSubclassProc+0x137

    3. The bug is not easy to reproduce (although it can be provoked, usually in less than 5 minutes)

    I have several Chart instances in several TabPages, and this usually happens while I am switching the tabs. I still don't know how to reproduce it, other than switching those tabs for several minutes before it happens, so I cannot use our source control to reliably find the build which didn't have this problem. I am accessing the charts through the managed AxChart wrapper class (derived from AxHost), which was created by the VS designer automatically.

    4. What should be my next step?

    If someone could point me to the next step I should take to find the actual cause, I would be very grateful. Experimenting (removing and returning code) does not do much good, because I don't know how to reproduce the crash, so it would take large amounts of time on each iteration just to convince myself that the bug is still there. I have found that people often suggest something like "switching compiler optimizations", but since the exception is not thrown deterministically, I don't want to simply rearrange some bytes and hope that it never returns. Thanks a lot in advance! Best regards, Groo
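
    As an aside (not from the original post), here is a minimal sketch of the kind of "try/catch around Application.Run" plus unhandled-exception logging mentioned in section 1, so that crash details get captured before the process dies. The class name and log path are illustrative only.

        using System;
        using System.IO;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Route Windows Forms UI-thread exceptions to our handler instead of the default dialog.
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
                Application.ThreadException += (s, e) => Log(e.Exception);

                // Exceptions on non-UI threads.
                AppDomain.CurrentDomain.UnhandledException += (s, e) => Log(e.ExceptionObject as Exception);

                try
                {
                    Application.Run(new Form()); // placeholder main form
                }
                catch (Exception ex)
                {
                    Log(ex);
                    throw;
                }
            }

            static void Log(Exception ex)
            {
                // Append the full exception chain; a real app would use a proper logger.
                File.AppendAllText("crash.log",
                    DateTime.Now + Environment.NewLine + ex + Environment.NewLine);
            }
        }

    Note that a native access violation corrupting the heap may still take the process down without reaching any managed handler; this only improves the odds of getting a usable managed trace.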

    Read the article

  • Need help with BOOST_FOREACH/compiler bug

    - by Jacek Lawrynowicz
    I know that boost or the compiler should be the last to blame, but I can't see another explanation here. I'm using MSVC 2008 SP1 and boost 1.43. In the following code snippet, execution never leaves the third BOOST_FOREACH loop:

        typedef Graph<unsigned, unsigned>::VertexIterator Iter;
        Graph<unsigned, unsigned> g;
        g.createVertex(0x66);

        // works fine
        Iter it = g.getVertices().first, end = g.getVertices().second;
        for(; it != end; ++it)
            ;

        // fine
        std::pair<Iter, Iter> p = g.getVertices();
        BOOST_FOREACH(unsigned handle, p)
            ;

        // fine
        unsigned vertex_count = 0;
        BOOST_FOREACH(unsigned handle, g.getVertices())
            vertex_count++;   // oops, infinite loop

        vertex_count = 0;
        BOOST_FOREACH(unsigned handle, g.getVertices())
            vertex_count++;

        vertex_count = 0;
        BOOST_FOREACH(unsigned handle, g.getVertices())
            vertex_count++;

        // ... last block repeated 7 times

    Iterator code:

        class Iterator
            : public boost::iterator_facade<Iterator, unsigned const, boost::bidirectional_traversal_tag>
        {
        public:
            Iterator() : list(NULL), handle(INVALID_ELEMENT_HANDLE)
            {}

            explicit Iterator(const VectorElementsList &list, unsigned handle = INVALID_ELEMENT_HANDLE)
                : list(&list), handle(handle)
            {}

            friend std::ostream& operator<<(std::ostream &s, const Iterator &it)
            {
                s << "[list: " << it.list << ", handle: " << it.handle << "]";
                return s;
            }

        private:
            friend class boost::iterator_core_access;

            void increment() { handle = list->getNext(handle); }
            void decrement() { handle = list->getPrev(handle); }

            unsigned const& dereference() const { return handle; }

            bool equal(Iterator const& other) const
            {
                return handle == other.handle && list == other.list;
            }

            const VectorElementsList<T> *list;
            unsigned handle;
        };

    Some ASM fun:

        vertex_count = 0;
        BOOST_FOREACH(unsigned handle, g.getVertices())
        // initialization
        013E1369 mov  edi,dword ptr [___defaultmatherr+8 (13E5034h)]  // end iterator handle: 0xFFFFFFFF
        013E136F mov  ebp,dword ptr [esp+0ACh]                        // begin iterator handle: 0x0
        013E1376 lea  esi,[esp+0A8h]                                  // begin iterator list pointer
        013E137D mov  ebx,esi
        013E137F nop
        // forever loop begin
        013E1380 cmp  ebp,edi
        013E1382 jne  main+238h (13E1388h)
        013E1384 cmp  ebx,esi
        013E1386 je   main+244h (13E1394h)
        013E1388 lea  eax,[esp+18h]
        013E138C push eax
        // here iterator is incremented in ram
        013E138D call boost::iterator_facade<detail::VectorElementsList<Graph<unsigned int,unsigned int>::VertexWrapper>::Iterator,unsigned int const ,boost::bidirectional_traversal_tag,unsigned int const &,int>::operator++ (13E18E0h)
        013E1392 jmp  main+230h (13E1380h)
            vertex_count++;
        // forever loop end

    It's easy to see that the iterator handle is cached in EBP and never gets incremented, despite the call to the iterator's operator++() function. I've replaced the Iterator implementation with one deriving from std::iterator and the issue persisted, so this is not iterator_facade's fault. This problem exists only on MSVC 2008 SP1 x86 and amd64 release builds. Debug builds on MSVC 2008, and debug/release builds on MSVC 2010 and gcc 4.4 (linux), work fine. Furthermore, the BOOST_FOREACH block must be repeated exactly 10 times; if it's repeated 9 times, all is OK. I guess that due to BOOST_FOREACH's use of template trickery (const auto_any), the compiler assumes that the iterator handle is constant and never reads its real value again. I would be very happy to hear that my code is wrong, correct it and move on with BOOST_FOREACH, which I'm very fond of (as opposed to BOOST_FOREVER :). May be related to: http://stackoverflow.com/questions/1275852/why-does-boost-foreach-not-work-sometimes-with-c-strings

    Read the article

  • Join 2 child tables with a parent table without duplicates

    - by user1847866
    Problem: I have 3 tables: People, Phones and Emails. Each person has a UNIQUE ID, and each person can have multiple numbers or multiple emails. Simplified, it looks like this:

        +---------+--------+
        | ID      | Name   |
        +---------+--------+
        | 5000003 | Amy    |
        | 5000004 | George |
        | 5000005 | John   |
        | 5000008 | Steven |
        | 8000009 | Ashley |
        +---------+--------+

        +---------+---------+
        | ID      | Number  |
        +---------+---------+
        | 5000005 | 5551234 |
        | 5000005 | 5154324 |
        | 5000008 | 2487312 |
        | 8000009 | 7134584 |
        | 5000008 | 8451384 |
        +---------+---------+

        +---------+------------------------------+
        | ID      | Email                        |
        +---------+------------------------------+
        | 5000005 | [email protected]            |
        | 5000005 | [email protected]            |
        | 5000008 | [email protected]            |
        | 5000008 | [email protected]            |
        | 5000008 | [email protected]            |
        | 8000009 | [email protected]            |
        | 5000004 | [email protected]            |
        +---------+------------------------------+

    I am trying to join them together without duplicates. It works great when I try to join only Emails with People, or only Phones with People.

        SELECT People.Name, People.ID, Phones.Number
        FROM People
        LEFT OUTER JOIN Phones ON People.ID=Phones.ID
        ORDER BY Name, ID, Number;

        +----------+---------+----------+
        | Name     | ID      | Number   |
        +----------+---------+----------+
        | Steven   | 5000008 | 8451384  |
        | Steven   | 5000008 | 24887312 |
        | John     | 5000005 | 5551234  |
        | John     | 5000005 | 5154324  |
        | George   | 5000004 | NULL     |
        | Ashley   | 8000009 | 7134584  |
        | Amy      | 5000003 | NULL     |
        +----------+---------+----------+

        SELECT People.Name, People.ID, Emails.Email
        FROM People
        LEFT OUTER JOIN Emails ON People.ID=Emails.ID
        ORDER BY Name, ID, Email;

        +----------+---------+------------------------------+
        | Name     | ID      | Email                        |
        +----------+---------+------------------------------+
        | Steven   | 5000008 | [email protected]            |
        | Steven   | 5000008 | [email protected]            |
        | Steven   | 5000008 | [email protected]            |
        | John     | 5000005 | [email protected]            |
        | John     | 5000005 | [email protected]            |
        | George   | 5000004 | [email protected]            |
        | Ashley   | 8000009 | [email protected]            |
        | Amy      | 5000003 | NULL                         |
        +----------+---------+------------------------------+

    However, when I try to join Emails and Phones on People, I get this:

        SELECT People.Name, People.ID, Phones.Number, Emails.Email
        FROM People
        LEFT OUTER JOIN Phones ON People.ID = Phones.ID
        LEFT OUTER JOIN Emails ON People.ID = Emails.ID
        ORDER BY Name, ID, Number, Email;

        +----------+---------+----------+------------------------------+
        | Name     | ID      | Number   | Email                        |
        +----------+---------+----------+------------------------------+
        | Steven   | 5000008 | 8451384  | [email protected]            |
        | Steven   | 5000008 | 8451384  | [email protected]            |
        | Steven   | 5000008 | 8451384  | [email protected]            |
        | Steven   | 5000008 | 24887312 | [email protected]            |
        | Steven   | 5000008 | 24887312 | [email protected]            |
        | Steven   | 5000008 | 24887312 | [email protected]            |
        | John     | 5000005 | 5551234  | [email protected]            |
        | John     | 5000005 | 5551234  | [email protected]            |
        | John     | 5000005 | 5154324  | [email protected]            |
        | John     | 5000005 | 5154324  | [email protected]            |
        | George   | 5000004 | NULL     | [email protected]            |
        | Ashley   | 8000009 | 7134584  | [email protected]            |
        | Amy      | 5000003 | NULL     | NULL                         |
        +----------+---------+----------+------------------------------+

    What happens is: if a person has 2 numbers, all his emails are shown twice (and they cannot be sorted, which means they cannot be removed by @last).

    What I want: bottom line, playing with @last, I want to end up with something like the table below, but @last won't work if I don't arrange the ORDER columns in the right way, and that seems like a big problem: ordering the email column. As seen in the example above, Steven has 2 phone numbers and 3 emails; the join of Emails with Numbers happens for each email, producing duplicated values that cannot be sorted (SORT BY does not work on them).

        **THIS IS WHAT I WANT**
        +----------+---------+----------+------------------------------+
        | Name     | ID      | Number   | Email                        |
        +----------+---------+----------+------------------------------+
        | Steven   | 5000008 | 8451384  | [email protected]            |
        |          |         | 24887312 | [email protected]            |
        |          |         |          | [email protected]            |
        | John     | 5000005 | 5551234  | [email protected]            |
        |          |         | 5154324  | [email protected]            |
        | George   | 5000004 | NULL     | [email protected]            |
        | Ashley   | 8000009 | 7134584  | [email protected]            |
        | Amy      | 5000003 | NULL     | NULL                         |
        +----------+---------+----------+------------------------------+

    Now, I'm told that it's best to keep emails and numbers in separate tables because one person can have many emails. So if it's such a common thing to do, why isn't there a simple solution? I'd be happy with a PHP solution as well.

    What I know how to do by now satisfies the requirement but is not as pretty: if I do it with GROUP_CONCAT I get a satisfactory result, but it doesn't look as nice, and I can't put an "Email type = work" next to each address.

        SELECT People.Ime, GROUP_CONCAT(DISTINCT Phones.Number), GROUP_CONCAT(DISTINCT Emails.Email)
        FROM People
        LEFT OUTER JOIN Phones ON People.ID=Phones.ID
        LEFT OUTER JOIN Emails ON People.ID=Emails.ID
        GROUP BY Name;

        +----------+--------------------------------------+--------------------------------------------------------+
        | Name     | GROUP_CONCAT(DISTINCT Phones.Number) | GROUP_CONCAT(DISTINCT Emails.Email)                    |
        +----------+--------------------------------------+--------------------------------------------------------+
        | Steven   | 8451384,24887312                     | [email protected],[email protected],[email protected] |
        | John     | 5551234,5154324                      | [email protected],[email protected]                   |
        | George   | NULL                                 | [email protected]                                      |
        | Ashley   | 7134584                              | [email protected]                                      |
        | Amy      | NULL                                 | NULL                                                   |
        +----------+--------------------------------------+--------------------------------------------------------+

    Read the article

  • Overflow - what am I doing wrong?

    - by ClarkeyBoy
    Hi, I have been working on trying to get a page to display a title at the top of the content pane, and then a scrollable list of products below it, so that the title of the product range is displayed at all times. I am sure this is a very simple thing to do, but I cannot figure it out.

    Currently the actual page (not the test page for which the code is given below) works OK in the sense that I set the heading div to 5% of the height of .content-container and then set the scrollable div to 95% with top: 5%, both with position: absolute applied. However, I would like to place some links in the heading div to different pages (1, 2, 3 etc.), which I would like to centre vertically if they are shorter than the heading, and expand the heading div to match the height of the heading or the links, whichever is smallest. Furthermore, I would like the div below the heading to shrink so that it doesn't go below the bottom of the content div as the heading div gets taller. The point of this is that it is for a client who may, or may not, be happy with the heading sizes and so on; the heading div height could easily change. Specifying heights so precisely means that changing the h1 height could mean 5 changes to the CSS file, something I want to avoid.

    The content pane currently has its height fixed to 80% of the page, with the header and footer being 10% each on top of that, so there is no scrollbar at the side of the page and the header/footer are always showing. This is something I would like to keep.

    In the code below, .content-container is the main content pane; this is contained in another div which is centred using the margin at 50% of the page width. .test-div is the div which contains the heading. .test-div-2 is an attempt to place a div below .test-div, in the hope that I can force .test-div-3 to extend to 100% of its height but no further, and to display a scrollbar if the content exceeds the height. So far I have the following, but it doesn't do exactly what I would like it to:

        <div class="content-container">
            <div class="test-div">
                <h1 style="text-align: center;">Dogs</h1>
            </div>
            <div class="test-div-2">
                <div class="test-div-3">
                    //Content here
                </div>
            </div>
        </div>

        .content-container {
            position: absolute;
            width: 100%;
            height: 100%;
            left: 0;
            right: 0;
            bottom: 0;
            overflow: auto;
        }
        .test-div {
            position: relative;
            padding: 0;
            margin: 0;
        }
        .test-div-2 {
            position: relative;
            background-color: #CCCCCC;
        }
        .test-div-3 {
            max-height: 100%;
            background-color: #999999;
        }

    Any help with this would be greatly appreciated. I would like to achieve this without the use of JavaScript/jQuery if possible; pure HTML/CSS solutions only please! Regards, Richard

    Read the article

  • Dig returns "status: REFUSED" for external queries?

    - by Mikey
    I can't seem to work out why my DNS isn't working properly. If I run dig from the nameserver it functions correctly:

        # dig ungl.org

        ; <<>> DiG 9.5.1-P2.1 <<>> ungl.org
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24585
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

        ;; QUESTION SECTION:
        ;ungl.org.                     IN      A

        ;; ANSWER SECTION:
        ungl.org.              38400   IN      A       188.165.34.72

        ;; AUTHORITY SECTION:
        ungl.org.              38400   IN      NS      ns.kimsufi.com.
        ungl.org.              38400   IN      NS      r29901.ovh.net.

        ;; ADDITIONAL SECTION:
        ns.kimsufi.com.        85529   IN      A       213.186.33.199

        ;; Query time: 1 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Sat Mar 13 01:04:06 2010
        ;; MSG SIZE  rcvd: 114

    but when I run it from another server in the same datacenter I receive:

        # dig @87.98.167.208 ungl.org

        ; <<>> DiG 9.5.1-P2.1 <<>> @87.98.167.208 ungl.org
        ; (1 server found)
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 18787
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;ungl.org.                     IN      A

        ;; Query time: 1 msec
        ;; SERVER: 87.98.167.208#53(87.98.167.208)
        ;; WHEN: Sat Mar 13 01:01:35 2010
        ;; MSG SIZE  rcvd: 26

    My zone file for this domain is:

        $ttl 38400
        ungl.org.      IN      SOA     r29901.ovh.net. mikey.aol.com. ( 201003121 10800 3600 604800 38400 )
        ungl.org.      IN      NS      r29901.ovh.net.
        ungl.org.      IN      NS      ns.kimsufi.com.
        ungl.org.      IN      A       188.165.34.72
        localhost.     IN      A       127.0.0.1
        www            IN      A       188.165.34.72

    and the named.conf.options is default:

        options {
                directory "/var/cache/bind";

                // If there is a firewall between you and nameservers you want
                // to talk to, you may need to fix the firewall to allow multiple
                // ports to talk.  See http://www.kb.cert.org/vuls/id/800113

                // If your ISP provided one or more IP addresses for stable
                // nameservers, you probably want to use them as forwarders.
                // Uncomment the following block, and insert the addresses replacing
                // the all-0's placeholder.

                // forwarders {
                //      0.0.0.0;
                // };

                auth-nxdomain no;    # conform to RFC1035
                listen-on-v6 { ::1; };
                listen-on { 127.0.0.1; };
                allow-recursion { 127.0.0.1; };
        };

    named.conf.local:

        //
        // Do any local configuration here
        //

        // Consider adding the 1918 zones here, if they are not used in your
        // organization
        // include "/etc/bind/zones.rfc1918";

        zone "eugl.eu" {
                type master;
                file "/etc/bind/eugl.eu";
                notify no;
        };

        zone "ungl.org" {
                type master;
                file "/etc/bind/ungl.org";
                notify no;
        };

    The server is running Ubuntu 9.10 and BIND 9. If anyone can shed some light on this for me it'd make me very happy! thanks

    Read the article

  • Feedback on TOC Generation Code

    - by vikramjb
    Hi all, I wrote some code to generate a ToC, i.e. hierarchical numbered bullets like one sees in a Word document, such as the following set:

        1
        2
        3
        3.1
        3.1.1
        4
        5

    and so on and so forth. The code is working, but I am not happy with the code I have written; I would appreciate it if you guys could shed some light on how I can improve my C# code. You can download the project from Rapidshare. Please do let me know if you need more info. I am making this a community wiki.

        private void frmMain_Load(object sender, EventArgs e)
        {
            this.EnableSubTaskButton();
        }

        private void btnNewTask_Click(object sender, EventArgs e)
        {
            this.AddNodes(true);
        }

        private void AddNodes(bool IsParent)
        {
            TreeNode parentNode = tvToC.SelectedNode;
            string curNumber = "0";
            if (parentNode != null)
            {
                curNumber = parentNode.Text.ToString();
            }
            curNumber = this.getTOCReference(curNumber, IsParent);
            TreeNode childNode = new TreeNode();
            childNode.Text = curNumber;
            this.tvToC.ExpandAll();
            this.EnableSubTaskButton();
            this.tvToC.SelectedNode = childNode;
            if (IsParent)
            {
                if (parentNode == null)
                {
                    tvToC.Nodes.Add(childNode);
                }
                else
                {
                    if (parentNode.Parent != null)
                    {
                        parentNode.Parent.Nodes.Add(childNode);
                    }
                    else
                    {
                        tvToC.Nodes.Add(childNode);
                    }
                }
            }
            else
            {
                parentNode.Nodes.Add(childNode);
            }
        }

        private string getTOCReference(string curNumber, bool IsParent)
        {
            int lastnum = 0;
            int startnum = 0;
            string firsthalf = null;
            int dotpos = curNumber.IndexOf('.');
            if (dotpos > 0)
            {
                if (IsParent)
                {
                    lastnum = Convert.ToInt32(curNumber.Substring(curNumber.LastIndexOf('.') + 1));
                    lastnum++;
                    firsthalf = curNumber.Substring(0, curNumber.LastIndexOf('.'));
                    curNumber = firsthalf + "." + Convert.ToInt32(lastnum.ToString());
                }
                else
                {
                    lastnum++;
                    curNumber = curNumber + "." + Convert.ToInt32(lastnum.ToString());
                }
            }
            else
            {
                if (IsParent)
                {
                    startnum = Convert.ToInt32(curNumber);
                    startnum++;
                    curNumber = Convert.ToString(startnum);
                }
                else
                {
                    curNumber = curNumber + ".1";
                }
            }
            return curNumber;
        }

        private void btnSubTask_Click(object sender, EventArgs e)
        {
            this.AddNodes(false);
        }

        private void EnableSubTaskButton()
        {
            if (tvToC.Nodes.Count == 0)
            {
                btnSubTask.Enabled = false;
            }
            else
            {
                btnSubTask.Enabled = true;
            }
        }

        private void btnTest1_Click(object sender, EventArgs e)
        {
            for (int i = 0; i < 100; i++)
            {
                this.AddNodes(true);
            }
            for (int i = 0; i < 5; i++)
            {
                this.AddNodes(false);
            }
            for (int i = 0; i < 5; i++)
            {
                this.AddNodes(true);
            }
        }
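
    For comparison, here is a minimal sketch (not from the original post) of the same two numbering rules expressed as plain string operations; getTOCReference above handles both cases with IndexOf/Substring arithmetic, which is most of what makes it hard to read.

        using System;

        static class TocNumbering
        {
            // "3.1" -> next sibling "3.2"; "" -> first top-level entry "1"
            public static string NextSibling(string current)
            {
                if (string.IsNullOrEmpty(current)) return "1";
                var parts = current.Split('.');
                parts[parts.Length - 1] = (int.Parse(parts[parts.Length - 1]) + 1).ToString();
                return string.Join(".", parts);
            }

            // "3.1" -> first child "3.1.1"
            public static string FirstChild(string current)
            {
                return string.IsNullOrEmpty(current) ? "1" : current + ".1";
            }
        }

    For example, TocNumbering.NextSibling("3.1.4") returns "3.1.5" and TocNumbering.FirstChild("3.1.4") returns "3.1.4.1", which covers the two branches getTOCReference distinguishes with the IsParent flag.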

    Read the article

  • postfix 5.7.1 Relay access denied when sending mail with cron

    - by zensys
    Reluctant to ask because there is so much here about 'postfix relay access denied' but I cannot find my case: I use php (Zend Framework) to send emails outside my network using the Google mail server because I could not send mail outside my server (user: web). However when I sent out an email via cron (user: root, I believe), still using ZF, using the same mail config/credentials, I get the message: '5.7.1 Relay access denied' I guess I need to know one of two things: 1. How can I use the google smtp server from cron 2. What do I need to change in my config to send mail using my own server instead of google Though the answer to 2. is the more structural solution I assume, I am quite happy with an answer to 1. as well because I think Google is better at server maintaince (security/spam) than I am. Below my ZF application.ini mail section, main.cf and master.cf: application.ini: resources.mail.transport.type = smtp resources.mail.transport.auth = login resources.mail.transport.host = "smtp.gmail.com" resources.mail.transport.ssl = tls resources.mail.transport.port = 587 resources.mail.transport.username = [email protected] resources.mail.transport.password = xxxxxxx resources.mail.defaultFrom.email = [email protected] resources.mail.defaultFrom.name = "my company" main.cf: # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. 
myhostname = mail.second-start.nl mydomain = second-start.nl alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all html_directory = /usr/share/doc/postfix/html message_size_limit = 30720000 virtual_alias_domains = virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf virtual_mailbox_base = /home/vmail virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 smtpd_sasl_auth_enable = yes broken_sasl_auth_clients = yes smtpd_sasl_authenticated_header = yes # see under Spam smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps virtual_transport = dovecot dovecot_destination_recipient_limit = 1 # Spam disable_vrfy_command = yes smtpd_delay_reject = yes smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, check_helo_access hash:/etc/postfix/helo_access, reject_non_fqdn_hostname, reject_invalid_hostname, permit smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, permit_mynetworks, reject_non_fqdn_hostname, reject_rbl_client sbl.spamhaus.org, reject_rbl_client zen.spamhaus.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit smtpd_error_sleep_time = 1s smtpd_soft_error_limit = 10 smtpd_hard_error_limit = 20 master.cf: # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - - - - smtpd #smtp inet n - - - 1 postscreen #smtpd pass - - - - - smtpd #dnsblog unix - - - - 0 dnsblog #tlsproxy unix - - - - 0 tlsproxy #submission inet n - - - - smtpd # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 
0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # # Many of the following services use the Postfix pipe(8) delivery # agent. See the pipe(8) man page for information about ${recipient} # and other message envelope options. # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # Recent Cyrus versions can use the existing "lmtp" master.cf entry. # # Specify in cyrus.conf: # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 # # Specify in main.cf one or more of the following: # mailbox_transport = lmtp:inet:localhost # virtual_transport = lmtp:inet:localhost # # ==================================================================== # # Cyrus 2.1.5 (Amos Gouaux) # Also specify in main.cf: cyrus_destination_recipient_limit=1 # #cyrus unix - n n - - pipe # user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user} # # ==================================================================== # Old example of delivery via Cyrus. # #old-cyrus unix - n n - - pipe # flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}

    Read the article

  • KVM machine does not start ssh, network is started, used to work

    - by lleto
    have been searching an pulling my hear out for the last 6 hours. I have a virtual machine that has been running fine for the last six months. I was happy ssh'ing into it and it was running a database and some small apps. Tonight ssh stopped working, so I decided to reboot the machine. I now have the following situation: virsh list --all states machine as running I can ping the machine and get a reply When I ssh to the machine I see "ssh: connect to host [myserver] port 22: Connection refused" nmap does not show port 22 as open I have tried to: - reboot the machine once more (no luck) - mount the filesystem and check /etc/ssh/sshd.conf (has not changed since working situation) - install virsh console, however this does not seem to work When I mount the fs directly using losetup the strange thing is that file dates seem to be frozen in /var/log/ around the time of the crash. If I look in /var/run/ I can see an sshd.pid, but the time is 6 hours ago (and numerous reboots). My virsh xml looks like this: <domain type='kvm' id='21'> <name>myserver</name> <uuid>09678c8d-a99b-1d18-a7af-88d027cc8f93</uuid> <memory>1048576</memory> <currentMemory>1048576</currentMemory> <vcpu>1</vcpu> <os> <type arch='x86_64' machine='pc-1.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/dev/disk01/myserver'/> <target dev='hda' bus='ide'/> <alias name='ide0-0-0'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <controller type='ide' index='0'> <alias name='ide0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <interface type='bridge'> <mac address='52:54:00:e3:13:86'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/1'/> <target port='0'/> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/1'> <source path='/dev/pts/1'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'> <listen type='address' address='127.0.0.1'/> </graphics> <video> <model type='cirrus' vram='9216' heads='1'/> <alias name='video0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </memballoon> </devices> <seclabel type='dynamic' model='apparmor' relabel='yes'> <label>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</label> <imagelabel>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</imagelabel> </seclabel> </domain> I'm sort of lost as to where I can look to get the machine up and running again. On the same instance of kvm I have another server running which is working fine. Both are Ubuntu 12.04. All help is welcome....

    Read the article

  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can; any ideas on correcting or changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a Trunk/Release system in place. We have a large website that we develop on and we store it all in SVN. Here's what we had in mind:

        We have trunk and a release branch.
        All work gets checked into trunk.
        When a feature is deemed ready for the next release it is merged into the Release branch.
        We only have one release branch and just tag "Latest" when we do a push to live.
        We hope to be able to get all the files changed from Latest to Head to give us a zip that we can upload (any ideas on an easy way to do this via scripting?).

    So we set all this up and were very happy with ourselves. Except it's not working, and here's why. We work on lots of different features/fixes/problems at once and they don't all get checked in nicely feature-complete (but always working at least). Then sometimes you have to wait for clients to sign off. As a result you end up with revisions which are "ready for live" scattered among ones which are "still being worked on" in trunk. That means that the completed revisions are not getting merged in sequentially but out of order. I thought SVN could handle this, clever little thing that it is, but apparently not. Here's an example:

        Pete changes some CSS to make a new button look pretty (Revision 1).
        Dave adds some CSS to the bottom of the same CSS file as Pete's for a new feature (Revision 2).
        Dave's mod gets the nod, so he merges it into Release and commits it with a log message mentioning the revision number and bug tracking id.
        Pete adds more buttons to finish his mod, no CSS changes here though (Revision 3).
        Pete then merges his mods (Revisions 1 and 3) into the Head of Release (which has Dave's merge in it), but this overwrites Dave's CSS additions, which now disappear completely.

    This leads to the site being broken and the Release branch being pretty much useless. So we tried some other ideas, like reverting Release back to "Latest" and then merging in all of Revisions 1, 2 and 3 in order. This worked fine until we had Revision 4, which was not ready for live, and Revision 5, which was. Suddenly we are getting ourselves in knots again with exactly the same problem!

    OK, so take three: revert to Latest, merge in Revision 5, then do an update back to Head. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually, but it's not something I want to do regularly. Ideally I want to script our deployment, but I can't while Release is in such a mess.

    HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different, non-sequential revisions in Release. If it's not possible, that's fine, but then how the heck are we meant to get stuff live easily? We can't branch for every single change; the site takes 30 minutes+ to check out, so it would take too long. Side note: we are using TortoiseSVN, so can we keep command line examples to a minimum in any answers? Latest version of TSVN and SVN version 1.6, so we have the funky merge tracking etc.

    EDIT: An excellent blog post which deals with the dev/release cycle (although using Git, it is still relevant); I thought everyone would like to read it if they found this question interesting: http://nvie.com/git-model

    EDIT 2: I wrote a blog post on how to show which branch you are working on in your website, which others have asked me about (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!)

    Read the article

  • Unicast traffic between hosts on a switch leaving the switch by its uplink. Why?

    - by Rich Lafferty
    I have a weird thing happening on our network at my office which I can't quite get my head around. In particular I can't tell if it's a problem with a switch, or a problem with configuration. We have a Cisco SG300-52 switch (sw01) in the top of a rack in our server room, connected to another SG300-28 that acts as our core switch (core01). Both run layer 2 only, our firewalls do routing between VLANs. They have a dozen or so VLANs between them. Gi1 on sw01 is a trunk port connected to gi1 on core01. (Disclosure: There are other switches in our environment but I'm pretty sure I've isolated the problem down to these two. Happy to provide more info if necessary.) The behaviour I'm seeing is limited to one VLAN, vlan 12 -- or, at least, it's not happening on the other ones I checked (It's hard to guarantee the absence of packets), and it is: sw01 is forwarding, to core01, traffic which is between two hosts which are both plugged into sw01. (I noticed this because the IDS in our firewall gave a false positive on traffic which should not reach the firewall.) We noticed this mostly between our two dhcp/dns servers, net01 (10.12.0.10) and net02 (10.12.0.11). net01 is physical hardware and net02 is on a VMware ESX server. net01 is connected to gi44 on sw01 and net02's ESX server to gi11. [net01]----gi44-[sw01]-gi1----gi1-[core01] [net02]----gi11/ Let's see some interfaces! Remember, vlan 12 is the problem vlan. Of the others I explicitly verified that vlan 27 was not affected. Here's the two hosts' ports: esx01 contains net02. sw01#sh run int gi11 interface gigabitethernet11 description esx01 lldp med disable switchport trunk allowed vlan add 5-7,11-13,100 switchport trunk native vlan 27 ! sw01#sh run int gi44 interface gigabitethernet44 description net01-1 lldp med disable switchport mode access switchport access vlan 12 ! Here's the trunk on sw01. sw01#sh run int gi1 interface gigabitethernet1 description "trunk to core01" lldp med disable switchport trunk allowed vlan add 4-7,11-13,27,100 ! And the other end of the trunk on core01. interface gigabitethernet1 description sw01 macro description switch switchport trunk allowed vlan add 2-7,11-16,27,100 ! I have a monitor port on core01, thus: core01#sh run int gi12 interface gigabitethernet12 description "monitor port" port monitor GigabitEthernet 1 ! And the monitor port on core01 sees unicast traffic going between net01 and net02, both of which are on sw01! I've verified this with a monitor port on sw01 that sees the net01-net02 unicast traffic leaving via gi1 too. sw01 knows that both of those hosts are on ports that are not its trunk port: :) ratchet$ arp -a | grep net net02.2ndsiteinc.com (10.12.0.11) at 00:0C:29:1A:66:15 [ether] on eth0 net01.2ndsiteinc.com (10.12.0.10) at 00:11:43:D8:9F:94 [ether] on eth0 sw01#sh mac addr addr 00:0C:29:1A:66:15 Aging time is 300 sec Vlan Mac Address Port Type -------- --------------------- ---------- ---------- 12 00:0c:29:1a:66:15 gi11 dynamic sw01#sh mac addr addr 00:11:43:D8:9F:94 Aging time is 300 sec Vlan Mac Address Port Type -------- --------------------- ---------- ---------- 12 00:11:43:d8:9f:94 gi44 dynamic I also brought up an unused port on sw01 on vlan 12, but the unicast traffic was (as best as I could tell) not coming out that port. So it doesn't look like sw01 is pushing it out all its ports, just the right ports and also gi1! I've verified that sw01 is not filling up its address-table: sw01#sh mac addr count This may take some time. 
Capacity : 8192 Free : 7983 Used : 208 The full configs for both core01 and sw01 are available: core01, sw01. Finally, versions: sw01#sh ver SW version 1.1.2.0 ( date 12-Nov-2011 time 23:34:26 ) Boot version 1.0.0.4 ( date 08-Apr-2010 time 16:37:57 ) HW version V01 core01#sh ver SW version 1.1.2.0 ( date 12-Nov-2011 time 23:34:26 ) Boot version 1.1.0.6 ( date 11-May-2011 time 18:31:00 ) HW version V01 So my understanding is this: sw01 should take unicast traffic for net01 and send it only out net02's port, and vice versa; none of it should go out sw01's uplink. But core01, receiving traffic on gi1 for a host it knows is on gi1, is right in sending it out all of its ports. (That is: sw01 is misbehaving, but core01 is doing what it should given the circumstances.) My question is: Why is sw01 sending that unicast traffic out its uplink, gi1? (And pre-emptively: yes, I know SG300s leave much to be desired, and yes, we should have spanning-tree enabled, but that's where I'm at right now.)

    Read the article

  • Disable antialiasing for a specific GDI device context

    - by Jacob Stanley
I'm using a third-party library to render an image to a GDI DC and I need to ensure that any text is rendered without any smoothing/antialiasing so that I can convert the image to a predefined palette with indexed colors. The third-party library I'm using for rendering doesn't support this and just renders text as per the current Windows settings for font rendering. They've also said that it's unlikely they'll add the ability to switch anti-aliasing off any time soon. The best workaround I've found so far is to call the third-party library in this way (error handling and prior settings checks omitted for brevity; the P/Invoke behind SetFontSmoothing is sketched at the end of this post): private static void SetFontSmoothing(bool enabled) { int pv = 0; SystemParametersInfo(Spi.SetFontSmoothing, enabled ? 1 : 0, ref pv, Spif.None); } // snip Graphics graphics = Graphics.FromImage(bitmap); IntPtr deviceContext = graphics.GetHdc(); SetFontSmoothing(false); thirdPartyComponent.Render(deviceContext); SetFontSmoothing(true); This obviously has a horrible effect on the operating system: other applications flicker from ClearType enabled to disabled and back every time I render the image. So the question is, does anyone know how I can alter the font rendering settings for a specific DC? Even if I could just make the changes process- or thread-specific instead of affecting the whole operating system, that would be a big step forward! (That would give me the option of farming this rendering out to a separate process; the results are written to disk after rendering anyway.) EDIT: I'd like to add that I don't mind if the solution is more complex than just a few API calls. I'd even be happy with a solution that involved hooking system DLLs if it was only about a day's work. EDIT: Background Information The third-party library renders using a palette of about 70 colors. After the image (which is actually a map tile) is rendered to the DC, I convert each pixel from its 32-bit color back to its palette index and store the result as an 8bpp greyscale image. This is uploaded to the video card as a texture. During rendering, I re-apply the palette (also stored as a texture) with a pixel shader executing on the video card. This allows me to switch and fade between different palettes instantaneously instead of needing to regenerate all the required tiles. It takes between 10 and 60 seconds to generate and upload all the tiles for a typical view of the world. EDIT: Renamed GraphicsDevice to Graphics The class GraphicsDevice in the previous version of this question is actually System.Drawing.Graphics. I had renamed it (using GraphicsDevice = ...) because the code in question is in the namespace MyCompany.Graphics and the compiler wasn't able to resolve it properly. EDIT: Success! I even managed to port the PatchIat function below to C# with the help of Marshal.GetFunctionPointerForDelegate. The .NET interop team really did a fantastic job! 
I'm now using the following syntax, where Patch is an extension method on System.Diagnostics.ProcessModule: module.Patch( "Gdi32.dll", "CreateFontIndirectA", (CreateFontIndirectA original) => font => { font->lfQuality = NONANTIALIASED_QUALITY; return original(font); }); private unsafe delegate IntPtr CreateFontIndirectA(LOGFONTA* lplf); private const int NONANTIALIASED_QUALITY = 3; [StructLayout(LayoutKind.Sequential)] private struct LOGFONTA { public int lfHeight; public int lfWidth; public int lfEscapement; public int lfOrientation; public int lfWeight; public byte lfItalic; public byte lfUnderline; public byte lfStrikeOut; public byte lfCharSet; public byte lfOutPrecision; public byte lfClipPrecision; public byte lfQuality; public byte lfPitchAndFamily; public unsafe fixed sbyte lfFaceName [32]; }
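For reference, the SetFontSmoothing helper further up wraps a SystemParametersInfo P/Invoke that isn't shown. A minimal sketch of one such declaration follows; the constant SPI_SETFONTSMOOTHING = 0x004B is the documented Win32 value, while the class name and the flattened Spi/Spif wrappers are illustrative guesses rather than the code actually used in the post. Note it still toggles the system-wide setting, which is exactly the drawback described above.

    using System;
    using System.Runtime.InteropServices;

    internal static class FontSmoothingNative
    {
        // Documented Win32 constant for the font-smoothing setting.
        private const uint SPI_SETFONTSMOOTHING = 0x004B;

        [DllImport("user32.dll", SetLastError = true)]
        private static extern bool SystemParametersInfo(
            uint uiAction, uint uiParam, ref int pvParam, uint fWinIni);

        public static void SetFontSmoothing(bool enabled)
        {
            int pv = 0;
            // The on/off flag goes in uiParam; passing 0 for fWinIni means the
            // change is neither persisted to the user profile nor broadcast.
            SystemParametersInfo(SPI_SETFONTSMOOTHING, enabled ? 1u : 0u, ref pv, 0);
        }
    }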

    Read the article

  • Complex Rails queries across multiple tables, unions, and will_paginate. Solved.

    - by uberllama
    Hi folks. I've been working on a complex "user feed" type of functionality for a while now, and after experimenting with various union plugins, hacking named scopes, and brute force, have arrived at a solution I'm happy with. S.O. has been hugely helpful for me, so I thought I'd post it here in hopes that it might help others and also to get feedback -- it's very possible that I worked on this so long that I walked down an unnecessarily complicated road. For the sake of my example, I'll use users, groups, and articles. A user can follow other users to get a feed of their articles. They can also join groups and get a feed of articles that have been added to those groups. What I needed was a combined, pageable feed of distinct articles from a user's contacts and groups. Let's begin. user.rb has_many :articles has_many :contacts has_many :contacted_users, :through => :contacts has_many :memberships has_many :groups, :through => :memberships contact.rb belongs_to :user belongs_to :contacted_user, :class_name => "User", :foreign_key => "contacted_user_id" article.rb belongs_to :user has_many :submissions has_many :groups, :through => :submissions group.rb has_many :memberships has_many :users, :through => :memberships has_many :submissions has_many :articles, :through => :submissions Those are the basic models that define my relationships. Now, I add two named scopes to the Article model so that I can get separate feeds of both contact articles and group articles should I desire. article.rb # Get all articles by user's contacts named_scope :by_contacts, lambda {|user| {:joins => "inner join contacts on articles.user_id = contacts.contacted_user_id", :conditions => ["articles.published = 1 and contacts.user_id = ?", user.id]} } # Get all articles in user's groups. This does an additional query to get the user's group IDs, then uses those in an IN clause named_scope :by_groups, lambda {|user| {:select => "DISTINCT articles.*", :joins => :submissions, :conditions => {:submissions => {:group_id => user.group_ids}}} } Now I have to create a method that will provide a UNION of these two feeds into one. Since I'm using Rails 2.3.5, I have to use the construct_finder_sql method to render a scope into its base sql. In Rails 3.0, I could use the to_sql method. user.rb def feed "(#{Article.by_groups(self).send(:construct_finder_sql,{})}) UNION (#{Article.by_contacts(self).send(:construct_finder_sql,{})})" end And finally, I can now call this method and paginate it from my controller using will_paginate's paginate_by_sql method. HomeController.rb @articles = Article.paginate_by_sql(current_user.feed, :page => 1) And we're done! It may seem simple now, but it was a lot of work getting there. Feedback is always appreciated. In particular, it would be great to get away from some of the raw sql hacking. Cheers.

    Read the article

  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows: Boot the workstation from the NIC. Workstation sends a DHCP request. DHCP server responds with an IP address and the location of the PXE server. Workstation downloads WinPE image file from PXE server via TFTP. Workstation stores WinPE image file in memory and executes it. Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files. A custom script is launched to guide you through the process of formatting and partitioning the hard drive(s) (using DISKPART and FORMAT). Another custom script asks for details such as the hostname to assign to the workstation. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP and XML for Win7). The Windows setup EXE is launched, passing the unattended answer file to it as a parameter. The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first booting up to install software packages. This process works very well for our workstations and I would now like to use it for building our servers too. The vast majority of our servers are HP Proliant DL360 G6, DL380 G5 and DL380 G6. They're running Windows Server 2003 (various editions) or 2008 (various editions). To date, we have always built the HP Proliant servers using the SmartStart CD provided. SmartStart does three useful things for us: Sets up RAID with the HP Array Configuration Utility (ACU). Installs and configures SNMP. Installs various HP Tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP Proliant Integrated Management Log Viewer, etc). Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB. If I abandon the SmartStart CD in favour of my PXE server I would have to do the following: As I won't have access to ACU, I'll configure the RAID (before booting to the PXE server) by pressing F8 (during the boot process) to access Option ROM Configuration for Arrays (ORCA). SNMP and the HP Tools will have to be installed once the Windows installation is complete, using the Proliant Support Pack. Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks. UPDATE 05/01/12 I've been reading through the SmartStart Scripting Toolkit documentation. The scripting toolkit contains command line tools which work within WinPE and can do such things as configure BIOS settings, configure an array and set up iLO. I'm personally not too bothered about configuring BIOS settings as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I'm not too fussed about being able to configure the array from within WinPE, as I'm happy to just press F8 and use Option ROM Configuration for Arrays (ORCA). Although, if it's easy enough to do, I will explore this further, as it saves time if everything can be configured from within WinPE. One of the nice features all the tools possess is that you can pass input files to them. For example: 
Configure one server to your requirements, capture its configuration to a file (using the appropriate tool), you can then use the tool on other servers passing the input file with the captured configuration. Array controller drivers appear to be included with the toolkit along with example of how to incorporate them within a WinPE build. I suppose WinPE won’t be able to see logical volumes (I.E 2x physical disks in a RAID 1 configuration) without the array controller drivers? I mentioned in my post that SmartStart normally installs a bunch of Windows HP tools for you. I’ve had a look today, and if you run the SmartStart CD from within Windows all the tools can be installed. Therefore I can do this after the Windows installation is complete. The SmartStart CD appears to contain a lot Windows drivers. I can customise my Windows 2008 source to incorporate these drivers. However, I understand that incorporating an array controller driver is a little different to most drivers. I believe that you have to provide the driver during the very early stages of the Windows setup. I’m working through the Scripting Toolkit documentation to try and work this out...

    Read the article

  • c - fork() and wait()

    - by Joe
    Hi there, I need to use the fork() and wait() functions to complete an assignment. We are modelling non-deterministic behaviour and need the program to fork() if there is more than one possible transition. In order to try and work out how fork and wait work, I have just made a simple program. I think I understand now how the calls work and would be fine if the program only branched once because the parent process could use the exit status from the single child process to determine whether the child process reached the accept state or not. As you can see from the code that follows though, I want to be able to handle situations where there must be more than one child processes. My problem is that you seem to only be able to set the status using an _exit function once. So, as in my example the exit status that the parent process tests for shows that the first child process issued 0 as it's exit status, but has no information on the second child process. I tried simply not _exit()-ing on a reject, but then that child process would carry on, and in effect there would seem to be two parent processes. Sorry for the waffle, but I would be grateful if someone could tell me how my parent process could obtain the status information on more than one child process, or I would be happy for the parent process to only notice accept status's from the child processes, but in that case I would successfully need to exit from the child processes which have a reject status. My test code is as follows: #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <errno.h> #include <sys/wait.h> int main(void) { pid_t child_pid, wpid, pid; int status = 0; int i; int a[3] = {1, 2, 1}; for(i = 1; i < 3; i++) { printf("i = %d\n", i); pid = getpid(); printf("pid after i = %d\n", pid); if((child_pid = fork()) == 0) { printf("In child process\n"); pid = getpid(); printf("pid in child process is %d\n", pid); /* Is a child process */ if(a[i] < 2) { printf("Should be accept\n"); _exit(1); } else { printf("Should be reject\n"); _exit(0); } } } if(child_pid > 0) { /* Is the parent process */ pid = getpid(); printf("parent_pid = %d\n", pid); wpid = wait(&status); if(wpid != -1) { printf("Child's exit status was %d\n", status); if(status > 0) { printf("Accept\n"); } else { printf("Complete parent process\n"); if(a[0] < 2) { printf("Accept\n"); } else { printf("Reject\n"); } } } } return 0; } Many thanks Joe

    Read the article

  • Set up linux box for hosting a-z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via vpn only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line. The details: CentOS 5.5 x86_64 httpd: Apache/2.2.3 mysql: 5.0.77 (to be upgraded) php: 5.1 (to be upgraded) The requirements: SECURITY!! Secure file transfer Secure client access (SSL Certs and CA) Secure data storage Virtualhosts/multiple subdomains Local email would be nice, but not critical The Steps: Download latest CentOS DVD-iso (torrent worked great for me). Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea. Basic config: Setup users, networking/ip address etc. Yum update/upgrade. Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add IUS repository to our package manager cd /tmp wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm yum list | grep -w \.ius\. # list all the packages in the IUS repository; use this to find PHP/MySQL version and libraries you want to install Remove old version of PHP and install newer version from IUS rpm -qa | grep php # to list all of the installed php packages we want to remove yum shell # open an interactive yum shell remove php-common php-mysql php-cli #remove installed PHP components install php53 php53-mysql php53-cli php53-common #add packages you want transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. [control+d] #exit yum shell php -v PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45) Upgrade MySQL from IUS repository /etc/init.d/mysqld stop rpm -qa | grep mysql # to see installed mysql packages yum shell remove mysql mysql-server #remove installed MySQL components install mysql51 mysql51-server mysql51-devel transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. 
[control+d] #exit yum shell service mysqld start mysql -v Server version: 5.1.42-ius Distributed by The IUS Community Project Upgrade instructions courtesy of IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide Install rssh (restricted shell) to provide scp and sftp access, without allowing ssh login cd /tmp wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm useradd -m -d /home/dev -s /usr/bin/rssh dev passwd dev Edit /etc/rssh.conf to grant access to SFTP to rssh users. vi /etc/rssh.conf Uncomment or add: allowscp allowsftp This allows me to connect to the machine via SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html Set up virtual interfaces ifconfig eth1:1 192.168.1.3 up #start up the virtual interface cd /etc/sysconfig/network-scripts/ cp ifcfg-eth1 ifcfg-eth1:1 #copy default script and match name to our virtual interface vi ifcfg-eth1:1 #modify eth1:1 script #ifcfg-eth1:1 | modify so it looks like this: DEVICE=eth1:1 IPADDR=192.168.1.3 NETMASK=255.255.255.0 NETWORK=192.168.1.0 ONBOOT=yes NAME=eth1:1 Add more Virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots, or the network starts/restarts. service network restart Shutting down interface eth0: [ OK ] Shutting down interface eth1: [ OK ] Shutting down loopback interface: [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: [ OK ] Bringing up interface eth1: [ OK ] ping 192.168.1.3 64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms And this is where I'm at. I will keep editing this as I make progress. Any tips on how to Configure virtual interfaces/ip based virtual hosts for SSL, setting up a CA, or anything else would be appreciated.

    Read the article

  • Setup routing and iptables for new VPN connection to redirect **only** ports 80 and 443

    - by Steve
    I have a new VPN connection (using openvpn) to allow me to route around some ISP restrictions. Whilst it is working fine, it is taking all the traffic over the vpn. This is causing me issues for downloading (my internet connection is a lot faster than the vpn allows), and for remote access. I run an ssh server, and have a daemon running that allows me to schdule downloads via my phone. I have my existing ethernet connection on eth0, and the new VPN connection on tun0. I believe I need to setup the default route to use my existing eth0 connection on the 192.168.0.0/24 network, and set the default gateway to 192.168.0.1 (my knowledge is shaky as I haven't done this for a number of years). If that is correct, then I'm not exactly sure how to do it!. My current routing table is: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface MSS Window irtt 0.0.0.0 10.51.0.169 0.0.0.0 UG 0 0 0 tun0 0 0 0 10.51.0.1 10.51.0.169 255.255.255.255 UGH 0 0 0 tun0 0 0 0 10.51.0.169 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 0 0 0 85.25.147.49 192.168.0.1 255.255.255.255 UGH 0 0 0 eth0 0 0 0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 0 0 0 192.168.0.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 0 0 0 After fixing the routing, I believe I need to use iptables to configure prerouting or masquerading to force everything for destination port 80 or 443 over tun0. Again, I'm not exactly sure how to do this! Everything I've found on the internet is trying to do something far more complicated, and trying to sort the wood from the trees is proving difficult. Any help would be much appreciated. UPDATE So far, from the various sources, I've cobbled together the following: #!/bin/sh DEV1=eth0 IP1=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 192.` GW1=192.168.0.1 TABLE1=internet TABLE2=vpn DEV2=tun0 IP2=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.` GW2=`route -n | grep 'UG[ \t]' | awk '{print $2}'` ip route flush table $TABLE1 ip route flush table $TABLE2 ip route show table main | grep -Ev ^default | while read ROUTE ; do ip route add table $TABLE1 $ROUTE ip route add table $TABLE2 $ROUTE done ip route add table $TABLE1 $GW1 dev $DEV1 src $IP1 ip route add table $TABLE2 $GW2 dev $DEV2 src $IP2 ip route add table $TABLE1 default via $GW1 ip route add table $TABLE2 default via $GW2 echo "1" > /proc/sys/net/ipv4/ip_forward echo "1" > /proc/sys/net/ipv4/ip_dynaddr ip rule add from $IP1 lookup $TABLE1 ip rule add from $IP2 lookup $TABLE2 ip rule add fwmark 1 lookup $TABLE1 ip rule add fwmark 2 lookup $TABLE2 iptables -t nat -A POSTROUTING -o $DEV1 -j SNAT --to-source $IP1 iptables -t nat -A POSTROUTING -o $DEV2 -j SNAT --to-source $IP2 iptables -t nat -A PREROUTING -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark iptables -t nat -A PREROUTING -i $DEV1 -m state --state NEW -j CONNMARK --set-mark 1 iptables -t nat -A PREROUTING -i $DEV2 -m state --state NEW -j CONNMARK --set-mark 2 iptables -t nat -A PREROUTING -m connmark --mark 1 -j MARK --set-mark 1 iptables -t nat -A PREROUTING -m connmark --mark 2 -j MARK --set-mark 2 iptables -t nat -A PREROUTING -m state --state NEW -m connmark ! --mark 0 -j CONNMARK --save-mark iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 80 -j CONNMARK --set-mark 2 iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 443 -j CONNMARK --set-mark 2 route del default route add default gw 192.168.0.1 eth0 Now this seems to be working. 
Except it isn't! Connections to the blocked websites are going through, connections not on ports 80 and 443 are using the non-VPN connection. However port 80 and 443 connections that aren't to the blocked websites are using the non-VPN connection too! As the general goal has been reached, I'm relatively happy, but it would be nice to know why it isn't working exactly right. Any ideas? For reference, I now have 3 routing tables, main, internet, and vpn. The listing of them is as follows... Main: default via 192.168.0.1 dev eth0 10.38.0.1 via 10.38.0.205 dev tun0 10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206 85.removed via 192.168.0.1 dev eth0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1 Internet: default via 192.168.0.1 dev eth0 10.38.0.1 via 10.38.0.205 dev tun0 10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206 85.removed via 192.168.0.1 dev eth0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1 192.168.0.1 dev eth0 scope link src 192.168.0.73 VPN: default via 10.38.0.205 dev tun0 10.38.0.1 via 10.38.0.205 dev tun0 10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206 85.removed via 192.168.0.1 dev eth0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1

    Read the article

  • Rendering a Long Document on iPad

    - by benjismith
I'm implementing a document viewer with highlighting/annotation capabilities for a custom document format on iPad. The documents are kind of long (100 to 200 pages, if printed on paper) and I've had a hard time finding the right approach. Here are the requirements: 1) Basic rich-text styling: control of left/right margins. Control of font name, size, foreground/background color, and line spacing. Bold, italics, underline, etc. 2) Selection and highlighting of arbitrary text regions (not limited to paragraph boundaries, like in Safari/UIWebView). 3) Customization of the Cut/Copy/Paste popup (what is that thing called anyhow? UIActionBar?) This is one of the essential requirements of the app. My first implementation was based on UIWebView. I just rendered the document as HTML with CSS for text styling. But I couldn't get the kind of text selection behavior I wanted (across paragraph boundaries) and the UIActionBar can't be customized from within UIWebView. So I started working on a JavaScript approach, faking the device text-selection behavior using jQuery to trap touch events and dynamically modifying the DOM to change the background color of selected regions of text. I built a fake UIActionBar control as a hidden DIV, positioning it and unhiding it whenever there was an active selection region. Not too shabby. The main problem is that it's SLOOOOOOOW. Scrolling through the document is nice and quick, but dynamically changing the DOM is not very snappy. Plus, I couldn't figure out how to recreate the magnifier loupe, so my fake text-selection GUI doesn't look quite the same as the native implementation. Also, I haven't yet implemented the communication bridge between the JavaScript layer and the Objective-C layer (where the rest of the app lives), but it was shaping up to be a huge hassle. So I've been looking at CoreText, but there are precious few examples on the web. I spent a little time with this simple little demo: http://github.com/jonasschnelli/I7CoreTextExample/ It shows how to use CoreText to draw an NSAttributedString into a UIView. But it has its own problems: it doesn't implement text-selection behavior, and it doesn't present a UIActionBar, so I don't have any idea how to make that happen. And, more importantly, it tries to draw the entire document all at once, with significant performance degradation for long documents. My documents can have thousands of paragraphs, and less than 1% of the document is ever on screen at a time. On the plus side, these documents already contain precise formatting information. I know the exact page-position of every line of text, so I don't need a layout engine. Does anyone know how to implement this sort of view using CoreText? I understand that a full-fledged implementation is overkill for a question like this, but I'm looking for a good CoreText example with a few basic requirements: 1) Precise layout & formatting control (using the formatting metrics and text styles I've already calculated). 2) Arbitrary selection of text. 3) Customization of the UIActionBar. 4) Efficient recycling of resources for off-screen objects. I'd be happy to implement my own recycling when text elements scroll off-screen, but wouldn't that require re-implementing UIScrollView? 
I'm brand-new to iPhone development, and still getting used to Objective-C, but I've been working in other languages (Java, C#, flex/actionscript, etc) for more than ten years, so I feel confident in my ability to get the work done, if only I had a better feel for the iPhone SDK and the common coding patterns for stuff like this. Is it just me, or does the SDK documentation really suck? Anyhow, thanks for your help!

    Read the article

  • What does the `forall` keyword in Haskell/GHC do?

    - by JUST MY correct OPINION
I've been banging my head on this one for (quite literally) years now. I'm beginning to kinda/sorta understand how the forall keyword is used in so-called "existential types" like this: data ShowBox = forall s. Show s => SB s (This despite the confusingly-worded explanations of it in the fragments found all around the web.) This is only a subset, however, of how forall is used and I simply cannot wrap my mind around its use in things like this: runST :: forall a. (forall s. ST s a) -> a Or explaining why these are different: foo :: (forall a. a -> a) -> (Char,Bool) bar :: forall a. ((a -> a) -> (Char, Bool)) Or the whole RankNTypes stuff that breaks my brain when "explained" in a way that makes me want to do that Samuel L. Jackson thing from Pulp Fiction. (Don't follow that link if you're easily offended by strong language.) The problem, really, is that I'm a dullard. I can't fathom the chicken scratchings (some call them "formulae") of the elite mathematicians that created this language, seeing as my university years are over two decades behind me and I never actually had to put what I learnt into use in practice. I also tend to prefer clear, jargon-free English rather than the kinds of language which are normal in academic environments. Most of the explanations I attempt to read on this (the ones I can find through search engines) have these problems: They're incomplete. They explain one part of the use of this keyword (like "existential types") which makes me feel happy until I read code that uses it in a completely different way (like runST, foo and bar above). They're densely packed with assumptions that I've read the latest in whatever branch of discrete math, category theory or abstract algebra is popular this week. (If I never read the words "consult the paper whatever for details of implementation" again, it will be too soon.) They're written in ways that frequently turn even simple concepts into tortuously twisted and fractured grammar and semantics. (I suspect that the last two items are the biggest problem. I wouldn't know, though, since I'm too much of a dullard to comprehend them.) It's been asked why Haskell never really caught on in industry. I suspect, in my own humble, unintelligent way, that my experience in figuring out one stupid little keyword -- a keyword that is increasingly ubiquitous in the libraries being written these days -- is also part of the answer to that question. It's hard for a language to catch on when even its individual keywords cause years-long quests to comprehend. Years-long quests which end in failure. So... On to the actual question. Can anybody completely explain the forall keyword in clear, plain English (or, if it exists somewhere, point to such a clear explanation which I've missed) that doesn't assume I'm a mathematician steeped in the jargon?

    Read the article

  • How do I get jqGrid to work using ASP.NET + JSON on the backend?

    - by briandus
    Hi friends, ok, I'm back. I totally simplified my problem to just three simple fields and I'm still stuck on the same line using the addJSONData method. I've been stuck on this for days and no matter how I rework the ajax call, the json string, blah blah blah...I can NOT get this to work! I can't even get it to work as a function when adding one row of data manually. Can anyone PLEASE post a working sample of jqGrid that works with ASP.NET and JSON? Would you please include 2-3 fields (string, integer and date preferably?) I would be happy to see a working sample of jqGrid and just the manual addition of a JSON object using the addJSONData method. Thanks SO MUCH!! If I ever get this working, I will post a full code sample for all the other posting for help from ASP.NET, JSON users stuck on this as well. Again. THANKS!! tbl.addJSONData(objGridData); //err: tbl.addJSONData is not a function!! Here is what Firebug is showing when I receive this message: • objGridData Object total=1 page=1 records=5 rows=[5] ? Page "1" Records "5" Total "1" Rows [Object ID=1 PartnerID=BCN, Object ID=2 PartnerID=BCN, Object ID=3 PartnerID=BCN, 2 more... 0=Object 1=Object 2=Object 3=Object 4=Object] (index) 0 (prop) ID (value) 1 (prop) PartnerID (value) "BCN" (prop) DateTimeInserted (value) Thu May 29 2008 12:08:45 GMT-0700 (Pacific Daylight Time) * There are three more rows Here is the value of the variable tbl (value) 'Table.scroll' <TABLE cellspacing="0" cellpadding="0" border="0" style="width: 245px;" class="scroll grid_htable"><THEAD><TR><TH class="grid_sort grid_resize" style="width: 55px;"><SPAN> </SPAN><DIV id="jqgh_ID" style="cursor: pointer;">ID <IMG src="http://localhost/DNN5/js/jQuery/jqGrid-3.4.3/themes/sand/images/sort_desc.gif"/></DIV></TH><TH class="grid_resize" style="width: 90px;"><SPAN> </SPAN><DIV id="jqgh_PartnerID" style="cursor: pointer;">PartnerID </DIV></TH><TH class="grid_resize" style="width: 100px;"><SPAN> </SPAN><DIV id="jqgh_DateTimeInserted" style="cursor: pointer;">DateTimeInserted </DIV></TH></TR></THEAD></TABLE> Here is the complete function: $('table.scroll').jqGrid({ datatype: function(postdata) { mtype: "POST", $.ajax({ url: 'EDI.asmx/GetTestJSONString', type: "POST", contentType: "application/json; charset=utf-8", data: "{}", dataType: "text", //not json . 
let me try to parse success: function(msg, st) { if (st == "success") { var gridData; //strip of "d:" notation var result = JSON.parse(msg); for (var property in result) { gridData = result[property]; break; } var objGridData = eval("(" + gridData + ")"); //creates an object with visible data and structure var tbl = jQuery('table.scroll')[0]; alert(objGridData.rows[0].PartnerID); //displays the correct data //tbl.addJSONData(objGridData); //error received: addJSONData not a function //error received: addJSONData not a function (This uses eval as shown in the documentation) //tbl.addJSONData(eval("(" + objGridData + ")")); //the line below evaluates fine, creating an object and visible data and structure //var objGridData = eval("(" + gridData + ")"); //BUT, the same thing will not work here //tbl.addJSONData(eval("(" + gridData + ")")); //FIREBUG SHOWS THIS AS THE VALUE OF gridData: // "{"total":"1","page":"1","records":"5","rows":[{"ID":1,"PartnerID":"BCN","DateTimeInserted":new Date(1214412777787)},{"ID":2,"PartnerID":"BCN","DateTimeInserted":new Date(1212088125000)},{"ID":3,"PartnerID":"BCN","DateTimeInserted":new Date(1212088125547)},{"ID":4,"PartnerID":"EHG","DateTimeInserted":new Date(1235603192033)},{"ID":5,"PartnerID":"EMDEON","DateTimeInserted":new Date(1235603192000)}]}" } } }); }, jsonReader: { root: "rows", //arry containing actual data page: "page", //current page total: "total", //total pages for the query records: "records", //total number of records repeatitems: false, id: "ID" //index of the column with the PK in it }, colNames: [ 'ID', 'PartnerID', 'DateTimeInserted' ], colModel: [ { name: 'ID', index: 'ID', width: 55 }, { name: 'PartnerID', index: 'PartnerID', width: 90 }, { name: 'DateTimeInserted', index: 'DateTimeInserted', width: 100}], rowNum: 10, rowList: [10, 20, 30], imgpath: 'http://localhost/DNN5/js/jQuery/jqGrid-3.4.3/themes/sand/images', pager: jQuery('#pager'), sortname: 'ID', viewrecords: true, sortorder: "desc", caption: "TEST Example")};
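Since the question is really asking for any working ASP.NET-side sample, here is a minimal sketch of the kind of script-enabled web method the $.ajax call above assumes. Only the EDI.asmx name and the GetTestJSONString method name come from the post; the class layout and the hard-coded JSON string are illustrative assumptions, and a real service would serialize its data objects instead. ASP.NET 3.5 wraps the response in a "d" property, which is why the success handler above strips it before parsing.

    using System.Web.Script.Services;
    using System.Web.Services;

    [WebService(Namespace = "http://tempuri.org/")]
    [ScriptService] // allows the method to be called as EDI.asmx/GetTestJSONString with a JSON content type
    public class EDI : WebService
    {
        [WebMethod]
        [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public string GetTestJSONString()
        {
            // Hard-coded purely to exercise the jqGrid jsonReader. Note the
            // ISO-style date strings instead of the "new Date(...)" literals
            // shown above, which are not valid JSON and force the eval() workaround.
            return "{\"total\":\"1\",\"page\":\"1\",\"records\":\"2\",\"rows\":[" +
                   "{\"ID\":1,\"PartnerID\":\"BCN\",\"DateTimeInserted\":\"2008-05-29T12:08:45\"}," +
                   "{\"ID\":2,\"PartnerID\":\"EHG\",\"DateTimeInserted\":\"2009-02-25T14:26:32\"}]}";
        }
    }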

    Read the article

  • Code in FOR loop not executed

    - by androniennn
I have a ProgressDialog that retrieves data from a database in the background by executing a php script. I'm using Google's Gson library. The php script works fine when executed from a browser: {"surveys":[{"id_survey":"1","question_survey":"Are you happy with the actual government?","answer_yes":"50","answer_no":"20"}],"success":1} However, the background processing behind the ProgressDialog is not working correctly: @Override protected Void doInBackground(Void... params) { String url = "http://192.168.1.4/tn_surveys/get_all_surveys.php"; HttpGet getRequest = new HttpGet(url); Log.d("GETREQUEST",getRequest.toString()); try { DefaultHttpClient httpClient = new DefaultHttpClient(); Log.d("URL1",url); HttpResponse getResponse = httpClient.execute(getRequest); Log.d("GETRESPONSE",getResponse.toString()); final int statusCode = getResponse.getStatusLine().getStatusCode(); Log.d("STATUSCODE",Integer.toString(statusCode)); Log.d("HTTPSTATUSOK",Integer.toString(HttpStatus.SC_OK)); if (statusCode != HttpStatus.SC_OK) { Log.w(getClass().getSimpleName(), "Error " + statusCode + " for URL " + url); return null; } HttpEntity getResponseEntity = getResponse.getEntity(); Log.d("RESPONSEENTITY",getResponseEntity.toString()); InputStream httpResponseStream = getResponseEntity.getContent(); Log.d("HTTPRESPONSESTREAM",httpResponseStream.toString()); Reader inputStreamReader = new InputStreamReader(httpResponseStream); Gson gson = new Gson(); this.response = gson.fromJson(inputStreamReader, Response.class); } catch (IOException e) { getRequest.abort(); Log.w(getClass().getSimpleName(), "Error for URL " + url, e); } return null; } @Override protected void onPostExecute(Void result) { super.onPostExecute(result); Log.d("HELLO","HELLO"); StringBuilder builder = new StringBuilder(); Log.d("STRINGBUILDER","STRINGBUILDER"); for (Survey survey : this.response.data) { String x= survey.getQuestion_survey(); Log.d("QUESTION",x); builder.append(String.format("<br>ID Survey: <b>%s</b><br> <br>Question: <b>%s</b><br> <br>Answer YES: <b>%s</b><br> <br>Answer NO: <b>%s</b><br><br><br>", survey.getId_survey(), survey.getQuestion_survey(),survey.getAnswer_yes(),survey.getAnswer_no())); } Log.d("OUT FOR","OUT"); capitalTextView.setText(Html.fromHtml(builder.toString())); progressDialog.cancel(); } The HELLO log is displayed. The STRINGBUILDER log is displayed. The QUESTION log is NOT displayed. The OUT FOR log is displayed. Survey class: public class Survey { int id_survey; String question_survey; int answer_yes; int answer_no; public Survey() { this.id_survey = 0; this.question_survey = ""; this.answer_yes=0; this.answer_no=0; } public int getId_survey() { return id_survey; } public String getQuestion_survey() { return question_survey; } public int getAnswer_yes() { return answer_yes; } public int getAnswer_no() { return answer_no; } } Response class: public class Response { ArrayList<Survey> data; public Response() { data = new ArrayList<Survey>(); } } Any help on WHY the code in the FOR loop is not executed would be appreciated. Thank you for helping.

    Read the article

  • What is the fastest way to Initialize a multi-dimensional array to non-default values in .NET?

    - by AMissico
How do I initialize a multi-dimensional array of a primitive type as fast as possible? I am stuck with using multi-dimensional arrays. My problem is performance. The following routine initializes a 100x100 array in approx. 500 ticks. Removing the int.MaxValue initialization results in approx. 180 ticks just for the looping. Approximately 100 ticks to create the array without looping and without initializing to int.MaxValue. Routines similar to this are called a few hundred-thousand to several million times during a "run". The array size will not change during a run and arrays are created one at a time, used, then discarded, and a new array created. A "run" may last from one minute (using 10x10 arrays) to forty-five minutes (100x100). The application creates arrays of int, bool, and struct. There can be multiple "runs" executing at the same time, but they are not, because performance degrades terribly. I am using 100x100 as a baseline. I am open to suggestions on how to optimize this non-default initialization of an array. One idea I had is to use a smaller primitive type when available. For instance, using byte instead of int saves 100 ticks. I would be happy with this, but I am hoping that I don't have to change the primitive data type. public int[,] CreateArray(Size size) { int[,] array = new int[size.Width, size.Height]; for (int x = 0; x < size.Width; x++) { for (int y = 0; y < size.Height; y++) { array[x, y] = int.MaxValue; } } return array; } Down to 450 ticks with the following: public int[,] CreateArray1(Size size) { int iX = size.Width; int iY = size.Height; int[,] array = new int[iX, iY]; for (int x = 0; x < iX; x++) { for (int y = 0; y < iY; y++) { array[x, y] = int.MaxValue; } } return array; } Down to approximately 165 ticks after a one-time initialization of 2800 ticks. (See my answer below.) If I can get stackalloc to work with multi-dimensional arrays, I should be able to get the same performance without having to initialize the private static array. private static bool _arrayInitialized5; private static int[,] _array5; public static int[,] CreateArray5(Size size) { if (!_arrayInitialized5) { int iX = size.Width; int iY = size.Height; _array5 = new int[iX, iY]; for (int x = 0; x < iX; x++) { for (int y = 0; y < iY; y++) { _array5[x, y] = int.MaxValue; } } _arrayInitialized5 = true; } return (int[,])_array5.Clone(); } Down to approximately 165 ticks without using the "clone technique" above. (See my answer below.) I am sure I can get the ticks lower, if I can just figure out the return of CreateArray9. public unsafe static int[,] CreateArray8(Size size) { int iX = size.Width; int iY = size.Height; int[,] array = new int[iX, iY]; fixed (int* pfixed = array) { int count = array.Length; for (int* p = pfixed; count-- > 0; p++) *p = int.MaxValue; } return array; }
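Not one of the timings above, but another variant that may be worth measuring: keep a fully initialized template and block-copy its raw bytes into each new array. Buffer.BlockCopy accepts multi-dimensional arrays of primitive types, so no unsafe code is needed. The member names are illustrative, and the static template would need a lock or [ThreadStatic] if several runs ever do execute at the same time.

    private static int[,] _template;

    public static int[,] CreateArrayBlockCopy(Size size)
    {
        int[,] array = new int[size.Width, size.Height];
        if (_template == null || _template.Length != array.Length)
        {
            // One-time cost: build a template filled with int.MaxValue.
            _template = new int[size.Width, size.Height];
            for (int x = 0; x < size.Width; x++)
                for (int y = 0; y < size.Height; y++)
                    _template[x, y] = int.MaxValue;
        }
        // Offsets and count are in bytes, so multiply the element count by sizeof(int).
        Buffer.BlockCopy(_template, 0, array, 0, array.Length * sizeof(int));
        return array;
    }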

    Read the article
