Search Results

Search found 3121 results on 125 pages for 'leaving employee'.

  • MS Access MSChart.Graph.8 not printing

    - by Tanj
    Software: Microsoft Access 2007 SP2. Database file version: Access 2000.

    I have an Access program that I inherited from a previous employee. It uses forms for reports, and since I don't have much experience in Access I have continued to do this. I created a copy of the program for another project and modified it to suit. I am having trouble getting more than one chart to print. All the charts display in form view and they all have the same properties (except data, position, etc.), yet for some reason they are not printing. They don't even show up in the print preview. I think it must be something with the graphs themselves, as they sometimes lose all their information: I have to open a graph in edit mode and change the data source from column to row and back again so that it gets redrawn (Refresh doesn't fix it). So right now I don't even have a clue as to where to look, so ideas are welcome.

    Edit #1: It seems to be a problem with linking to an unbound form. The Subform Field Linker reports: "Can't build a link between unbound forms." The query for the main form is:

        SELECT tTest.ixTest, tMotorTypes.ixMotorType, tMotorTypes.asMotorType,
               tMotorTypes.fDeprecated, tTestType.asTest, tTest.asSerialNum,
               tTest.asOrderNum, tTest.asFrameNum, tTest.asRotorNum,
               tTest.asOperator, tTest.iStation, tTest.dtTestDate, tTest.ixTestType
        FROM tMotorTypes
        INNER JOIN (tTestType INNER JOIN tTest ON tTestType.ixTestType = tTest.ixTestType)
            ON tMotorTypes.ixMotorType = tTest.ixMotorType;

    The query for the chart is:

        SELECT qGraphRSTTemperatures.Frequency, qGraphRSTTemperatures.[Drive End],
               qGraphRSTTemperatures.[Non Drive End], qGraphRSTTemperatures.[Air In],
               qGraphRSTTemperatures.Core
        FROM qGraphRSTTemperatures
        ORDER BY qGraphRSTTemperatures.ixTemperature;

    Query qGraphRSTTemperatures:

        SELECT tElectricalData.dblFrequency AS Frequency,
               tTemperatures.dblDrvEnd AS [Drive End],
               tTemperatures.dblNonDrvEnd AS [Non Drive End],
               tTemperatures.dblAirIn AS [Air In],
               tTemperatures.dblCore AS Core,
               tSubTest.ixTest, tTemperatures.ixTemperature
        FROM (tSubTest INNER JOIN tElectricalData ON tSubTest.ixSubTest = tElectricalData.ixSubTest)
        LEFT JOIN tTemperatures ON tElectricalData.ixElectrical = tTemperatures.ixElectrical
        WHERE (((tSubTest.ixSubTestType) = 5))
        ORDER BY tSubTest.ixTest, tTemperatures.ixTemperature;

    So how come, in form view, the chart shows the correct data when linked with Child field ixTest and Master field ixTest, but won't print the graph? The graph will print if I remove the links, but then I get all the data from the chart query, since it is no longer limited by ixTest.

    Edit #2: It seems to be a data retrieval/rendering issue in printing. Is there anything in printing that changes the context of records with respect to parent/child relationships?

  • PIX 501, static route to D-Link router (different subnet)

    - by ra170
    I have a PIX 501 Cisco firewall with internal IP 192.168.10.1. I have connected a D-Link router (DIR-655) to the PIX 501. The D-Link router has internal IP 192.168.0.1. The topology looks something like this:

        |pix 501| has 192.168.10.1    |DIR-655| has 192.168.0.1

        1. |cable modem|----|pix 501|-------|DIR-655|-----PC

        2. PC--------|pix 501|---------|DIR-655|
                         |
                   |cable modem|

    When I'm on the wireless network (DIR-655) with an assigned IP of 192.168.0.x, I can cross the subnet and connect to my firewall at 192.168.10.1 (pic. 1). The problem is that when I'm on the 192.168.10.x network, I can't connect to anything on the 192.168.0.x network (pic. 2). I've tried entering a static route like this:

        route inside 192.168.0.0 255.255.255.0 192.168.10.1 1

    I also tried assigning a static IP (192.168.10.30) to the WAN interface on the DIR-655 and then tried this:

        route inside 192.168.0.0 255.255.255.0 192.168.10.30 1

    But I still can't connect to 192.168.0.1 or anything on that subnet. Is there a way to set up a static route? Would adding a separate router between the PIX 501 and the DIR-655 help? I would think that a static route like this should take care of it, but it doesn't. This is my route config and NAT:

        (config)# sh route
        outside 0.0.0.0 0.0.0.0 (outside_IP) 1 DHCP static
        outside (outside_IP) 255.255.248.0 (outside_IP) 1 CONNECT static
        inside 192.168.0.0 255.255.255.0 192.168.10.1 1 OTHER static
        inside 192.168.10.0 255.255.255.0 192.168.10.1 1 CONNECT static

    or (route inside 192.168.0.0 255.255.255.0 192.168.10.30 1)

        (config)# sh nat
        nat (inside) 1 192.168.1.0 255.255.255.0 0 0
        nat (inside) 1 192.168.10.0 255.255.255.0 0 0
        nat (inside) 1 0.0.0.0 0.0.0.0 0 0

    I ended up turning the DIR-655 into an access point (turning off DHCP and plugging the cable from the PIX LAN interface into one of the LAN ports on the DIR-655, leaving the WAN port empty). That works, in that the DIR-655 is on the same subnet now and I can access every machine. However, the question remains: why can't I simply route between those two? Would a router between them help? One of the reasons I ask is that the PIX 501 has only 10 licenses, so now I'm using almost all of them (I have a few computers, iPhones, a PS3, a print server, etc.). I would really appreciate some help! Thanks.

  • HTTPS request to a specific load-balanced virtual host (using Shibboleth for SSO)?

    - by Gary S. Weaver
    In one environment, we have three load-balanced servers, each running a single Tomcat instance fronted by two different Apache virtual hosts. Each of those two virtual hosts (served by all three servers) has its own load balancer. Internally, the first host (we'll call it barfoo) is served on port 443 (HTTPS) with its cert, and the second host (we'll call it foobar) is served on port 1443 (HTTPS). When you hit foobar, it goes to the load balancer, which is using IP affinity for that host, so you can easily test login/HTTPS on one of the servers serving foobar, but not the others (because you keep getting that server for the lifetime of the LB session, iirc). In addition, each of the servers is using Shibboleth v2 for authN/SSO, via mod_shib (iirc). So a normal request to foobar hits the LB, is directed to the third server (and will be from then on, for as long as the LB session lasts), then Apache, then the Shibboleth SP, which looks at the request and makes you log in via negotiation with the Shibboleth IdP; then you hit Apache again, which in turn hits Tomcat, renders, and returns the response. (I'm leaving out some steps there.)

    We'd like to hit one of the individual servers (foobar-03.acme.org, which we'll say has IP 1.2.3.4) via HTTPS, skipping the load balancer, so we first try putting this in /etc/hosts:

        1.2.3.4 foobar.acme.org

    But since foobar.acme.org is a secondary virtual host running on 1443, the browser attempts to get barfoo.acme.org (the default host on 443) rather than foobar.acme.org at port 1443, and sees that the cert for barfoo.acme.org is invalid for this case, since it doesn't match the request's host, foobar.acme.org. I thought an ssh tunnel might be easy enough, so I tried:

        ssh -L 7777:foobar-03.acme.org:1443 [email protected]

    I tried just hitting https://localhost:7777/webappname in a browser, but when the Shibboleth login is over, it again tries to redirect to barfoo.acme.org, which is the default host for 443, and we get into an infinite redirect loop. I then tried setting up an SSH tunnel with privileged port 443 locally going to port 1443 of foobar-03.acme.org as the hostname for that virtual host:

        sudo ssh -L 443:foobar-03.acme.org:1443 [email protected]

    I also edited /etc/hosts to add:

        127.0.0.1 foobar.acme.org

    This finally worked, and I was able to get the browser to hit the individual HTTPS host at https://foobar.acme.org/webappname, bypassing the load balancer. This was a bit of a pain and wouldn't work for everyone, due to the requirement to use the local 443 port and ssh to the server. Is there an easier way to browse to and log into an individual host in this case?

  • How can I change how OS X's 'say' command pronounces a word?

    - by jwhitlock
    OS X's say command is useful for some tasks (such as Skype's "notify me when a contact comes online"), but it pronounces some names incorrectly. Is there a way to teach say to pronounce a word differently? For example, try:

        say "Hi, Joel Spolsky"

    The 'ol' sounds like 'ball' rather than 'old'. I'd like to add an exception that says "pronounce Spolsky like this", rather than try to teach new linguistic rules. I bet there is a way, since it can pronounce "iphone" as Apple wants.

    Update - After some research, here's what I've learned:

    Text-to-speech is split between turning the text into phonemes, and then turning the phonemes into audio using a voice. Changing the voice doesn't affect the phonemes. The Speech Synthesis Manager has some functions for turning text to phonemes, and a method for registering a speech dictionary that will add new text-phoneme maps. However, Apple's speech dictionary must be in a binary form - I didn't find any plist XML. Using dtrace while running say, I found some interesting files opened in /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources. This is probably the speech dictionary, but they are all binary, except for Homophones, which is XML. Adding entries to Homophones does nothing - it is probably used in speech-to-text. They are also code-signed by Apple - changing them may prevent some programs from working.

        PrefixDictionary CartNames CartLite SymbolDictionary Homophones

    There are ways to add text versions of application interface elements so VoiceOver works, a lot of which a developer gets for free, but there are tricky bits. The standard here appears to be to use a phonetic spelling as needed.

    My guesses are: say is a light layer of code on top of the Speech Synthesis Manager. It would be easy for the Apple devs to add a command line option to take the path to a speech dictionary plist for alternate phoneme mapping, but they didn't. It may be a useful open-source project to write a better say. Skype probably uses the Speech Synthesis Manager directly, leaving no hooks to change the way my friends' names are pronounced, other than spelling them phonetically, which is silly.

    The easiest way to make a command line version of say is how JRobert suggested. Here's my quick implementation, using Doug Harris's spelling suggestion:

        #!/bin/sh
        echo $@ | tr '[A-Z]' '[a-z]' | sed "s/spolsky/spowlsky/g" | /usr/bin/say

    Finally, some fun command line stuff:

        # Apple is weird
        sqlite3 /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources/Tuples .dump

        # Get too much information about what files are being opened
        sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }'

        # Just fun
        say -v bad "Joel Spolsky Spolsky Spolsky Spolsky Spolsky, Joel Spolsky Spolsky Spolsky Spolsky Spolsky"
        echo "scale=1000; 4*a(1)" | bc -l | say

  • protobuf-net NOT faster than binary serialization?

    - by Ashish Gupta
    I wrote a program to serialize a 'Person' class using XmlSerializer, BinaryFormatter, and protobuf-net. I thought protobuf-net should be faster than the other two. Protobuf serialization was faster than XML serialization but much slower than binary serialization. Is my understanding incorrect? Please help me understand this. Thank you for the help. Following is the output:

        Person got created using protocol buffer in 347 milliseconds
        Person got created using XML in 1462 milliseconds
        Person got created using binary in 2 milliseconds

    Code below:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using ProtoBuf;
        using System.IO;
        using System.Diagnostics;
        using System.Runtime.Serialization.Formatters.Binary;

        namespace ProtocolBuffers
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string XMLSerializedFileName = "PersonXMLSerialized.xml";
                    string ProtocolBufferFileName = "PersonProtocalBuffer.bin";
                    string BinarySerializedFileName = "PersonBinary.bin";

                    var person = new Person
                    {
                        Id = 12345,
                        Name = "Fred",
                        Address = new Address { Line1 = "Flat 1", Line2 = "The Meadows" }
                    };

                    Stopwatch watch = Stopwatch.StartNew();   // StartNew already starts the stopwatch
                    using (var file = File.Create(ProtocolBufferFileName))
                    {
                        Serializer.Serialize(file, person);
                    }
                    watch.Stop();
                    Console.WriteLine(watch.ElapsedMilliseconds.ToString());
                    Console.WriteLine("Person got created using protocol buffer in " + watch.ElapsedMilliseconds.ToString() + " milliseconds");

                    watch.Reset();
                    watch.Start();
                    System.Xml.Serialization.XmlSerializer x = new System.Xml.Serialization.XmlSerializer(person.GetType());
                    using (TextWriter w = new StreamWriter(XMLSerializedFileName))
                    {
                        x.Serialize(w, person);
                    }
                    watch.Stop();
                    Console.WriteLine(watch.ElapsedMilliseconds.ToString());
                    Console.WriteLine("Person got created using XML in " + watch.ElapsedMilliseconds.ToString() + " milliseconds");

                    watch.Reset();
                    watch.Start();
                    using (Stream stream = File.Open(BinarySerializedFileName, FileMode.Create))
                    {
                        BinaryFormatter bformatter = new BinaryFormatter();
                        //Console.WriteLine("Writing Employee Information");
                        bformatter.Serialize(stream, person);
                    }
                    watch.Stop();
                    Console.WriteLine(watch.ElapsedMilliseconds.ToString());
                    Console.WriteLine("Person got created using binary in " + watch.ElapsedMilliseconds.ToString() + " milliseconds");

                    Console.ReadLine();
                }
            }

            [ProtoContract]
            [Serializable]
            public class Person
            {
                [ProtoMember(1)] public int Id { get; set; }
                [ProtoMember(2)] public string Name { get; set; }
                [ProtoMember(3)] public Address Address { get; set; }
            }

            [ProtoContract]
            [Serializable]
            public class Address
            {
                [ProtoMember(1)] public string Line1 { get; set; }
                [ProtoMember(2)] public string Line2 { get; set; }
            }
        }

  • Why is my Java application not working after applying the "Web Look and Feel" theme?

    - by Vasu
    I have developed an "Employee Management System" Java project. To improve the UI appearance, I integrated "Web Look and Feel" into my application. The theme is applied correctly. But here the problem arises: at first I ran the Java application without connecting to the Oracle database, and it ran and worked perfectly. But when I connected the application to the Oracle database and ran it again, the application takes a long time to open and gets stuck.

    Code for applying the theme:

        try {
            WebLookAndFeel.install();
        } catch (Exception ex) {
            ex.printStackTrace();
        }

    Code for connecting to the database:

        if (con == null) {
            File sd = new File("");
            File in = new File(sd.getAbsolutePath() + File.separator + "conf.properties");
            File dir = new File(sd.getAbsolutePath() + File.separator + "conf.properties");
            if (!dir.exists()) {
                // dir.mkdir();
                dir.createNewFile();
                Properties pro = new Properties();
                pro.load(new FileInputStream(in));
                pro.setProperty("driverclass", "oracle.jdbc.driver.OracleDriver");
                pro.setProperty("url", "jdbc:oracle:thin:@192.168.1.1:1521:main");
                pro.setProperty("username", "gb16");
                pro.setProperty("passwd", "gb16");
                try {
                    FileOutputStream out = new FileOutputStream(in);
                    pro.store(out, "Human Management System initialization properties");
                    out.flush();
                    out.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            } else {
                // System.out.println("Already exists");
            }
            Properties pro = new Properties();
            pro.load(new FileInputStream(in));
            Class.forName(pro.getProperty("driverclass"));
            con = DriverManager.getConnection(pro.getProperty("url"),
                    pro.getProperty("username"), pro.getProperty("passwd"));
            st = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
            st = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
        } else {
            return con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
        }

    Without the theme, the application connected to the database works correctly. Please help me solve this issue. Thanks in advance.

  • Can Remote Desktop Services be deployed and administered by PowerShell alone, without a Domain, in Windows Server 2012 and 2012 R2?

    - by Warren P
    Windows Server 2008 R2 allowed deployment of Terminal Server (Remote Desktop Services) without a domain, and without any insistence on domains. This was very useful, especially for standalone virtual or cloud deployments of a server that is managed remotely for a remote client who has no need or desire for any Active Directory or domain features. This has become steadily more difficult as Microsoft restricts its technologies further in each Windows release. With Windows Server 2012, configuring licensing for Remote Desktop Services is more difficult when not on a domain, but still possible. With Windows Server 2012 R2 (at least in the preview) the barriers are now severe: the Add/Remove Roles and Features wizard in Windows Server 2012 R2 has a special RDS deployment mode with a rule that says if you aren't on a domain you can't deploy; it tells you to create or join a domain first. This of course comes in direct conflict with the fact that an Active Directory domain controller should not be the same machine as a terminal server machine. So Microsoft's technology is not so much a Cloud Operating System as a Cluster of Unwanted Nodes, needed to support the one machine I actually WANT to deploy. This is gross, and so I am trying to find a workaround. However, if you skip that wizard and just check the checkboxes in the main Roles/Features wizard, you can deploy the features, but the UI is not there to configure them, and when you go back to the RDS configuration page on the roles wizard, you get a message saying you cannot administer your Remote Desktop Services system while logged in as a local-computer administrator: although you have all the admin privileges you could have (in your workgroup-based system), the RDS configuration UI will not accept those credentials and let you continue.

    My question in brief is: can I still somehow obtain the following end result?

    1. I need to allow 10-20 users per system to have an RDS (TS) session.
    2. I do not need any of the fancy-pants RDS options, unless Microsoft somehow depends on those features being present. I believe I need the "RDS Session Host", as this is the guts of "Terminal Server". Microsoft says it is the "full Windows desktop for Remote Desktop Services client".
    3. I need to configure licensing so that the grace period does not expire, leaving my RDS non-functional, so this probably means I need a way to configure TS CALs.

    If all of the above could technically be done with judicious use of PowerShell, I am prepared to consider developing all the PowerShell scripts I would need. I'm not asking someone to write that for me. What I'm asking is: does anyone know if there is a technical impediment to what I want to do above, other than the deliberate crippling of the 2012 R2 UI for workgroup users? Would the underlying technologies all still work if I manipulate and control them from a PowerShell script? Obviously a one-word yes-or-no answer isn't that useful to anyone, so the question is really: yes or no, and why? And in the case the answer is yes, then how?

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of Google searches and days of trying, I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My web servers are two single-NIC lighttpd machines and one dual-NIC lighttpd machine. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).

    Requirements:

    - All 3 web servers running lighttpd should be balanced.
    - Two options for load balancing: best would be if, when server1 is busy, server2 takes over; if server2 is busy, server3 takes over, etc. Or round-robin style, evenly distributed load, e.g. server1 takes the first call, server2 the second, etc.
    - All requests should be treated the same way (no URL rewriting or so on).
    - Sent host headers have to be passed to every server as the HTTP Host header, speaking of "server1", "server1.company.internal" and "10.211.1.1".

    My approach:

        acl all src all
        acl manager proto cache_object
        http_port 80 accel defaultsite=server1.company.internal vhost

        # reverse proxy entries
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
        cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1
        cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
        cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2

        # decl of names of squid host
        acl registered_name_hostdomain dstdomain server1.company.internal
        acl registered_name_host dstdomain server1
        # ip of squid host
        acl registered_name_ip dstdomain 10.211.2.1

        # access: redirects the correct squid hostname
        http_access allow registered_name_hostdomain
        http_access allow registered_name_host
        http_access allow registered_name_ip
        http_access deny all

        cache_peer_access server1_nic1 allow registered_name_hostdomain
        cache_peer_access server1_nic1 allow registered_name_host
        cache_peer_access server1_nic1 allow registered_name_ip
        cache_peer_access server2_nic1 allow registered_name_hostdomain
        cache_peer_access server2_nic1 allow registered_name_host
        cache_peer_access server2_nic1 allow registered_name_ip
        cache_peer_access server3_nic1 allow registered_name_hostdomain
        cache_peer_access server3_nic1 allow registered_name_host
        cache_peer_access server3_nic1 allow registered_name_ip
        cache_peer_access server3_nic2 allow registered_name_hostdomain
        cache_peer_access server3_nic2 allow registered_name_host
        cache_peer_access server3_nic2 allow registered_name_ip
        cache_peer_access server1_nic1 deny all
        cache_peer_access server2_nic1 deny all
        cache_peer_access server3_nic1 deny all
        cache_peer_access server3_nic2 deny all

        never_direct allow all

    Problems:

    - The load balancer does not balance to anything other than the first server. Only if the first server is killed in some way will the second take over. I have seen the others working at some point, but definitely not with the intended load balancing described above.
    - If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend web server, and this always depends on the defaultsite= parameter - probably because the Host header on the request to Squid is not set and gets replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found for this is the current approach with cache_peer_access.

    Questions:

    - Does cache_peer_access influence the round-robin?
    - Is there a better workaround to pass the Host header to the backend web servers?
    - Which parameters increase the speed of load balancing, or does anyone have a better approach?

    -Martin

  • Get confirm value in VB.NET

    - by user1805641
    I have a hidden asp:Button in a Repeater. In the VB.NET code-behind I use the Repeater's ItemCommand event to handle the click event within the Repeater. There's a check whether the user is already recording a project. If so, and he wants to start a new one, a confirm box should appear asking "Are you sure?". How can I access the clicked value from confirm?

        <asp:Repeater ID="Repeater1" runat="server" OnItemCommand="Repeater1_ItemCommand">
            <ItemTemplate>
                <div class="tile user_view user_<%# Eval("employeeName") %>">
                    <div class="tilesheight"></div>
                    <div class="element">
                        <asp:Button ID="Button1" CssClass="hiddenbutton" runat="server" />
                        Index: <asp:Label ID="Label1" runat="server" Text='<%# Eval("index") %>' /><br />
                        <hr class="hr" />
                        customer: <asp:Label ID="CustomerLabel" runat="server" Text='<%# Eval("customer") %>' /><br />
                        <hr class="hr" />
                        order: <asp:Label ID="OrderNoLabel" runat="server" Text='<%# Eval("orderNo") %>' /><br />
                        <asp:Label ID="DescriptionLabel" runat="server" Text='<%# Eval("description") %>' /><br />
                        <hr class="hr" />
                    </div>
                </div>
            </ItemTemplate>
        </asp:Repeater>

    Code-behind:

        If empRecs.Contains(projects.Item(index.Text).employeeID) Then
            'Catch index of recording order
            i = empRecs.IndexOf(projects.Item(index.Text).employeeID)
            Page.ClientScript.RegisterStartupScript(Me.GetType, "confirm",
                "confirm('Order " & empRecs(i + 2) & " already recording. Would you like to start a new one?')", True)
            'If user clicks OK
            insertData()
        End If

    Other solutions use the Click event and a hidden field. But the problem is, I don't want the confirm box to appear every time the button is clicked - only when empRecs contains an employee. Thanks for helping.

  • How to replace invalid characters in XML using JavaScript or PHP

    - by Raind
    Hi, I need help with the following. Running PHP, JavaScript, MySQL, XML:

    1) Retrieve data from MySQL and store it in an XML file.
    2) Use a JavaScript function to load the XML file (that stores those data).
    3) It produces invalid characters in the XML file.

    STEP 1 - Sample of the PHP code, loading from the MySQL DB and storing the data in the XML file:

        $file = fopen("MapDeals2.xml", "w");
        $_xml  = "\n";   // the XML declaration here was lost from the post
        $_xml .= "\n";   // the opening root tag here was lost too
        while ($row1_ThisWeek = mysql_fetch_array($result1_ThisWeek)) {
            $rRName = $row1_ThisWeek['Retailer_Name'];
            $rRAddress = $row1_ThisWeek['Retailer_Address1'];
            $rRAddressPostCode = $row1_ThisWeek['Retailer_AddressPostCode1'];

            $_xml .= "<DEAL>\n";
            $_xml .= "<DealDescription>" . $d_Description . "</DealDescription>\n";
            $_xml .= "<DealURL>" . $d_URL . "</DealURL>\n";
            $_xml .= "<DealRName>" . $rRName . "</DealRName>\n";
            $_xml .= "<DealRAddress>" . $rRAddress . "</DealRAddress>\n";
            $_xml .= "<DealRPostCode>" . $rRAddressPostCode . "</DealRPostCode>\n";
            $_xml .= "</DEAL>\n";
        }
        $_xml .= "\n";   // the closing root tag here was lost as well
        fwrite($file, $_xml);
        fclose($file);

    STEP 2 - Sample of the JavaScript code, loading the XML file:

        xhttp.open("GET", "Test2.xml", false);
        xhttp.send("");
        xmlDoc = xhttp.responseXML;
        var x = xmlDoc.getElementsByTagName("Employee");
        parser = new DOMParser();
        xmlDoc = parser.parseFromString("MapDeals2.xml", "text/xml");
        for (i = 0; i < x.length; i++) {
            // ... (loop body was lost from the post)
        }

    Is there a solution for the above? Looking forward to hearing from you soon. Cheers
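
    For what it's worth, one common approach (a minimal sketch, not from the original post) is to escape markup characters and strip the code points that XML 1.0 forbids before concatenating a value into the document:

        <?php
        // Sketch of a helper that makes a string safe for an XML 1.0 text node.
        // It strips characters outside the XML 1.0 range, then escapes markup.
        function xml_safe($value) {
            $value = preg_replace(
                '/[^\x{0009}\x{000A}\x{000D}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}]/u',
                '', $value);
            return htmlspecialchars($value, ENT_QUOTES);
        }

        // Used in the loop above, e.g.:
        // $_xml .= "<DealRName>" . xml_safe($rRName) . "</DealRName>\n";

    If the database values are not UTF-8, converting them first (e.g. with iconv()) avoids the /u modifier rejecting the string.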

  • Why is my concurrency capacity so low for my web app on a LAMP EC2 instance?

    - by AMF
    I come from a web-developer background and have been humming along building my PHP app, using the CakePHP framework. The problem arose when I began ab (Apache Bench) testing on the Amazon EC2 instance where the app resides. I'm getting pretty horrendous average page load times, even though I'm running a c1.medium instance (2 cores, 2GB RAM), and I think I'm doing everything right. I would run:

        ab -n 200 -c 20 http://localhost/heavy-but-view-cached-page.php

    Here are the results:

        Concurrency Level:      20
        Time taken for tests:   48.197 seconds
        Complete requests:      200
        Failed requests:        0
        Write errors:           0
        Total transferred:      392111200 bytes
        HTML transferred:       392047600 bytes
        Requests per second:    4.15 [#/sec] (mean)
        Time per request:       4819.723 [ms] (mean)
        Time per request:       240.986 [ms] (mean, across all concurrent requests)
        Transfer rate:          7944.88 [Kbytes/sec] received

    While the ab test is running, I run vmstat, which shows that swap stays at 0, CPU is constantly at 80-100% (although I'm not sure I can trust this on a VM), and RAM utilization ramps up to about 1.6G (leaving 400M free). Load goes up to about 8 and the site slows to a crawl.

    Here's what I think I'm doing right on the code side:

    - In the Chrome browser, uncached pages typically load in 800-1000ms, and cached pages load in 300-500ms. Not stunning, but not terrible either.
    - Thanks to view caching, there might be at most one DB query per page load, to write session data. So we can rule out a DB bottleneck.
    - I have APC on.
    - I am using memcached to serve the view cache and other site caches.
    - The xhprof code profiler shows that cached pages take up 10MB-40MB in memory and 100ms-1000ms in wall time. Pages that would be the worst offenders look something like this in xhprof:

        Total Incl. Wall Time (microsec):  330,143 microsecs
        Total Incl. CPU (microsecs):       320,019 microsecs
        Total Incl. MemUse (bytes):        36,786,192 bytes
        Total Incl. PeakMemUse (bytes):    46,667,008 bytes
        Number of Function Calls:          5,195

    My Apache config:

        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 3

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          120
            MaxRequestsPerChild 1000
        </IfModule>

    Is there something wrong with the server? Some gotcha with EC2? Or is it my code? Some obvious setting I should look into? Too many DNS lookups? What am I missing? I really want to get to 1,000-concurrency capacity, but at this rate, it ain't gonna happen.

  • ODBC: when is the best time to create my database?

    - by mawg
    I have a Windows program which generates PHP forms which will be filled in later. Those PHP forms will populate a database. It looks very much like MySQL, but I can't be certain, so let's call it ODBC. And, yes, it does have to be a Windows program. There will also be PHP forms which query the database - examine which tables and fields it contains - and then generate forms which can be used to search the database (e.g., it finds a table with fields "employee_name", etc. and generates a form which lets you search based on employee name).

    Let's call these design time and run time. At design time, some manager or IT guy or similar gets to define the nature of the database, and at run time (1) a worker fills in the form daily and (2) management can extract reports. Here's my question: given that the database is defined at design time (and populated at run time), where and how is it best to do so?

    1) I could use an ODBC interface from the Windows program, but I am having difficulty finding something good to work with in Delphi. Things like ADO and Firebird tend to expect you to already have a database and allow you to manipulate it, but I can find no code example of how to create a database and some tables, so...

    2) I could use DOS commands from Delphi in my Windows program. I just tried and got a response to mysql --version, but am not sure if MySQL etc. are more interactive. That is, can I use a script file, or a very long stacked command with semicolons and returns separating statements? E.g. 'CREATE DATABASE db; CREATE TABLE t1;'

    3) Since the best way to work with databases seems to be PHP, perhaps my Windows program could spit out a PHP page which would, when run in a browser, create the database.

    I have tried to make this as uncomplicated as I can, but please feel free to ask questions. It may be that there are several valid ways, but there is probably one 'better' solution in terms of ease of implementation or maintenance.

    Better scratch option 3: what if the user later wants to come back and have the Windows program change the input form? It needs to update the database too.
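
    As a point of reference for options 2 and 3, creating the database and a table from a script is just stacked DDL statements; here is a minimal PHP sketch (hypothetical credentials, assuming MySQL and the PDO driver):

        <?php
        // Sketch: run the same stacked DDL a generated page (option 3) or a
        // script file (option 2) would contain. Credentials are placeholders.
        $pdo = new PDO('mysql:host=localhost', 'root', 'secret');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $pdo->exec('CREATE DATABASE IF NOT EXISTS db');
        $pdo->exec('CREATE TABLE IF NOT EXISTS db.t1 (
            id INT AUTO_INCREMENT PRIMARY KEY,
            employee_name VARCHAR(100) NOT NULL
        )');

    The same two statements, separated by semicolons, also work as a script piped to the mysql client, which is the kind of non-interactive use option 2 asks about.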

  • Bridging and iptables SNAT conflict

    - by sad_admin
    Hello, I am working on a setup here and have it working with one minor exception: devices on one side of my bridge aren't getting SNATed to the Internet.

    The diagram / overview:

        Primary_Network (Site_A)
                 |
                 |
        Internet ------- Linux_Bridge_GW (GW)
                 |
                 |
        Secondary/CoLo Site (Site_B)

    Here is the setup:

    1) Site_A has all the production servers and workstations.
    2) Site_B has a set of servers that we would like to fail over to and also serve our Internet-facing services from.
    3) GW has two interfaces that are trunked and carrying the appropriate VLAN traffic (allowing layer-2 propagation of traffic between sites) // this all works perfectly fine.
    4) The problem being encountered is that hosts at Site_B have their default GW at Site_A (same subnet); GW does not have IPs on the VLANs that are being passed.
    5) All hosts at Site_A can reach the Internet without problem.
    6) GW has an address on a subnet that is ONLY for Internet-destined traffic. (This was done so that Websense would not have to parse unnecessary traffic. We use this VLAN as the monitor port's source on the switch where Websense is sitting.)

    What I think is happening:

    1) A packet/frame comes in on the physdev at Site_B destined for the Internet.
    2) The kernel sees the packet and forwards it out the other side of the bridge to that host's default GW.
    3) Site_A (containing the core network's default GW) sees that the packet is destined for a host it doesn't know about, so it sends it to its default GW (the Linux bridge, since it's Internet-bound).
    4) The kernel says "Hey, I've seen you before" and therefore doesn't SNAT the packet, and sends it out to the Internet, where it's black-holed.

    Why I think it's happening:

    1) A tcpdump on the Internet-facing NIC shows the packet leaving the interface with the private address as its source.

    What I would like:

    1) Have the packet SNATed.
    2) Something like the below would be awesome:
       a) a packet comes in from Site_B
       b) the kernel sees that the packet is NOT destined for itself or any private address
       c) the kernel says "OK, well since you're destined for the Internet I'm going to send you out this interface rather than forward you to your normal default GW that's WAAAY over there."
       d) a packet comes in from the Internet and is sent out the appropriate bridge physdev depending on which site the destination host is at.

    Thanks for any assistance or guidance that you are willing to offer.

    Best Regards,
    Sad Admin

  • PHP MSSQL: How to display output when a query returns no rows

    - by vamps
    I have a problem with my PHP-MSSQL query. I have a join table that needs to give a result something like this:

        Department | Group A           | Group B           | Total A+B
                   | WORKHOUR  OTHOUR  | WORKHOUR  OTHOUR  | WORKHOUR  OTHOUR
        HR         | 10        15      | 25        0       | 35        15
        IT         | 5         5       |                   | 5         5
        Admin      |                   | 12        12      | 12        12

    The query counts employees as per a given date (an admin enters data and, once submitted, the query gives the above result). The problem is that the final output is a mess when there is no row to be displayed - the columns get shifted, i.e. with only Group A in IT and only Group B in Admin:

        Department | Group A           | Group B           | Total A+B
                   | WORKHOUR  OTHOUR  | WORKHOUR  OTHOUR  | WORKHOUR  OTHOUR
        HR         | 10        15      | 25        0       | 35        15
        IT         | 5         5       | 5         5       |
        Admin      | 12        12      | 12        12      |

    My question is: how do I prevent this from happening? I've tried everything with while... if else... but the result is still the same. How do I display the output "0" if there are no rows to return? echo "0";

    This is my query:

        SELECT DD.DPT_ID, DPT.DEPARTMENT_NAME, TU.EMP_GROUP,
               sum(DD.WORK_HOUR) AS WORK_HOUR, sum(DD.OT_HOUR) AS OT_HOUR
        FROM DEPARTMENT_DETAIL DD
        LEFT JOIN DEPARTMENT DPT ON (DD.DEPT_ID = DPT.DEPT_ID)
        LEFT JOIN TBL_USERS TU ON (TU.EMP_ID = DD.EMP_ID)
        WHERE DD_DATE >= '2012-01-01' AND DD_DATE <= '2012-01-31' AND TU.EMP_GROUP != 2
        GROUP BY DD.DEPT_ID, DPT.DEPARTMENT_NAME, TU.EMP_GROUP
        ORDER BY DPT.DEPARTMENT_NAME

    This is one of the approaches I've used, but it doesn't return the result that I want:

        while ($row = mssql_fetch_array($displayResult)) {
            if ((!$row["WORK_HOUR"]) && (!$row["OT_HOUR"])) {
                echo "<td>";
                echo "empty";
                echo "&nbsp;</td>";
                echo "<td>";
                echo "empty";
                echo "&nbsp;</td>";
            } else {
                echo "<td>";
                echo $row["WORK_HOUR"];
                echo "&nbsp;</td>";
                echo "<td>";
                echo $row["OT_HOUR"];
                echo "&nbsp;</td>";
            }
        }

    Please help. I've been doing this for 2 days. @__@
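
    One way to get the 0s (a sketch under assumptions - it presumes the column names above and that EMP_GROUP is 0 for Group A and 1 for Group B): fetch all rows first, key them by department and group with 0 defaults, and only then render the table:

        <?php
        // Sketch: build a department x group matrix with 0 defaults, then render,
        // so a missing group can never shift the remaining cells.
        $matrix = array();
        while ($row = mssql_fetch_array($displayResult)) {
            $dept  = $row['DEPARTMENT_NAME'];
            $group = $row['EMP_GROUP'];       // assumed: 0 = Group A, 1 = Group B
            if (!isset($matrix[$dept])) {
                // One slot per group so missing groups still render as 0.
                $matrix[$dept] = array(
                    0 => array('WORK_HOUR' => 0, 'OT_HOUR' => 0),
                    1 => array('WORK_HOUR' => 0, 'OT_HOUR' => 0),
                );
            }
            $matrix[$dept][$group]['WORK_HOUR'] = (int) $row['WORK_HOUR'];
            $matrix[$dept][$group]['OT_HOUR']   = (int) $row['OT_HOUR'];
        }

        foreach ($matrix as $dept => $groups) {
            echo "<tr><td>" . htmlspecialchars($dept) . "</td>";
            foreach ($groups as $g) {          // Group A, then Group B
                echo "<td>" . $g['WORK_HOUR'] . "</td><td>" . $g['OT_HOUR'] . "</td>";
            }
            // Total A+B columns
            echo "<td>" . ($groups[0]['WORK_HOUR'] + $groups[1]['WORK_HOUR']) . "</td>";
            echo "<td>" . ($groups[0]['OT_HOUR'] + $groups[1]['OT_HOUR']) . "</td></tr>";
        }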

  • How do I create links in the cells of a PHP generated table?

    - by typoknig
    I have a table generated from some PHP code that lists a SMALL amount of important information for employees. I want to make it so each row, or at least one element in each row, can be clicked on, so that the user is redirected to ALL of the information (pulled from the MySQL database) related to the employee who was clicked on. I am not sure of the best way to go about this, but I am open to suggestions. I would like to stick to PHP and/or JavaScript. Below is the code for my table:

        <table>
            <tr>
                <td id="content_heading" width="25px">ID</td>
                <td id="content_heading" width="150px">Last Name</td>
                <td id="content_heading" width="150px">First Name</td>
                <td id="content_heading" width="75px">SSN</td>
            </tr>
            <?php
            $user = 'user';
            $pass = 'pass';
            $server = 'localhost';
            $link = mysql_connect($server, $user, $pass);
            if (!$link) {
                die('Could not connect to database!' . mysql_error());
            }
            mysql_select_db('mydb', $link);
            $query = "SELECT * FROM employees";
            $result = mysql_query($query);
            mysql_close($link);
            $num = mysql_num_rows($result);
            for ($i = 0; $i < $num; $i++) {
                $row = mysql_fetch_array($result);
                $class = (($i % 2) == 0) ? "table_odd_row" : "table_even_row";
                echo "<tr class=" . $class . ">";
                echo "<td>" . $row['id'] . "</td>";
                echo "<td>" . $row['l_name'] . "</td>";
                echo "<td>" . $row['f_name'] . "</td>";
                echo "<td>" . $row['ssn'] . "</td>";
                echo "</tr>";
            }
            ?>
        </table>
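
    One simple way (a sketch - details.php and its id parameter are made up for illustration): wrap a cell's value in an anchor carrying the employee's id, then have the target page load the full record:

        <?php
        // In the loop above: link the ID cell to a hypothetical details page.
        echo "<td><a href=\"details.php?id=" . urlencode($row['id']) . "\">"
            . htmlspecialchars($row['id']) . "</a></td>";

        // details.php (hypothetical): fetch everything for one employee.
        $id = mysql_real_escape_string($_GET['id']);
        $result = mysql_query("SELECT * FROM employees WHERE id = '" . $id . "'");
        $employee = mysql_fetch_assoc($result);

    If the whole row should be clickable, a small JavaScript onclick handler on each <tr> that sets window.location to the same URL gives the same effect.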

  • CSS issue on hover - shaky effect

    - by Sarika Thapaliya
    <style type="text/css">
        .linkcontainer { border-right: solid 0.2px white; margin-right: 1px; }
        .hardlink { color: #FFF !important; border: 1px solid transparent; }
        .hardlink:hover {
            background: url("/_layouts/images/bgximg.png") repeat-x -0px -489px;
            display: inline-block;
            background-color: #21374C;
            border: 0.2px solid #5badff;
            line-height: 20px;
            text-decoration: none !important;
        }
    </style>

    <div style="padding-bottom:3px; background:transparent; color:white!important; float:left; margin-right:20px; line-height:42px;">
        <span class="linkcontainer">
            <a class="hardlink" style="padding:0 10px;" href="http://hronline">HROnline</a>
        </span>
        <span class="linkcontainer">
            <a class="hardlink" style="padding:0 10px;" href="http://hronline/ec">Employee Center</a>
        </span>
        <span class="linkcontainer">
            <a class="hardlink" style="padding:0 10px;" href="http://hronline/businesscommunities">Business Communities</a>
        </span>
        <span class="linkcontainer">
            <a class="hardlink" style="padding:0 10px;" href="http://hronline/internalservices">Internal Services</a>
        </span>
        <span class="linkcontainer">
            <a class="hardlink" style="padding:0 10px;" href="http://hronline/policiesprocedures">Policies&procedures</a>
        </span>
        <span class="linkcontainer">
            <a class="hardlink" style="padding:0 10px;" href="http://hronline/qualitybestpractices">Best Practices</a>
        </span>
    </div>

    I added a right border to the spans that contain the menu links. When I hover over each menu link, it also gets a background. This is causing a jerky effect on the whole container. What is causing the shaky effect on hover? I can't seem to figure it out - again.

  • Resource id #45 [on hold]

    - by user2916506
    What is this error? I am trying to connect from PHP to the Traffic Live MySQL service and do a POST, and for some IDs I get this error. With the biggest part of the contacts my code works just fine, but it keeps hitting this trouble-maker. What can I do? Here is the trouble-making code:

        if ($sales_datemodified[$h] > $traffic_datemodified[$t]) {
            //if ($traffic_id[$t] != 22033) {
            $postrequestclient = $clients->post("staff/employee", null,
                '{"@class": "com.sohnar.trafficlite.transfer.trafficcompany.TrafficEmployeeTO",
                  "id": ' . $traffic_id[$t] . ',
                  "locationId": ' . $traffic_locationid[$t] . ',
                  "departmentId": ' . $traffic_departmentid[$t] . ',
                  "ownerCompanyId": ' . $traffic_ownercompanyid[$t] . ',
                  "userId": ' . $traffic_userid[$t] . ',
                  "userName": "' . $traffic_username[$t] . '",
                  "employeeDetails": {
                      "id": ' . $traffic_id[$t] . ',
                      "jobTitle": "' . $sales_title[$h] . '",
                      "costPerHour": {
                          "amountString": ' . $traffic_amountString[$t] . ',
                          "currencyType": "' . $traffic_currencyType[$t] . '"
                      },
                      "hoursWorkedPerDayMinutes": ' . $traffic_hwpdm[$t] . ',
                      "personalDetails": {
                          "id": ' . $traffic_persdetId[$t] . ',
                          "firstName": "' . $sales_firstname[$h] . '",
                          "middleName": "' . $traffic_middleName[$t] . '",
                          "lastName": "' . $sales_lastname[$h] . '",
                          "emailAddress": "' . $traffic_username[$t] . '",
                          "workPhone": "' . $sales_phone[$h] . '",
                          "mobilePhone": "' . $sales_mobilephone[$h] . '"
                      }
                  }
                }')->setAuth(user, password);
            $postresponseclient = $postrequestclient->send()->json();

    I don't have any errors in my code. That commented-out "if" is there to exclude the trouble-making contact on which I get this error. If I uncomment it, the program runs fine without any problems. The problem is that this test is made on a small number of contacts, and if I add more contacts to the test I get more errors of this kind. This should be a synchronization job, so excluding all the "bad" contacts won't work.
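
    One robustness note (a sketch, not from the post): building the payload with json_encode() instead of string concatenation means a name containing a quote, or an empty numeric field, cannot produce invalid JSON for one particular contact. The field names below mirror the hand-built JSON above; the setAuth() credentials are placeholders:

        <?php
        // Sketch: assemble the payload as an array and let json_encode escape it.
        $payload = array(
            '@class'         => 'com.sohnar.trafficlite.transfer.trafficcompany.TrafficEmployeeTO',
            'id'             => (int) $traffic_id[$t],
            'locationId'     => (int) $traffic_locationid[$t],
            'departmentId'   => (int) $traffic_departmentid[$t],
            'ownerCompanyId' => (int) $traffic_ownercompanyid[$t],
            'userId'         => (int) $traffic_userid[$t],
            'userName'       => $traffic_username[$t],
            'employeeDetails' => array(
                'id'       => (int) $traffic_id[$t],
                'jobTitle' => $sales_title[$h],
                'costPerHour' => array(
                    'amountString' => (float) $traffic_amountString[$t], // numeric in the original payload
                    'currencyType' => $traffic_currencyType[$t],
                ),
                'hoursWorkedPerDayMinutes' => (int) $traffic_hwpdm[$t],
                'personalDetails' => array(
                    'id'           => (int) $traffic_persdetId[$t],
                    'firstName'    => $sales_firstname[$h],
                    'middleName'   => $traffic_middleName[$t],
                    'lastName'     => $sales_lastname[$h],
                    'emailAddress' => $traffic_username[$t],
                    'workPhone'    => $sales_phone[$h],
                    'mobilePhone'  => $sales_mobilephone[$h],
                ),
            ),
        );
        $postrequestclient = $clients->post("staff/employee", null, json_encode($payload))
                                     ->setAuth('user', 'password');   // placeholders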

  • Oracle Enhances Oracle Social Cloud with Next-Generation User Experience

    - by Richard Lefebvre
    Today's enterprise must meet the technology standards of today's consumer. According to a recent IDG Enterprise report, enterprises that invest in consumerized, easy-to-use technologies experience a 56 percent increase in employee productivity and a 46 percent increase in customer satisfaction. In order to deliver that simple and intuitive experience across even the most advanced social management capabilities, Oracle today introduced Social Station, an innovative new workspace within Oracle Social Cloud's Social Relationship Management (SRM) platform. With Social Station, users benefit from a personalized and intuitive user experience that helps increase both the productivity and performance of social business practices.

    News Facts

    Oracle today introduced Social Station, an innovative new workspace within Oracle Social Cloud's Social Relationship Management (SRM) platform that helps organizations socially enable the way they do business. With an advanced yet intuitive user interface, Social Station delivers a compelling user experience that improves productivity and helps users more easily deliver on social objectives. To help users quickly and easily build out and configure their social workspaces, Social Station provides drag-and-drop capabilities that allow users to personalize their workspace with different social modules.

    With a new Custom Analytics module that mixes and matches more than 120 metrics with thousands of customizable reporting options, users can customize their view of social data and access constantly refreshed updates that support real-time understanding. One-click sharing capabilities and annotation functionality within the new Custom Analytics module also drive productivity by improving sharing and collaboration across teams, departments, and executives. Multiview layout capabilities further allow visibility into social insights by offering users the flexibility to monitor conversations by network, stream, metric, graph type, date range, and relative time period. Social Station also includes an Enhanced Calendar module that provides a clear visual representation of content, posts, networks, and views, helping users easily and efficiently understand information and toggle between various functions and views. To support different user personas and social business needs, Oracle plans to continue building out Social Station with additional modules, including content curation, influencer engagement, and command center creation.

  • How to Use Windows’ Advanced Search Features: Everything You Need to Know

    - by Chris Hoffman
    You should never have to hunt down a lost file on modern versions of Windows - just perform a quick search. You don't even have to wait for a cartoon dog to find your files, like on Windows XP. The Windows search indexer is constantly running in the background to make quick local searches possible. This enables the kind of powerful search features you'd use on Google or Bing - but for your local files.

    Controlling the Indexer

    By default, the Windows search indexer watches everything under your user folder - that's C:\Users\NAME. It reads all these files, creating an index of their names, contents, and other metadata. Whenever they change, it notices and updates its index. The index allows you to quickly find a file based on the data in the index. For example, if you want to find files that contain the word "beluga," you can perform a search for "beluga" and you'll get a very quick response as Windows looks up the word in its search index. If Windows didn't use an index, you'd have to sit and wait as Windows opened every file on your hard drive, looked to see if the file contained the word "beluga," and moved on.

    Most people shouldn't have to modify this indexing behavior. However, if you store your important files in other folders - maybe you store your important data on a separate partition or drive, such as at D:\Data - you may want to add these folders to your index. You can also choose which types of files you want to index, force Windows to rebuild the index entirely, pause the indexing process so it won't use any system resources, or move the index to another location to save space on your system drive. To open the Indexing Options window, tap the Windows key on your keyboard, type "index", and click the Indexing Options shortcut that appears. Use the Modify button to control the folders that Windows indexes or the Advanced button to control other options. To prevent Windows from indexing entirely, click the Modify button and uncheck all the included locations. You could also disable the search indexer entirely from the Programs and Features window.

    Searching for Files

    You can search for files right from your Start menu on Windows 7 or Start screen on Windows 8. Just tap the Windows key and perform a search. If you wanted to find files related to Windows, you could perform a search for "Windows." Windows would show you files that are named Windows or contain the word Windows. From here, you can just click a file to open it. On Windows 7, files are mixed with other types of search results. On Windows 8 or 8.1, you can choose to search only for files. If you want to perform a search without leaving the desktop in Windows 8.1, press Windows Key + S to open a search sidebar. You can also initiate searches directly from Windows Explorer - that's File Explorer on Windows 8. Just use the search box at the top-right of the window. Windows will search the location you've browsed to. For example, if you're looking for a file related to Windows and know it's somewhere in your Documents library, open the Documents library and search for Windows.

    Using Advanced Search Operators

    On Windows 7, you'll notice that you can add "search filters" from the search box, allowing you to search by size, date modified, file type, authors, and other metadata. On Windows 8, these options are available from the Search Tools tab on the ribbon. These filters allow you to narrow your search results. If you're a geek, you can use Windows' Advanced Query Syntax to perform advanced searches from anywhere, including the Start menu or Start screen. Want to search for "windows," but only bring up documents that don't mention Microsoft? Search for "windows -microsoft". Want to search for all pictures of penguins on your computer, whether they're PNGs, JPEGs, or any other type of picture file? Search for "penguin kind:picture". We've looked at Windows' advanced search operators before, so check out our in-depth guide for more information. The Advanced Query Syntax gives you access to options that aren't available in the graphical interface.

    Creating Saved Searches

    Windows allows you to take searches you've made and save them as a file. You can then quickly perform the search later by double-clicking the file. The file functions almost like a virtual folder that contains the files you specify. For example, let's say you wanted to create a saved search that shows you all the new files created in your indexed folders within the last week. You could perform a search for "datecreated:this week", then click the Save search button on the toolbar or ribbon. You'd have a new virtual folder you could quickly check to see your recent files.

    One of the best things about Windows search is that it's available entirely from the keyboard. Just press the Windows key, start typing the name of the file or program you want to open, and press Enter to quickly open it. Windows 8 made this much more obnoxious with its non-unified search, but unified search is finally returning with Windows 8.1.

  • Cannot Install/Start MySQL Server

    - by Peezy Bro
    Okay, I decided to migrate from MySQL Server 5.5.37 to Percona Server 5.6. I removed MySQL Server with the following:

        sudo apt-get --purge remove mysql-server mysql-server-5.5 mysql-server-core-5.5 mysql-client mysql-client-core-5.5 mysql-common
        sudo apt-get autoremove
        sudo apt-get autoclean
        rm -rf /var/lib/mysql
        rm -rf /etc/mysql

    Now here is my problem: when I try to install MySQL Server again, it goes through its process, and when it asks me for a password, it comes up with "Cannot set MySQL 'root' password". After it "installs", MySQL won't start up and I get "permission denied":

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 35 not upgraded.
        brandon@brandon-DB:~$ sudo apt-get install mysql-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          libdbd-mysql-perl libdbi-perl libmysqlclient18 libterm-readkey-perl
          mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server-5.5
          mysql-server-core-5.5
        Suggested packages:
          libmldbm-perl libnet-daemon-perl libplrpc-perl libsql-statement-perl
          tinyca mailx
        The following NEW packages will be installed:
          libdbd-mysql-perl libdbi-perl libmysqlclient18 libterm-readkey-perl
          mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server
          mysql-server-5.5 mysql-server-core-5.5
        0 upgraded, 10 newly installed, 0 to remove and 35 not upgraded.
        Need to get 0 B/8,955 kB of archives.
        After this operation, 96.3 MB of additional disk space will be used.
        Do you want to continue? [Y/n] y
        Preconfiguring packages ...
        Selecting previously unselected package mysql-common.
        (Reading database ... 167760 files and directories currently installed.)
        Preparing to unpack .../mysql-common_5.5.37-0ubuntu0.14.04.1_all.deb ...
        Unpacking mysql-common (5.5.37-0ubuntu0.14.04.1) ...
        Selecting previously unselected package libmysqlclient18:amd64.
        Preparing to unpack .../libmysqlclient18_5.5.37-0ubuntu0.14.04.1_amd64.deb ...
        Unpacking libmysqlclient18:amd64 (5.5.37-0ubuntu0.14.04.1) ...
        Selecting previously unselected package libdbi-perl.
        Preparing to unpack .../libdbi-perl_1.630-1_amd64.deb ...
        Unpacking libdbi-perl (1.630-1) ...
        Selecting previously unselected package libdbd-mysql-perl.
        Preparing to unpack .../libdbd-mysql-perl_4.025-1_amd64.deb ...
        Unpacking libdbd-mysql-perl (4.025-1) ...
        Selecting previously unselected package libterm-readkey-perl.
        Preparing to unpack .../libterm-readkey-perl_2.31-1_amd64.deb ...
        Unpacking libterm-readkey-perl (2.31-1) ...
        Selecting previously unselected package mysql-client-core-5.5.
        Preparing to unpack .../mysql-client-core-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ...
        Unpacking mysql-client-core-5.5 (5.5.37-0ubuntu0.14.04.1) ...
        Selecting previously unselected package mysql-client-5.5.
        Preparing to unpack .../mysql-client-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ...
        Unpacking mysql-client-5.5 (5.5.37-0ubuntu0.14.04.1) ...
        Selecting previously unselected package mysql-server-core-5.5.
        Preparing to unpack .../mysql-server-core-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ...
        Unpacking mysql-server-core-5.5 (5.5.37-0ubuntu0.14.04.1) ...
        Processing triggers for man-db (2.6.7.1-1) ...
        Setting up mysql-common (5.5.37-0ubuntu0.14.04.1) ...
        Selecting previously unselected package mysql-server-5.5.
        (Reading database ... 168116 files and directories currently installed.)
        Preparing to unpack .../mysql-server-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ...
        Unpacking mysql-server-5.5 (5.5.37-0ubuntu0.14.04.1) ...
        Selecting previously unselected package mysql-server.
        Preparing to unpack .../mysql-server_5.5.37-0ubuntu0.14.04.1_all.deb ...
        Unpacking mysql-server (5.5.37-0ubuntu0.14.04.1) ...
        Processing triggers for ureadahead (0.100.0-16) ...
        Processing triggers for man-db (2.6.7.1-1) ...
        Setting up libmysqlclient18:amd64 (5.5.37-0ubuntu0.14.04.1) ...
        Setting up libdbi-perl (1.630-1) ...
        Setting up libdbd-mysql-perl (4.025-1) ...
        Setting up libterm-readkey-perl (2.31-1) ...
        Setting up mysql-client-core-5.5 (5.5.37-0ubuntu0.14.04.1) ...
        Setting up mysql-client-5.5 (5.5.37-0ubuntu0.14.04.1) ...
        Setting up mysql-server-core-5.5 (5.5.37-0ubuntu0.14.04.1) ...
        Setting up mysql-server-5.5 (5.5.37-0ubuntu0.14.04.1) ...
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing package mysql-server-5.5 (--configure):
          subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
          mysql-server depends on mysql-server-5.5; however:
          Package mysql-server-5.5 is not configured yet.
        dpkg: error processing package mysql-server (--configure):
          dependency problems - leaving unconfigured
        Processing triggers for libc-bin (2.19-0ubuntu6) ...
        No apport report written because the error message indicates its a followup error from a previous failure.
        Processing triggers for ureadahead (0.100.0-16) ...
        Errors were encountered while processing:
          mysql-server-5.5
          mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have all my databases/tables dumped and on a separate HDD. This is a dev machine, not my main production machine. I also backed up the MySQL config and MySQL data.

    Read the article

  • Bye Bye Year of the Dragon, Hello BPM

    - by Ajay Khanna
    As 2012 fades and we usher in a new year, let's look back at some of the hottest BPM trends and those we'll be seeing more of in the coming months. BPM is as much about people as it is about technology. As people adopt new ways of engagement, new channels of communication and new devices to interact, the changes are reflected in BPM practices. As social and mobile have become an integral part of our personal and professional lives, we'll see tighter integration of social and mobile with BPM, and more use cases emerging for smarter process management in 2013. And with products and services becoming less differentiated, organizations will strive to differentiate on customer experience. Concepts like Pace Layered Architecture and Dynamic Case Management will provide more flexibility and agility to IT groups and knowledge workers. Take a look at some of these capabilities we showcased (see video) at Oracle OpenWorld 2012. Some of the trends that will continue to gain momentum in 2013:

    Social: Social networks and social media have provided a new way for businesses to engage with customers. A prospect is likely to reach out to their social network before making any purchase. Companies are increasingly engaging with customers in social networks to influence their purchasing decisions, as well as listening to customers via tools like sentiment analysis to see what customers think about a particular product or process. These insights are valuable as companies look to improve their processes. Inside organizations, workers are using social tools to engage with each other to design new products and processes. Social collaboration tools are being used to resolve issues where an employee needs consultation to reach a decision. Oracle BPM Suite includes social interaction as an integral part of its process design and work management to empower today's business users.

    Mobile: Ubiquitous smart mobile devices are trending as a tool of choice for many workers. Many companies are adopting a "Bring Your Own Device" policy, and the device of choice is a tablet. Devices like smartphones and tablets not only provide mobility to workers and customers, they also provide an additional, important piece of information: context. By integrating the mobile context (location, photos, and preferences) into your processes, organizations can make much more informed decisions, as well as offer more personalized service to customers. Using Oracle ADF Mobile, you can easily create user interfaces for mobile devices and also capture location data for process execution.

    Customer experience: Customer experience was at the forefront of trending topics in 2012. Organizations are trying to understand their customers better and offer them more personalized and differentiated services. Customer experience is paramount when companies design sales and support processes. Companies are looking to BPM to consistently and efficiently orchestrate customer-facing processes across disparate systems, departments and channels of communication. Oracle BPM Suite provides just the right capabilities for organizations to design and deliver an excellent customer experience.

    Pace Layered Architecture: This strategy is gaining traction as a way to maximize agility and minimize disruption in organizations. It provides a framework to manage the evolution of your information system when different pieces of it are changing at different rates and need to be updated independently of one another. Oracle Fusion Middleware and Oracle BPM Suite are designed with this in mind: the database layer, integration layer, application layer, and process layer should not be required to change at the same time. Most business changes to policy or process can be made at the process layer without disrupting the whole infrastructure. By understanding the type of change needed at a particular level, organizations can become much more agile and efficient.

    Adaptive Case Management: This approach offers more flexibility to manage processes or cases that do not follow a structured process flow. In such situations, the knowledge worker managing the case needs to evaluate what step should occur next, because the sequence of steps can't be predetermined. Another characteristic is that it requires much more collaboration than a straight-through process. As simple processes become automated and customers adopt more and more self-service, the cases that reach case workers are much more complex and need more investigation. Oracle BPM Suite includes comprehensive adaptive case management capabilities to manage such unstructured and complex processes.

    Smart BPM: Making your BPM intelligent has been the holy grail for BPM practitioners, who imagined that one day BPM would become one with Business Intelligence, Business Activity Monitoring and Complex Event Processing, making it much more responsive and helpful in organizational decision making. In 2013, organizations will begin to deploy these intelligent BPM solutions. Oracle offers an integrated solution that brings together the powerful functionality of BI, BAM, event processing, and Real-Time Decisions to help organizations create smart process-based solutions.

    Finally, in order to help customers reach their BPM goals faster and remove risks associated with BPM initiatives, Oracle has introduced Oracle Process Accelerators: pre-built best-practice applications built on Oracle BPM Suite that are fully production grade and ready to deploy. These are exciting times for BPM practitioners, and there is so much to look forward to in 2013. We wish you a very happy and prosperous New Year 2013. Happy BPMing!

    Read the article

  • Forcing an External Activation with Service Broker

    - by Davide Mauri
    In these last days I've been working quite a lot with Service Broker, a technology I'm really happy to work with, since it can give a lot of satisfaction. The scale-out solution one can easily build is simply astonishing. I'm helping a company to build a very scalable – and yet almost inexpensive – invoicing system that has to be able to scale out using commodity hardware. To offload the work from the main server to satellite "compute nodes" (yes, I've borrowed this term from PDW) we're using Service Broker and the External Activator application available in the SQL Server Feature Pack.

    For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code. http://msdn.microsoft.com/en-us/library/ms171617.aspx (Look for "Event-Based Activation")

    In order to make life even easier, Microsoft released the External Activator application, which saves you from writing even this code. http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/

    The External Activator application can be configured to execute your own application so that each time a message – an invoice in my case – arrives in the target queue, the invoking application is executed and the invoice is calculated. A very nice feature of the External Activator is that it can automatically execute as many instances of the configured application as needed, in order to process as many messages as your system can handle. This also makes it a lot easier to create a scale-out solution, leaving the developer with only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker, since everything can be encapsulated in stored procedures; for them, developing such a scale-out asynchronous solution is not much more complex than just executing a bunch of stored procedures.

    Now, if everything works correctly, you don't have to bother about anything else. You put messages in the queue, and your application, invoked by the External Activator, processes them. But what happens if, for some reason, your application fails to process the messages – for example, it crashes? The message is safe in the queue, so you just need to process it again. But your application is invoked by the External Activator, so now the question is: how do you wake up that app? Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message is a condition that can fire the activation process)?
    The "trick" is to do manually what the activation process does: send a system message to the queue in charge of handling External Activation messages:

    declare @conversationHandle uniqueidentifier;
    declare @n xml = N'
    <EVENT_INSTANCE>
      <EventType>QUEUE_ACTIVATION</EventType>
      <PostTime>' + CONVERT(CHAR(24), GETDATE(), 126) + '</PostTime>
      <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
      <ServerName>[your_server_name]</ServerName>
      <LoginName>[your_login_name]</LoginName>
      <UserName>[your_user_name]</UserName>
      <DatabaseName>[your_database_name]</DatabaseName>
      <SchemaName>[your_queue_schema_name]</SchemaName>
      <ObjectName>[your_queue_name]</ObjectName>
      <ObjectType>QUEUE</ObjectType>
    </EVENT_INSTANCE>';

    begin dialog conversation @conversationHandle
        from service [<your_initiator_service_name>]
        to service '<your_event_notification_service>'
        on contract [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
        with encryption = off, lifetime = 6000;

    send on conversation @conversationHandle
        message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

    end conversation @conversationHandle;

    That's it! Put the code in a stored procedure, and you can add a button to your application that says "Force Queue Processing" (or something similar) in order to start the activation process whenever you need it (which should not happen too frequently, but it may happen).

    PS: I know that the "fire-and-forget" technique (ending the conversation without waiting for an answer) is not a best practice, but in this case I don't see how it can hurt, so I decided to stay very close to the KISS principle.
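    As a concrete illustration of that stored-procedure suggestion, here is one way the snippet might be wrapped behind such a button. This is a sketch, not code from the original post: the procedure name dbo.ForceQueueActivation and all server, database, queue and service names below are placeholders to substitute with your own.

    -- Sketch only: wraps the manual-activation message in a procedure the
    -- application can call on demand. All object names below are placeholders.
    create procedure dbo.ForceQueueActivation
    as
    begin
        set nocount on;

        declare @conversationHandle uniqueidentifier;
        declare @n xml = N'
    <EVENT_INSTANCE>
      <EventType>QUEUE_ACTIVATION</EventType>
      <PostTime>' + CONVERT(CHAR(24), GETDATE(), 126) + '</PostTime>
      <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
      <ServerName>MYSERVER</ServerName>
      <LoginName>MYDOMAIN\AppLogin</LoginName>
      <UserName>dbo</UserName>
      <DatabaseName>InvoiceDB</DatabaseName>
      <SchemaName>dbo</SchemaName>
      <ObjectName>InvoiceQueue</ObjectName>
      <ObjectType>QUEUE</ObjectType>
    </EVENT_INSTANCE>';

        begin dialog conversation @conversationHandle
            from service [InvoiceInitiatorService]
            to service 'ExternalActivatorService'
            on contract [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
            with encryption = off, lifetime = 6000;

        send on conversation @conversationHandle
            message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

        -- Fire-and-forget, as in the post: end immediately, without a reply.
        end conversation @conversationHandle;
    end

    The application's button handler then only needs to run: exec dbo.ForceQueueActivation;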

    Read the article

  • Updating the managed debugging API for .NET v4

    - by Brian Donahue
    In any successful investigation, the right tools play a big part in collecting evidence about the state of the "crime scene" as it was before the detectives arrived. Unfortunately for the Crash Scene Investigator, we don't have the budget to fly out to the customer's site, chalk the outline, and eat their doughnuts. We have to rely on the end-user to collect the evidence for us, which means giving them the fingerprint dust and the evidence baggies and leaving them to it.

    With that in mind, the Red Gate support team have been writing tools that can collect vital clues with a minimum of fuss. Years ago we would have asked for a memory dump: we used to get the customer to run CDB.exe and produce dumps that we could analyze in-house, but those dumps were pretty unwieldy (500 MB files), and the debugger often didn't dump exactly where we wanted, or made five or more dumps. What we wanted was just the minimum state information from the program at the time of failure, so we produced a managed debugger that captured every first- and second-chance exception and logged the stack and a minimal amount of variables from the memory of the application, which could all be exported as XML. This caused less inconvenience to the end-user, because it is much easier to send a 65 KB XML file in an email than a 500 MB file containing all of the application's memory. We don't need to have the entire victim shipped out to us when we just want to know what was under the fingernails.

    The thing that made creating a managed debugging tool possible was the MDbg Engine example written by Microsoft as part of the Debugging Tools for Windows distribution. Since the ICorDebug interface is a bit difficult to understand, they had kindly created some wrappers that provided an event-driven debugging model that was perfect for our needs. But .NET 4 applications under debugging started complaining that "The debugger's protocol is incompatible with the debuggee": the introduction of .NET Framework v4 had changed the managed debugging API significantly, without an update for the MDbg Engine code!

    After a few hours of research, I finally worked out that most of the version 4 ICorDebug interface still works much the same way in "legacy" v2 mode, and there is a relatively easy fix in that you can still get a reference to the legacy ICorDebug by changing the way the interface is created. In .NET v2, the interface was acquired using the CreateDebuggingInterfaceFromVersion method in mscoree.dll. In v4, you must first create an ICLRMetaHost, enumerate the runtimes, get an ICLRRuntimeInfo interface to the .NET 4 runtime from that, and use its GetInterface method to return a "legacy" ICorDebug interface. The rest of the MDbg Engine will continue working the old way. Here is how I changed the MDbg Engine code to support .NET v4:

    private void InitFromVersion(string debuggerVersion)
    {
        if (debuggerVersion.StartsWith("v1"))
        {
            throw new ArgumentException("Can't debug a version 1 CLR process (\"" + debuggerVersion +
                "\"). Run application in a version 2 CLR, or use a version 1 debugger instead.");
        }

        ICorDebug rawDebuggingAPI = null;
        if (debuggerVersion.StartsWith("v4"))
        {
            Guid CLSID_MetaHost = new Guid("9280188D-0E8E-4867-B30C-7FA83884E8DE");
            Guid IID_MetaHost = new Guid("D332DB9E-B9B3-4125-8207-A14884F53216");
            ICLRMetaHost metahost = (ICLRMetaHost)NativeMethods.ClrCreateInterface(CLSID_MetaHost, IID_MetaHost);
            IEnumUnknown runtimes = metahost.EnumerateInstalledRuntimes();
            ICLRRuntimeInfo runtime = GetRuntime(runtimes, debuggerVersion);

            // Defined in metahost.h
            Guid CLSID_CLRDebuggingLegacy = new Guid(0xDF8395B5, 0xA4BA, 0x450b, 0xA7, 0x7C, 0xA9, 0xA4, 0x77, 0x62, 0xC5, 0x20);
            Guid IID_ICorDebug = new Guid("3D6F5F61-7538-11D3-8D5B-00104B35E7EF");

            Object res;
            runtime.GetInterface(ref CLSID_CLRDebuggingLegacy, ref IID_ICorDebug, out res);
            rawDebuggingAPI = (ICorDebug)res;
        }
        else
        {
            rawDebuggingAPI = NativeMethods.CreateDebuggingInterfaceFromVersion(
                (int)CorDebuggerVersion.Whidbey, debuggerVersion);
        }

        if (rawDebuggingAPI != null)
            InitFromICorDebug(rawDebuggingAPI);
        else
            throw new ArgumentException("Support for debugging version " + debuggerVersion + " is not yet implemented");
    }

    The changes above will ensure that the debugger can support .NET Framework v2 and v4 applications with the same codebase, but we do compile two different applications: one targeting v2 and the other v4. As a footnote, I should add that some missing native methods and wrappers, along with the EnumerateRuntimes method code, came from the Mindbg project on CodePlex.

    Another change is that when using MDbgEngine.CreateProcess to launch a process in the debugger, you should no longer supply null as the final argument. This does not work any more, because GetCORVersion always returns "v2.0.50727" (the function has been deprecated in .NET v4). What's worse, on a system with only .NET 4, the user will be prompted to download and install .NET v2! Not nice! This works much better:

    proc = m_Debugger.CreateProcess(ProcessName, ProcessArgs, DebugModeFlag.Default,
        String.Format("v{0}.{1}.{2}",
            System.Environment.Version.Major,
            System.Environment.Version.Minor,
            System.Environment.Version.Build));

    Microsoft "unofficially" plans on updating the MDbg samples soon, but if you have an MDbg-based application, you can get it working right now by changing one method a bit and adding a few new interfaces (ICLRMetaHost, IEnumUnknown, and ICLRRuntimeInfo). The new, non-legacy implementation of the MDbg Engine will add interesting features like dump-file support and, by association, I assume garbage-collection/managed-object stats, so it will be well worth looking into if you want to extend the functionality of a managed debugger going forward.
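    The GetRuntime helper called from InitFromVersion is not shown in the post (it comes from the Mindbg interop declarations mentioned above). The sketch below is a plausible reconstruction, not the original code: it assumes Mindbg-style managed declarations of IEnumUnknown (with a PreserveSig-style Next returning an HRESULT) and ICLRRuntimeInfo, whose exact interop signatures may differ in your project.

    // Hypothetical sketch of the GetRuntime helper: walks the installed
    // runtimes and returns the one whose version string matches the requested
    // debugger version. Assumes Mindbg-style interop declarations and
    // using System; using System.Text; at the top of the file.
    private static ICLRRuntimeInfo GetRuntime(IEnumUnknown runtimes, string debuggerVersion)
    {
        object[] fetchedRuntime = new object[1];
        uint fetched;
        while (runtimes.Next(1, fetchedRuntime, out fetched) == 0 && fetched == 1)
        {
            ICLRRuntimeInfo runtimeInfo = (ICLRRuntimeInfo)fetchedRuntime[0];
            StringBuilder version = new StringBuilder(256);
            uint length = (uint)version.Capacity;
            runtimeInfo.GetVersionString(version, ref length);
            // Match on the major version prefix: both "v4.0.30319" and "v4" start with "v4"
            if (version.ToString().StartsWith(debuggerVersion.Substring(0, 2)))
                return runtimeInfo;
        }
        throw new ArgumentException("No installed CLR matches version " + debuggerVersion);
    }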

    Read the article

  • SQLAuthority News – Pluralsight Course Review – Practices for Software Startups – Part 2 of 2

    - by pinaldave
    This is the second part of the two-part series on the Practices for Software Startups Pluralsight course. Please read the first part of this series over here. The course is written by Stephen Forte (Blog | Twitter). Stephen Forte is the Chief Strategy Officer of the venture-backed company Telerik.

    Personal Learning Schedule

    After these three sessions it was 6:30 am and time to do my own blog. But for the rest of the day, I kept thinking about the course and wanted to go back and finish. I was wishing that I had woken up at 3 am so I could have finished it all in one go. All day long I was digesting what I had learned. At 10 pm, after my daughter had gone to bed, I signed on again. I was not disappointed by the long wait. As I mentioned before, Stephen has started four to six companies, and all of them are very successful today. Here is the video I promised yesterday – it discusses the importance of Right Sizing Your Startup.

    The Heartbeat of Startup – Technology

    Stephen has combined all the technology knowledge into one 30-minute session. He discussed how to start your project, how to deal with opinions, and how to deal with multiple ideas – every startup has multiple directions it can go. He spent a lot of time on deciding which direction to go and how to decide which will be best for you. He called it a continuous development cycle. One of the biggest hazards for a startup company is one person deciding the direction the company will go, until down the road another team member announces that there is a glitch in their part of the work and that everyone will have to start over. Even though a team of two or five people can move quickly, often the decision has gone on too long and cannot be easily fixed. Stephen used an example from his own life: he was biased toward one type of technology, and his teammate toward another. They opted for his teammate's choice, and in the end it was a good decision, even though he was unfamiliar with that particular program. He argues that technology should not be a barrier to progress, and that you cannot rely on your experience only. This really spoke to me, because I am a big fan of SQL, but I know there is more out there, and I should be more open to it. I give my thanks to Stephen – I learned something in this module besides startups.

    Money, Success and Epic Win!

    The longest, but most interesting, module was funding your startup. You need to fund the startup right at the very beginning; if it is not done right, you will run into trouble. The good news is that a few years ago startups required a lot more money – think millions of dollars – but now startups can get off the ground for thousands. Stephen used an example of a company that years ago would have needed a million dollars, but today could be started for $600. It is true that things have changed, but you still need money. For $600 you can start small and add dynamically, as needed. But the truth is that whether you have $600, $6,000, or $6 million, it will be spent. Don't think of it as trying to save money; think of it as investing in your future. You will need money, and you will need to (quickly) decide what to do with it: shares, stakeholders, investing in a team, hiring a CEO. This is so important because once you have money and start the company, the company IS your money. It is your biggest currency – having a percentage of ownership in the company. Investors will want percentages as repayment for their investment, and they will want a say in the business as well. You will have to decide how far you will dilute your shares, and how the company will be divided, if at all. If you don't plan in advance, you will find that after gaining three or four investors, suddenly you are the minority owner in your own dream. You need to understand funding carefully. This single module alone is worth all the money you would have spent on the whole course. I encourage everyone to listen to this module even if they don't watch any of the others.

    Press End to Start the Game – Exits!

    The final module is exit strategies. You did all this work and dealt with all the political and legal issues – what are you going to get out of it? The answer is simple: money. Maybe you want your company to be bought out, for your talent to bring you a profit. You can sell the company to someone and still head it. You could sell and still work as an employee but no longer own the company. Many exit strategies are available. This is where all your hard work comes into play. It is important not to feel fooled at any step. So many good ideas end up in the garbage because of poor planning; if you find yourself successful, you don't want to blow it at this step! The exit is important. I thought that this aspect of the course was completely unique, and I loved Stephen's point of view. I was lost deep in thought after this module ended. I actually took two hours' worth of notes on this section alone – and it was only a three-hour course. I am planning on attending this course one more time next week, just to catch up on all the small bits of wisdom I'm sure I missed.

    Thank you, Stephen, for sharing your real-world experience with us! I recommend that everyone attend this course, even if they don't want to begin their own startup company. It was indeed a long day for me. Do not forget to read part 1 of this story and attend the Practices for Software Startups Pluralsight course.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

< Previous Page | 108 109 110 111 112 113 114 115 116 117 118 119  | Next Page >