Search Results

Search found 4519 results on 181 pages for 'feed the fire'.


  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: · one running the replication master · one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` (    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,    `k` int(10) unsigned NOT NULL DEFAULT '0',    `c` char(120) NOT NULL DEFAULT '',    `pad` char(60) NOT NULL DEFAULT '',    PRIMARY KEY (`id`),    KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. 
    Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory was replaced with the previous backup · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes it 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
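    To make the test methodology above concrete, here is a minimal Python sketch of the benchmark clock (not from the original post); it assumes the MySQL Connector/Python driver, and the host, credentials, wait interval and binlog coordinates are placeholders to be adapted to the environment described:

        import time
        import mysql.connector  # assumes MySQL Connector/Python is installed

        def run_benchmark(workers, binlog_file, binlog_pos, total_queries=100000):
            # Connect to the slave; its threads are assumed stopped at this point (as in the reset step)
            slave = mysql.connector.connect(host="slave-host", user="root", password="secret")
            cur = slave.cursor()
            cur.execute("SET GLOBAL slave_parallel_workers = %d" % int(workers))

            # Start the IO thread and let the relay log fill before timing begins
            cur.execute("START SLAVE IO_THREAD")
            time.sleep(60)  # placeholder; the real test polled until the relay log was complete

            # Benchmark clock: time only the SQL (applier) thread
            start = time.time()
            cur.execute("START SLAVE SQL_THREAD")
            cur.execute("SELECT MASTER_POS_WAIT(%s, %s)", (binlog_file, binlog_pos))
            cur.fetchone()
            duration = time.time() - start

            return total_queries / duration  # the QPS metric reported in the post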
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver. However, without the sender also imposing flow control, TCP facing a slow link to a peer will simply fill up its window with sent segments. With multiple TCP implementations filling their peer TCP's receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need to have some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also. Slow Start and Congestion Avoidance From RFC 2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission." Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold (ssthresh) value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements. We send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments, etc. When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip time as opposed to each ACK, and so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and ssthresh is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single max segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh. 
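    As a rough illustration of the two growth modes described above (my own sketch, not part of the original post), the following small Python model tracks cwnd growth per round trip, assuming every segment in flight is acknowledged within each RTT:

        MSS = 1460            # bytes; a typical Ethernet-sized maximum segment
        SSTHRESH = 16 * MSS   # example slow start threshold

        def cwnd_after(rtts, cwnd=2 * MSS):
            # Model cwnd growth per RTT: exponential below ssthresh, linear above it
            for _ in range(rtts):
                if cwnd < SSTHRESH:
                    cwnd *= 2      # slow start: +1 MSS per ACK, roughly doubling each RTT
                else:
                    cwnd += MSS    # congestion avoidance: ~1 MSS per RTT
            return cwnd

        for rtt in range(8):
            print(rtt, cwnd_after(rtt))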
    Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since it often comes from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out-of-order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. This is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would therefore be excessive, so fast recovery is used instead. Observing slow start and congestion avoidance The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd > ssthresh). #!/usr/sbin/dtrace -s #pragma D option quiet tcp:::connect-request / start[args[1]->cs_cid] == 0 / { start[args[1]->cs_cid] = 1; } tcp:::send / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh / { @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } tcp:::send / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd > args[3]->tcps_cwnd_ssthresh / { @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } As we can see, the script only works on connections initiated after it is started, using the start[] associative array (with the connection ID as index) to record that it is a new connection (start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd > ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system. # dtrace -s tcp_slow_start.d ^C ALGORITHM RADDR RPORT #SEG Slow start 10.153.125.222 20 6 Slow start 138.3.237.7 80 14 Slow start 10.153.125.222 21 18 Congestion avoidance 10.153.125.222 20 1164 We see that in the case of the YouTube video, slow start was exclusively used. Most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In the case of the FTP session, the FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session on port 21 relied exclusively on slow start. For the default congestion control algorithm - "newreno" - on Solaris 11, slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently. For example "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than the MSS. And to finish, here's a neat one-liner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range. 
    # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }'
    dtrace: description 'tcp:::send ' matched 10 probes
    ^C

            80
               value  ------------- Distribution ------------- count
                  -1 |                                         0
                   0 |@@@@@@                                   5
                   1 |                                         0
                   2 |                                         0
                   4 |                                         0
                   8 |                                         0
                  16 |                                         0
                  32 |                                         0
                  64 |                                         0
                 128 |                                         0
                 256 |                                         0
                 512 |                                         0
                1024 |                                         0
                2048 |@@@@@@@@@                                8
                4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@              23
                8192 |                                         0

    Read the article

  • Reviewing Retail Predictions for 2011

    - by David Dorf
    I've been busy thinking about what 2012 and beyond will look like for retail, and I have some interesting predictions to share.  But before I go there, let's first review this year's predictions before making new ones for 2012. 1. Alternate Payments We've seen several alternate payment schemes emerge over the last two years, and 2011 may be the year one of them takes hold. Any competition that can drive down fees will be good for everyone. I'm betting that Apple will add NFC chips to their next version of the iPhone, then enable payments in stores using iTunes accounts on the backend. PayPal will continue to make inroads, and Isis will announce a pilot. The iPhone 4S did not contain an NFC chip, so we'll have to continue waiting for the iPhone 5. PayPal announced it's moving into in-store payments, and Google launched its wallet in selected cities.  Overall I think the payment scene is heating up and that trend will continue. 2. Engineered Systems The industry is moving toward purpose-built appliances that are optimized across the entire stack. Oracle calls these "engineered systems" and the first two examples are Exadata and Exalogic, but there are other examples from other vendors. These are particularly important to the retail industry because of the volume of data that must be processed. There should be continued adoption in 2011. Oracle reports that Exadata is its fastest growing product, and at the recent OpenWorld it announced the SuperCluster and Exalytics products, both continuing the engineered systems trend. SAP's HANA continues to receive attention, and IBM also seems to be moving in this direction. 3. Social Analytics There are lots of tools that provide insight into how a brand is perceived across popular internet sites, but as far as I know, these tools are not industry specific. The next step needs to mine the data and determine how it should influence retail operations. The data needs to help retailers determine how they create promotions, which products to stock, and how to keep consumers engaged. Social data alone does not provide the answers, but it's one more data point that will help retailers make better decisions. Look for some vendor consolidation to help make this happen. In March, Salesforce.com acquired leading social monitoring vendor Radian6 and followed up with acquisitions of Heroku and Model Metrics. The notion of Social CRM seems to be going more mainstream now. 4. 2-D Barcodes Look for more QRCodes on shelf-tags, in newspaper circulars, and on billboards. It's a great portal from the physical world into the digital one that buys us time until augmented reality matures further. Nobody wants to type "www", backslash, and ".com" on their phones. QRCodes are everywhere. 'Nuff said. 5. 
    In the words of Microsoft, "To the Cloud!" My favorite "cloud application" is Evernote. If you take notes on your work laptop, you will inevitably need those notes on your home PC. And if you manage to solve that problem, you'll need to access them from your mobile phone. Evernote stores your notes in the cloud and provides easy ways to access them. Being able to access a service from anywhere and not having to worry about backups, upgrades, etc. is great. Retailers will start to rely on cloud services, both public and private, in the coming year. There was no shortage of announcements in this area: Amazon's cloud-based Kindle Fire, Apple's iCloud, Oracle's Public Cloud, etc. I saw an interesting presentation showing how BevMo moved their systems to the cloud.  Seems like retailers are starting to consider the cloud for specific uses. 6. F-Commerce Move over "E" and "M" so we can introduce "F-Commerce," which should go mainstream in 2011. Already several retailers have created small stores on Facebook, and it won't be long before Facebook becomes a full-fledged channel in the omni-channel world of retail. The battle between Facebook and Google will heat up over retail, where both stand to make lots of money. JCPenney and ASOS both put their entire catalogs on Facebook, and lots of other retailers have connected Facebook to their e-commerce site. I still think selling from the newsfeed is the best approach, and several retailers are trying that approach as well. I just don't see Google+ as a threat to Facebook, so I think that battle is over.  I called 2011 The Year of F-Commerce, and that was probably accurate. It's good to look back at predictions, but we also have to think about what was missed.  I didn't see Amazon entering the tablet business with such a splash, although in hindsight it was obvious. Nor did I think HP would fall so far so fast.  Look for my 2012 predictions coming soon.

    Read the article

  • Conducting Effective Web Meetings

    - by BuckWoody
    There are several forms of corporate communication. From immediate, rich communications like phones and IM messaging to historical transactions like e-mail, there are a lot of ways to get information to one or more people. From time to time, it's even useful to have a meeting. (This is where a witty picture of a guy sleeping in a meeting goes. I won't bother actually putting one here; you're already envisioning it in your mind) Most meetings are pointless, and a complete waste of time. This is the fault, completely and solely, of the organizer. It's because he or she hasn't thought things through enough to think about alternate forms of information passing. Here's the criterion for a good meeting - whether in-person or over the web: 100% of the content of a meeting should require the participation of 100% of the attendees for 100% of the time It doesn't get any simpler than that. If it doesn't meet that criterion, then don't invite that person to that meeting. If you're just conveying information and no one has the need for immediate interaction with that information (like telling you something that modifies the message), then send an e-mail. If you're a manager, and you need to get status from lots of people, pick up the phone. If you need a quick answer, use IM. I once had a high-level manager who called frequent meetings. His real need was status updates on various processes, so 50 of us would sit in a room while he asked each one of us questions. He believed this larger meeting helped us "cross pollinate ideas". In fact, it was a complete waste of time for most everyone, except in the one or two moments that they interacted with him. So I wrote some code for a Palm Pilot (which was a kind of SmartPhone but with no phone and no real graphics, but this was in the days when we had just discovered fire and the wheel, although the order of those things is still in debate) that took an average of the salaries of the people in the room (I guessed at it) and ran a timer which multiplied the number of people against the salaries. I left that running in plain sight for him, and when he asked about it, I explained how much the meetings were really costing the company. We had far fewer meetings after. Meetings are now web-enabled. I believe that's largely a good thing, since it saves on travel time and allows more people to participate, but I think the rule above still holds. And in fact, there are some other rules that you should follow to have a great meeting - and fewer of them. Be Clear About the Goal This is important in any meeting, but all of us have probably gotten an invite with a web link and an ambiguous title. Then you get to the meeting, and it's a 500-level deep-dive on something everyone expects you to know. This is unfair to the "expert" and to the participants. I always tell people who invite me to a meeting that I will be as detailed as I can - but the more detail they can tell me about the questions, the more detailed I can be in my responses. Granted, there are times when you don't know what you don't know, but the more you can say about the topic the better. There's another point here - and it's that you should have a clearly defined "win" for the meeting. When the meeting is over, and everyone goes back to work, what were you expecting them to do with the information? Have that clearly defined in your head, and in the meeting invite. Understand the Technology There are several web-meeting clients out there. I use them all, since I meet with clients all over the world. 
    They all work differently - so I take a few moments and read up on the different clients and find out how I can use the tools properly. I do this with the technology I use for everything else, and it's important to understand it if the meeting is to be a success. If you're running the meeting, know the tools. I don't care if you like the tools or not, learn them anyway. Don't waste everyone else's time just because you're too bitter/snarky/lazy to spend a few minutes reading. Check your phone or mic. Check your video size. Install (and learn to use) ZoomIT (http://technet.microsoft.com/en-us/sysinternals/bb897434.aspx). Format your slides or screen or output correctly. Learn to use the voting features of the meeting software, and especially its whiteboard features. Figure out how multiple monitors work. Try a quick meeting with someone to test all this. Do this *before* you invite lots of other people to your meeting.   Use a WebCam I'm not a pretty man. I have a face fit for radio. But after attending a meeting with clients where one Microsoft person used a webcam and another did not, I'm convinced that people pay more attention when a face is involved. There are tons of studies around this, or you can take my word for it, but toss a shirt on over those pajamas and turn the webcam on. Set Up Early Whether you're attending or leading the meeting, don't wait to sign on to the meeting at the time when it starts. I can almost plan that a 10:00 meeting will actually start at 10:10 because the participants/leader is just now installing the web client for the meeting at 10:00. Sign on early, go on mute, and then wait for everyone to arrive. Mute When Not Talking No one wants to hear your screaming offspring / yappy dog / other cubicle conversations / car wind noise (are you driving in a desert storm or something?) while the person leading the meeting is trying to talk. I use the Lync software from Microsoft for my meetings, and I mute everyone by default, and then tell them to un-mute to talk to the group. Share Collateral If you have a PowerPoint deck, mail it out in case you have a tech failure. If you have a document, share it as an attachment to the meeting. Don't make people ask you for the information - that's why you're there to begin with. Even better, send it out early. "But", you say, "then no one will come to the meeting if they have the deck first!" Uhm, then don't have a meeting. Send out the deck and a quick e-mail and let everyone get on with their productive day. Set Actions At the Meeting A meeting should have some sort of outcome (see point one). That means there are actions to take, a follow up, or some deliverable. Otherwise, it's an e-mail. At the meeting, decide who will do what, when things are needed, and so on. And avoid, if at all possible, setting up another meeting, unless absolutely necessary. So there you have it. Whether it's on-premises or on the web, meetings are a necessary evil, and should be treated that way. Like politicians, you should have as few of them as are necessary to keep the roads paved and public libraries open.

    Read the article

  • Attaching a Command to the WP7 Application Bar.

    - by mbcrump
    One of the biggest problems that I've seen with people creating WP7 applications is how to bind the application bar to a RelayCommand. If you're using MVVM then this is particularly important. Let's examine the code that one might add to start with.  <phone:PhoneApplicationPage.ApplicationBar> <shell:ApplicationBar IsVisible="True" IsMenuEnabled="True"> <shell:ApplicationBarIconButton x:Name="appbar_button1" IconUri="/icons/appbar.questionmark.rest.png" Text="About"> <i:Interaction.Triggers> <i:EventTrigger EventName="Click"> <GalaSoft_MvvmLight_Command:EventToCommand Command="{Binding DisplayAbout, Mode=OneWay}" /> </i:EventTrigger> </i:Interaction.Triggers> </shell:ApplicationBarIconButton> <shell:ApplicationBar.MenuItems> <shell:ApplicationBarMenuItem x:Name="menuItem1" Text="MenuItem 1"></shell:ApplicationBarMenuItem> <shell:ApplicationBarMenuItem x:Name="menuItem2" Text="MenuItem 2"></shell:ApplicationBarMenuItem> </shell:ApplicationBar.MenuItems> </shell:ApplicationBar> </phone:PhoneApplicationPage.ApplicationBar> Everything looks right. But we quickly notice that we have a squiggly line under our Interaction.Triggers. The problem is that the object is not a FrameworkElement. This same code would have worked perfectly if this were a normal button. OK. Point has been proved. Let's make the ApplicationBar support Commands. So, go ahead and create a new project using MVVM Light. If you want to check out the source and work alongside this tutorial then click here.  7 Easy Steps to have binding on the Application Bar using MVVM Light (I might add that you don't have to use MVVM Light to get this functionality, I just prefer it.) 1) Download MVVM Light if you don't already have it and install the project templates. It is available at http://mvvmlight.codeplex.com/. 2) Click File-New Project and navigate to Silverlight for Windows Phone. Make sure you use the MVVM Light (WP7) Template. 3) Now that we have our project set up and ready to go, let's download a wrapper created by Nicolas Humann here, called Phone7.Fx. After you download it, extract it somewhere that you can find it. This wrapper will make our application bar/menu items bindable. 4) Right click References inside your WP7 project and add the .dll file to your project. 5) In your MainPage.xaml you will need to add the proper namespace to it. Don't forget to build your project afterwards. xmlns:Preview="clr-namespace:Phone7.Fx.Preview;assembly=Phone7.Fx.Preview" 6) Now you can add the BindableAppBar to your MainPage.xaml with a few lines of code.  <Preview:BindableApplicationBar x:Name="AppBar" BarOpacity="1.0" > <Preview:BindableApplicationBarIconButton Command="{Binding DisplayAbout}" IconUri="/icons/appbar.questionmark.rest.png" Text="About" /> <Preview:BindableApplicationBar.MenuItems> <Preview:BindableApplicationBarMenuItem Text="Settings" Command="{Binding InputBox}" /> </Preview:BindableApplicationBar.MenuItems> </Preview:BindableApplicationBar> So your final MainPage.xaml will look similar to this: NOTE: The AppBar will be located inside of the Grid using this wrapper.   
    <!--LayoutRoot contains the root grid where all other page content is placed--> <Grid x:Name="LayoutRoot" Background="Transparent"> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <!--TitlePanel contains the name of the application and page title--> <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="24,24,0,12"> <TextBlock x:Name="ApplicationTitle" Text="{Binding ApplicationTitle}" Style="{StaticResource PhoneTextNormalStyle}" /> <TextBlock x:Name="PageTitle" Text="{Binding PageName}" Margin="-3,-8,0,0" Style="{StaticResource PhoneTextTitle1Style}" /> </StackPanel> <!--ContentPanel - place additional content here--> <Grid x:Name="ContentGrid" Grid.Row="1"> <TextBlock Text="{Binding Welcome}" Style="{StaticResource PhoneTextNormalStyle}" HorizontalAlignment="Center" VerticalAlignment="Center" FontSize="40" /> </Grid> <Preview:BindableApplicationBar x:Name="AppBar" BarOpacity="1.0" > <Preview:BindableApplicationBarIconButton Command="{Binding DisplayAbout}" IconUri="/icons/appbar.questionmark.rest.png" Text="About" /> <Preview:BindableApplicationBar.MenuItems> <Preview:BindableApplicationBarMenuItem Text="Settings" Command="{Binding InputBox}" /> </Preview:BindableApplicationBar.MenuItems> </Preview:BindableApplicationBar> </Grid> 7) Let's go ahead and create the RelayCommands and wire them up to a MessageBox by editing our MainViewModel.cs file. public class MainViewModel : ViewModelBase { public string ApplicationTitle { get { return "MVVM LIGHT"; } } public string PageName { get { return "My page:"; } } public string Welcome { get { return "Welcome to MVVM Light"; } } public RelayCommand DisplayAbout { get; private set; } public RelayCommand InputBox { get; private set; } /// <summary> /// Initializes a new instance of the MainViewModel class. /// </summary> public MainViewModel() { if (IsInDesignMode) { // Code runs in Blend --> create design time data. } else { DisplayAbout = new RelayCommand(() => { MessageBox.Show("About box called!"); }); InputBox = new RelayCommand(() => { MessageBox.Show("settings button called"); }); } } If you run the project now you should get something similar to this (notice the AppBar at the bottom):  Now if you hit the question mark then you will get the following MessageBox: The MenuItem works as well, as shown here for Settings: As you can see, it's pretty easy to add a Command to the ApplicationBar/MenuItem. If you want to look through the full source code then click here.   Subscribe to my feed

    Read the article

  • Visual Studio 2010 Productivity Tips and Tricks – Part 1: Extensions

    - by ToStringTheory
    I don't know about you, but when it comes to development, I prefer my environment to be as free of clutter as possible.  It may surprise you to know that I have tried ReSharper, and did not like it, for the reason that I stated above.  In my opinion, it had too much clutter.  Don't get me wrong, there were a couple of features that I did like about it (inversion of if blocks, code feedback), but for the most part, I actually felt that it was slowing me down. Introduction Another large factor besides intrusiveness/speed in my choice to dislike ReSharper would probably be that I have become comfortable with my current setup and extensions.  I believe I have a good collection, and am quite happy with what I can accomplish in a short amount of time.  I figured that I would share some of my tips/findings regarding Visual Studio productivity here, and see what you had to say. The first section of things that I would like to cover is Visual Studio Extensions.  In case you have been living under a rock for the past several years, Extensions are available under the Tools menu in Visual Studio: The extension manager provides integrated access to the Microsoft Visual Studio Gallery online, with a few thousand different extensions available.  I have tried many extensions, but for reasons of lacking reliability, usability, or features, I have uninstalled almost all of them.  However, I have come across several that I find I cannot do without anymore: NuGet Package Manager (Microsoft) Perspectives (Adam Driscoll) Productivity Power Tools (Microsoft) Web Essentials (Mads Kristensen) Extensions NuGet Package Manager To be honest, I debated on whether or not to put this in here.  Most people seem to have it, however, there was a time when I didn't, and was always confused when blogs/posts would say to right click and "Add Package Reference…", which with one of the latest updates is now "Manage NuGet Packages".  So, if you haven't downloaded the NuGet Package Manager yet, or don't know what it is, I would highly suggest downloading it now! Features Simply put, the NuGet Package Manager gives you a GUI and command line to access different libraries that have been uploaded to NuGet. Some of its features include: Ability to search NuGet for packages via the GUI, with information in the detail bar on the right. Quick access to see what packages are in a solution, and what packages have updates available, with easy 1-click updating. If you download a package that requires references to other NuGet packages to work, they will be downloaded and referenced automatically. Productivity Tip If you use any type of source control in Visual Studio as well as using NuGet packages, be sure to right-click on the solution and click "Enable NuGet Package Restore". What this does is add a NuGet package to the solution so that it will be checked in alongside your solution, as well as automatically grab packages from NuGet on build if needed. This is an extremely simple system to use to manage your package references, instead of having to manually go into TFS and add the Packages folder. Perspectives I can't stand developing with just one monitor. Especially when it comes to debugging. The great thing about Visual Studio 2010 is that all of the panels and windows are floatable, and can dock to other screens. The only bad thing is, I don't use the same toolset with everything that I am doing. By this, I mean that I don't use all of the same windows for debugging a web application, as I do for coding a WPF application. 
    Only thing is, Visual Studio doesn't save the screen positions for all of the undocked windows. So, I got curious one day and decided to check and see if there was an extension to help out. This is where I found Perspectives. Features Perspectives gives you the ability to configure window positions across any of your monitors, and then to save the positions in a profile. Perspectives offers a Panel to manage different presets/favorites, and a toolbar to add to the toolbars at the top of Visual Studio. Ability to 'Favorite' a profile to add it to the perspectives toolbar. Productivity Tip Take the time to set up profiles for each of your scenarios - debugging web/winforms/xaml, coding, maintenance, etc. Try to remember to use the profiles for a few days, and at the end of a week, you may find that your productivity was never better. Productivity Power Tools Ah, the Productivity Power Tools... Quite possibly one of my most used extensions, if not my most used. The tool pack gives you a variety of enhancements ranging from key shortcuts and interface tweaks to completely new features for Visual Studio 2010. Features I don't want to bore you with all of the features here, so here are my favorites: Quick Find - Unobtrusive search box in the upper-right corner of the code window. Great for searching in general, especially in a file. Solution Navigator - The 'Solution Explorer' on steroids. Easy to search for files, see defined members/properties/methods in files, and my favorite feature is the 'set as root' option. Updated 'Add Reference...' Dialog - This is probably my favorite enhancement period... The 'Add Reference...' dialog redone in a manner that resembles the Extension/Package managers. I especially love the ability to search through all of the references. "Ctrl - Click" for Definition - I am still getting used to this as I usually try to use my keyboard for everything, but I love the ability to hold Ctrl and turn properties/methods/variables into hyperlinks that you click on to see their definitions. Great for travelling down a rabbit hole in an application to research problems. While there are other commands/utilities, I find these to be the ones that I lean on the most for their usefulness. Web Essentials If you do any type of web development in ASP .Net, ASP .Net MVC, or even HTML, I highly suggest grabbing Web Essentials right NOW! This extension alone is great for productivity in web development, and greatly decreases my development time on new features. Features Some of its best features include: CSS Previews - I say 'previews' because of the multiple kinds of previews you get in CSS: font-family, color, and background/background-image previews. This is great for just tweaking UI slightly in different ways and seeing how they look in the CSS window at a glance. Live Preview - One word - awesome! This goes well with my multi-monitor setup. I put the site on one monitor in a Live Preview panel, and then as I make changes to CSS/cshtml/aspx/html, the preview window will update with each save/build automatically. For CSS, you can even turn on live-update, so as you are tweaking CSS, the style changes in real time. Great for tweaking colors or font-sizes. Outlining - Small, but I like to be able to collapse regions/declarations that are in the way of new work, or are just distracting. Commenting Shortcuts - I don't know why it wasn't included by default, but it is nice to have the key shortcuts for commenting working in the CSS editor as well. 
Productivity Tip When working on a site, hit CTRL-ALT-ENTER to launch the Live Preview window. Dock it to another monitor. When you make changes to the document/css, just save and glance at the other monitor. No need to alt tab, then alt tab before continuing editing. Conclusion These extensions are only the most useful and least intrusive - ones that I use every day. The great thing about Visual Studio 2010 is the extensibility options that it gives developers to utilize. Have an extension that you use that isn't intrusive, but isn't listed here? Please, feel free to comment. I love trying new things, and am always looking for new additions to my toolset of the most useful. Finally, please keep an eye out for Part 2 on key shortcuts in Visual Studio. Also, if you are visiting my site (http://tostringtheory.com || http://geekswithblogs.net/tostringtheory) from an actual browser and not a feed, please let me know what you think of the new styling!

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this - my apologies for taking such a long time to write the next Part. Without further ado, here goes. This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration. So, Why Good? Why is Data Guard SYNC a good thing? Because, at the end of the day, this gives you the assurance of zero data loss - it doesn't matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That's a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware / software bugs, physical data corruptions, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, human errors such as accidental deletion of online redo log files - it doesn't matter - you can have that "Om - peace" look on your face and then you can failover to the standby system, without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not-so-imaginary conversation: IT Manager: Well, what's the status? You: John is doing the trace analysis on the storage array. IT Manager: So? How long is that gonna take? You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>. IT Manager: So, no root cause yet? You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take. IT Manager: Darn it - the site is down! You: Not really … IT Manager: What do you mean? You: John is stuck, but Sreeni has already done a failover to the Data Guard standby. IT Manager: Whoa, whoa - wait! Failover means we lost some data, why did you do this without letting the Business group know? You: We didn't lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production – we just failover. No data loss, and we are up and running in minutes. The Business guys don't need to know. IT Manager: Wow! Are we great or what!! You: I guess … Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, "TANSTAAFL", or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things, if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA. How Far Can We Go? Someone may say at this point - well, I can't use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go? 
    Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don't like to hear - "It depends!" It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and the Oracle Database 11.2 High Availability Best Practices Manual talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact of less than 5% for a busy OLTP application. Even if you can't deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located 1000s of miles away, here's another nifty way to boost your HA. Have a local standby, configured SYNC. How local is "local"? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can failover. Very fast. In seconds. Wait - did I say "seconds"? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho' that this time you won't have to wait for another year for this.
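    For a rough sense of how a few hundred miles relates to the ~10ms latency figure quoted above, here is a back-of-the-envelope Python calculation (my own sketch, not from the post; it covers propagation delay only, and real networks add switching and queuing overhead on top of this):

        miles = 330
        km = miles * 1.609
        fiber_km_per_ms = 200           # light in fiber covers roughly 200 km per millisecond
        one_way_ms = km / fiber_km_per_ms
        round_trip_ms = 2 * one_way_ms  # ~5.3 ms of pure propagation delay for the SYNC round trip
        print(round(round_trip_ms, 1))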

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
    In this blog post I'll show you how you can play back the IIS Logs in Visual Studio to automatically generate the web performance tests. You can also download the sample solution I am demo-ing in the blog post. Introduction Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production you could mine the information available in IIS logs to analyse the dense zones (most used pages) and performance test those pages rather than wasting time testing & tuning the least used pages in your application. What are IIS Logs To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits as well as dates, times and queries. If you are using IIS 7 and above you will find the log files in the following directory: C:\inetpub\logs\ Walkthrough 1. Download and Install Log Parser from the Microsoft download Centre. You should see LogParser.dll in the install folder; the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the iis log files programmatically. By the way, if you haven't used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details… 2. Create a new test project in Visual Studio. Let's call it IISLogsToWebPerfTestDemo.   3.  Delete the UnitTest1.cs class that gets created by default. Right click the solution and add a project of type class library, and name it IISLogsToWebPerfTestEngine. Delete the default class Program.cs that gets created with the project. 4. Under the IISLogsToWebPerfTestEngine project add a reference to Microsoft.VisualStudio.QualityTools.WebTestFramework – c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll LogParser also called MSUtil - c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll 5. Right click the IISLogsToWebPerfTestEngine project and add a new class – IISLogReader.cs. The IISLogReader class queries the iis logs using the log parser. 
    using System; using System.Collections.Generic; using System.Text; using MSUtil; using LogQuery = MSUtil.LogQueryClassClass; using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass; using LogRecordSet = MSUtil.ILogRecordset; using Microsoft.VisualStudio.TestTools.WebTesting; using System.Diagnostics; namespace IisLogsToWebPerfTestEngine { // By making use of log parser it is possible to query the iis log using select queries public class IISLogReader { private string _iisLogPath; public IISLogReader(string iisLogPath) { _iisLogPath = iisLogPath; } public IEnumerable<WebTestRequest> GetRequests() { LogQuery logQuery = new LogQuery(); IISLogInputFormat iisInputFormat = new IISLogInputFormat(); // currently these columns give us sufficient information to construct the web test requests string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath; LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat); // Apply a bit of transformation while (!recordSet.atEnd()) { ILogRecord record = recordSet.getRecord(); if (record.getValueEx("cs-method").ToString() == "GET") { string server = record.getValueEx("s-ip").ToString(); string path = record.getValueEx("cs-uri-stem").ToString(); string querystring = record.getValueEx("cs-uri-query").ToString(); StringBuilder urlBuilder = new StringBuilder(); urlBuilder.Append("http://"); urlBuilder.Append(server); urlBuilder.Append(path); if (!String.IsNullOrEmpty(querystring)) { urlBuilder.Append("?"); urlBuilder.Append(querystring); } // You could make substitutions by introducing parameterized web tests. WebTestRequest request = new WebTestRequest(urlBuilder.ToString()); Debug.WriteLine(request.UrlWithQueryString); yield return request; } recordSet.moveNext(); } Console.WriteLine(" That's it! Closing the reader"); recordSet.close(); } } }   6. Connect the dots by adding the project reference ‘IisLogsToWebPerfTestEngine’ to ‘IisLogsToWebPerfTest’. Right click the ‘IisLogsToWebPerfTest’ project and add a new class ‘WebTest1Coded.cs’. The WebTest1Coded class inherits from the WebTest class. By overriding the GetRequestEnumerator method we can inject the log files into the IISLogReader class, which uses Log Parser to query the log file and extract the web requests, generating the web test requests that are yielded back for playback when the test is run. namespace IisLogsToWebPerfTest { using System; using System.Collections.Generic; using System.Text; using Microsoft.VisualStudio.TestTools.WebTesting; using Microsoft.VisualStudio.TestTools.WebTesting.Rules; using IisLogsToWebPerfTestEngine; // This class is a coded web performance test implementation, that simply passes // the path of the iis logs to the IisLogReader class which does the heavy // lifting of reading the contents of the log file and converting them to tests. // You could have multiple such classes that inherit from WebTest and implement // GetRequestEnumerator Method and pass different log files for different tests. public class WebTest1Coded : WebTest { public WebTest1Coded() { this.PreAuthenticate = true; } public override IEnumerator<WebTestRequest> GetRequestEnumerator() { // substitute the highlighted path with the path of the iis log file IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log"); foreach (WebTestRequest request in reader.GetRequests()) { yield return request; } } } }   7. It's time to fire the test off and see the iis log playback as a web performance test. 
    From the Test menu choose the Test View window and you should be able to see the WebTest1Coded test show up. Highlight the test and press Run selection (you can also debug the test in case you face any failures during test execution). 8. Optionally you can create a Load Test by keeping ‘WebTest1Coded’ as the base test. Conclusion You have just helped your testing team; you have now become the coolest developer in your organization! Jokes apart, log parser and web performance tests together allow you to save a lot of time by not having to worry about what to test or even worrying about how to record the test. If you haven't already, download the solution from here. You can take this to the next level by using LogParser to extract the log files as part of an end of day batch to a database. See the usage trends by using this solution over a longer term and have your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don’t forget to share … Keep RocKiNg!
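    As an aside to the walkthrough above, if you want to prototype the log-mining step outside Visual Studio, a rough Python sketch (my own, independent of Log Parser, and assuming the log still contains the default W3C "#Fields:" header) might look like this:

        def read_get_requests(log_path):
            # Yield full URLs for GET requests from an IIS W3C log file (sketch only)
            fields = []
            with open(log_path) as log:
                for line in log:
                    if line.startswith("#Fields:"):
                        fields = line.split()[1:]   # e.g. date time s-ip cs-method cs-uri-stem cs-uri-query ...
                        continue
                    if line.startswith("#") or not fields:
                        continue
                    row = dict(zip(fields, line.split()))
                    if row.get("cs-method") != "GET":
                        continue
                    url = "http://" + row["s-ip"] + row["cs-uri-stem"]
                    if row.get("cs-uri-query", "-") != "-":   # IIS logs an empty query string as "-"
                        url += "?" + row["cs-uri-query"]
                    yield url

        # Example usage with the log path from the walkthrough:
        # for url in read_get_requests(r"C:\Demo\iisLog1.log"):
        #     print(url)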

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term “On-Premise” to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents. Tools Overview Firstly let’s look at the Design-Time tools within JDeveloper for customizing and extending the artifacts of a Business Process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and BPM Studio. The SOA Composite Editor As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full (read-only) XML source view, it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic. If you are customizing an existing Fusion Applications BPEL process then be aware that it does support MDS-based customization layers, just like Page Composer, where different customizations are used based on the run-time context, like for a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously if you wish to fire different activities and tasks based on the user context then you should include switches to fork the flows in your custom BPEL process. Figure 1 – A BPEL process in Composite Editor The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the ‘Check for Updates’ help menu option. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this. Import the exported SAR .jar file into JDeveloper using the File menu, under the option ‘SOA Archive into SOA Project’. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role. Make the customizations and save the application project. Finally redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager. 
The Business Process Management (BPM) Suite In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for deep programming skills. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organization changes. Obviously there are some limitations on what they can do, however the BPM Suite functionality increases with each release, and in the majority of cases the tools remain as applicable as their developer-orientated siblings. At the current time business processes must be explicitly coded to support just one of these use-cases, either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager. Figure 2 – A BPM Process in JDeveloper BPM Suite. BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those for BPEL. The steps to deploy a custom BPM process are also essentially much the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such the SOA Composite Editor actually has support for both artifacts and even allows use of them together, such as calling a BPM process as a partner link from a BPEL process. For more details see the references below. Business Analyst Tooling In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high costs of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer. Figure 3 – Business Process Composer showing a CRM process flow. The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required, simply drag-drop-configure. The items in the business catalog are seeded by either Oracle (as a BPM Template) or added to by your own custom development. You cannot create or generate catalog content from BPM Composer directly. As per the screenshot you can see the Business Catalog content in the BPM Project browser region. In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer which focuses on non-approval business rules and domain value maps. At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forwards, especially for use in SaaS deployments where full design-time JDeveloper access is not available.
Further Reading For BPEL – Fusion Applications Extensibility Guide – Section 12 For BPM – Fusion Applications Extensibility Guide – Section 7 The product-specific documentation and implementation guides for Fusion Applications Fusion Middleware Developers Guide for SOA Suite Modeling and Implementation Guide for Oracle Business Process Management User’s Guide for Oracle Business Process Composer Oracle University courses on BPM Suite and SOA Development

    Read the article

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise, smart phone adoption is reaching the farthest corners of the globe; the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so. Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics?' Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience. The net result of which is happier customers. The obvious benefits but the lesser realised impact Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no-longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality however is mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change is much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and ability to anticipate, staff satisfaction and retention is higher, and real time feedback constant. 
The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships. IT and HR - partners and stewards of mobility One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT from one of service provision to partnership. The reason for the dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect. No turning back, and no desire to With real time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisations' evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, with mobile enabled automation and quality data, compliance is simpler. Their world has changed for the better. For the CIO, mobility also assists them to optimise performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them means employees don't need high-level technical expertise to train users. It reduces the training and engagement required from the IT team so they can focus on other things that deliver value to the bottom line; all the while lowering the cost of assets and related maintenance work by simplifying processes. Rewards of a mobile enterprise outweigh risks With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile. 
Several state government websites have advice on how to create mobile apps and more. And as recently as last week the Victorian Minister for Technology Gordon Rich-Phillips unveiled his State government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops. Aaron is the Vice President of HCM Strategy at Oracle Corporation where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include, ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen

    Read the article

  • SharePoint logging to a list

    - by Norgean
    I recently worked in an environment with several servers. Locating the correct SharePoint log file for error messages, or development trace calls, is cumbersome. And once the solution hit the cloud, it got even worse, as we had no access to the log files at all. Obviously we are not the only ones with this problem, and the current trend seems to be to log to a list. This had become an off-hour project, so rather than do the sensible thing and find a ready-made solution, I decided to do it the hard way. So! Fire up Visual Studio, create yet another empty SharePoint solution, and start to think of some requirements. Easy on/offI want to be able to turn list-logging on and off.Easy loggingFor me, this means being able to use string.Format.Easy filteringLet's have the possibility to add some filtering columns; category and severity, where severity can be "verbose", "warning" or "error". Easy on/off Well, that's easy. Create a new web feature. Add an event receiver, and create the list on activation of the feature. Tear the list down on de-activation. I chose not to create a new content type; I did not feel that it would give me anything extra. I based the list on the generic list - I think a better choice would have been the announcement type. Approximately: public void CreateLog(SPWeb web)         {             var list = web.Lists.TryGetList(LogListName);             if (list == null)             {                 var listGuid = web.Lists.Add(LogListName, "Logging for the masses", SPListTemplateType.GenericList);                 list = web.Lists[listGuid];                 list.Title = LogListTitle;                 list.Update();                 list.Fields.Add(Category, SPFieldType.Text, false);                 var stringColl = new StringCollection();                 stringColl.AddRange(new[]{Error, Information, Verbose});                 list.Fields.Add(Severity, SPFieldType.Choice, true, false, stringColl);                 ModifyDefaultView(list);             }         }Should be self explanatory, but: only create the list if it does not already exist (d'oh). Best practice: create it with a Url-friendly name, and, if necessary, give it a better title. ...because otherwise you'll have to look for a list with a name like "Simple_x0020_Log". I've added a couple of fields; a field for category, and a 'severity'. Both to make it easier to find relevant log messages. Notice that I don't have to call list.Update() after adding the fields - this would cause a nasty error (something along the lines of "List locked by another user"). The function for deleting the log is exactly as onerous as you'd expect:         public void DeleteLog(SPWeb web)         {             var list = web.Lists.TryGetList(LogListTitle);             if (list != null)             {                 list.Delete();             }         } So! "All" that remains is to log. Also known as adding items to a list. Lots of different methods with different signatures end up calling the same function. For example, LogVerbose(web, message) calls LogVerbose(web, null, message) which again calls another method which calls: private static void Log(SPWeb web, string category, string severity, string textformat, params object[] texts)         {             if (web != null)             {                 var list = web.Lists.TryGetList(LogListTitle);                 if (list != null)                 {                     var item = list.AddItem(); // NOTE! NOT list.Items.Add… just don't, mkay?                     
var text = string.Format(textformat, texts);                     if (text.Length > 255) // because the title field only holds so many chars. Sigh.                         text = text.Substring(0, 254);                     item[SPBuiltInFieldId.Title] = text;                     item[Degree] = severity;                     item[Category] = category;                     item.Update();                 }             } // omitted: Also log to SharePoint log.         } By adding a params parameter I can call it as if I was doing a Console.WriteLine: LogVerbose(web, "demo", "{0} {1}{2}", "hello", "world", '!'); Ok, that was a silly example, a better one might be: LogError(web, LogCategory, "Exception caught when updating {0}. exception: {1}", listItem.Title, ex); For performance reasons I use list.AddItem rather than list.Items.Add. For completeness' sake, let us include the "ModifyDefaultView" function that I deliberately skipped earlier.         private void ModifyDefaultView(SPList list)         {             // Add fields to default view             var defaultView = list.DefaultView;             var exists = defaultView.ViewFields.Cast<string>().Any(field => String.CompareOrdinal(field, Severity) == 0);               if (!exists)             {                 var field = list.Fields.GetFieldByInternalName(Severity);                 if (field != null)                     defaultView.ViewFields.Add(field);                 field = list.Fields.GetFieldByInternalName(Category);                 if (field != null)                     defaultView.ViewFields.Add(field);                 defaultView.Update();                   var sortDoc = new XmlDocument();                 sortDoc.LoadXml(string.Format("<Query>{0}</Query>", defaultView.Query));                 var orderBy = (XmlElement) sortDoc.SelectSingleNode("//OrderBy");                 if (orderBy != null && sortDoc.DocumentElement != null)                     sortDoc.DocumentElement.RemoveChild(orderBy);                 orderBy = sortDoc.CreateElement("OrderBy");                 sortDoc.DocumentElement.AppendChild(orderBy);                 field = list.Fields[SPBuiltInFieldId.Modified];                 var fieldRef = sortDoc.CreateElement("FieldRef");                 fieldRef.SetAttribute("Name", field.InternalName);                 fieldRef.SetAttribute("Ascending", "FALSE");                 orderBy.AppendChild(fieldRef);                   fieldRef = sortDoc.CreateElement("FieldRef");                 field = list.Fields[SPBuiltInFieldId.ID];                 fieldRef.SetAttribute("Name", field.InternalName);                 fieldRef.SetAttribute("Ascending", "FALSE");                 orderBy.AppendChild(fieldRef);                 defaultView.Query = sortDoc.DocumentElement.InnerXml;                 //defaultView.Query = "<OrderBy><FieldRef Name='Modified' Ascending='FALSE' /><FieldRef Name='ID' Ascending='FALSE' /></OrderBy>";                 defaultView.Update();             }         } First two lines are easy - see if the default view includes the "Severity" column. If it does - quit; our job here is done.Adding "severity" and "Category" to the view is not exactly rocket science. But then? Then we build the sort order query. Through XML. The lines are numerous, but boring. All to achieve the CAML query which is commented out. The major benefit of using the dom to build XML, is that you may get compile time errors for spelling mistakes. 
I say 'may', because although the compiler will not let you forget to close a tag, it will cheerfully let you spell "Name" as "Naem". Whichever you prefer, at the end of the day the view will sort by modified date and ID, both descending. I added the ID as there may be several items with the same time stamp. So! Simple logging to a list, with a sensible view, and with the normal functionality for creating your own filtered views. I should probably have added some more views in code, ready filtered for "only errors", "errors and warnings" etc. And it would be nice to block verbose logging completely, but I'm not happy with the alternatives. (yetanotherfeature or an admin page seem like overkill - perhaps just removing it as one of the choices, and not logging if it isn't there?) Before you comment - yes, try-catches have been removed for clarity. There is nothing worse than having a logging function that breaks your site!
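For completeness, the public wrapper overloads mentioned earlier (LogVerbose, LogError and friends) are only described, not shown. Here is a minimal sketch of how they might chain down to the private Log method; the class name (ListLogger) is mine, and the Verbose/Error constants and Log(...) method are assumed to be the members already referenced in the post.

using Microsoft.SharePoint;

public partial class ListLogger
{
    // Sketch only: Verbose, Error and the private static Log(...) method are
    // assumed to be defined elsewhere in this class, as described in the post.
    public static void LogVerbose(SPWeb web, string textformat, params object[] texts)
    {
        LogVerbose(web, null, textformat, texts);
    }

    public static void LogVerbose(SPWeb web, string category, string textformat, params object[] texts)
    {
        Log(web, category, Verbose, textformat, texts);
    }

    public static void LogError(SPWeb web, string category, string textformat, params object[] texts)
    {
        Log(web, category, Error, textformat, texts);
    }
}

With wrappers like these, LogVerbose(web, "demo", "{0} {1}{2}", "hello", "world", '!') works exactly as the string.Format-style example in the post.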

    Read the article

  • Part 3 of 4 : Tips/Tricks for Silverlight Developers.

    - by mbcrump
    Part 1 | Part 2 | Part 3 | Part 4 I wanted to create a series of blog posts that get right to the point and are aimed specifically at Silverlight Developers. The most important things I want this series to answer are: What is it?  Why do I care? How do I do it? I hope that you enjoy this series. Let’s get started: Tip/Trick #11) What is it? Underline Text in a TextBlock. Why do I care? I’ve seen people do some crazy things to get underlined text in a Silverlight Application. In case you didn’t know there is a property for that. How do I do it: On a TextBlock you have a property called TextDecorations. You can easily set this property in XAML or CodeBehind with the following snippet: <UserControl x:Class="SilverlightApplication19.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400"> <Grid x:Name="LayoutRoot" Background="White"> <TextBlock Name="txtTB" Text="MichaelCrump.NET" TextDecorations="Underline" /> </Grid> </UserControl> or you can do it in CodeBehind… txtTB.TextDecorations = TextDecorations.Underline;   Tip/Trick #12) What is it? Get the browser information from a Silverlight Application. Why do I care? This will allow you to program around certain browser conditions that you otherwise may not be aware of. How do I do it: It is very easy to extract Browser Information out of a Silverlight Application by using the BrowserInformation class. You can copy/paste this code snippet to have access to all of them. string strBrowserName = HtmlPage.BrowserInformation.Name; string strBrowserMinorVersion = HtmlPage.BrowserInformation.BrowserVersion.Minor.ToString(); string strIsCookiesEnabled = HtmlPage.BrowserInformation.CookiesEnabled.ToString(); string strPlatform = HtmlPage.BrowserInformation.Platform; string strProductName = HtmlPage.BrowserInformation.ProductName; string strProductVersion = HtmlPage.BrowserInformation.ProductVersion; string strUserAgent = HtmlPage.BrowserInformation.UserAgent; string strBrowserVersion = HtmlPage.BrowserInformation.BrowserVersion.ToString(); string strBrowserMajorVersion = HtmlPage.BrowserInformation.BrowserVersion.Major.ToString(); Tip/Trick #13) What is it? Always check the minRuntimeVersion after creating a new Silverlight Application. Why do I care? Whenever you create a new Silverlight Application and host it inside of an ASP.net website you will notice Visual Studio generates some code for you as shown below. The minRuntimeVersion value is set by the SDK installed on your system. Be careful if you are playing with betas like “Lightswitch”, because you will have a higher version of the SDK installed. So when you create a new Silverlight 4 project and deploy it to your customers they will get a prompt telling them they need to upgrade Silverlight. They also will not be able to upgrade to your version because it's not released to the public yet. How do I do it: Open up the .aspx or .html file Visual Studio generated and look for the line below. Make sure it matches whatever version you are actually targeting. Tip/Trick #14) What is it? The VisualTreeHelper class provides useful methods for working with nodes in a visual tree. Why do I care? It’s nice to have the ability to “walk” a visual tree or to get the rendered elements of a ListBox. I have found it very useful for debugging my Silverlight application. 
How do I do it: Many examples exist on the web, but say that you have a huge Silverlight application and want to find the parent object of a control.  In the code snippet below, we would get 3 MessageBoxes with (StackPanel first, Grid second and UserControl Third). This is a tiny application, but imagine how helpful this would be on a large project. <UserControl x:Class="SilverlightApplication18.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400"> <Grid x:Name="LayoutRoot" Background="White"> <StackPanel> <Button Content="Button" Height="23" Name="button1" VerticalAlignment="Top" Width="75" Click="button1_Click" /> </StackPanel> </Grid> </UserControl> private void button1_Click(object sender, RoutedEventArgs e) { DependencyObject obj = button1; while ((obj = VisualTreeHelper.GetParent(obj)) != null) { MessageBox.Show(obj.GetType().ToString()); } } Tip/Trick #15) What is it? Add ChildWindows to your Silverlight Application. Why do I care? ChildWindows are a great way to direct a user attention to a particular part of your application. This could be used when saving or entering data. How do I do it: Right click your Silverlight Application and click Add then New Item. Select Silverlight Child Window as shown below. Add an event and call the ChildWindow with the following snippet below: private void button1_Click(object sender, RoutedEventArgs e) { ChildWindow1 cw = new ChildWindow1(); cw.Show(); } Your main application can still process information but this screen forces the user to select an action before proceeding. The code behind of the ChildWindow will look like the following: namespace SilverlightApplication18 { public partial class ChildWindow1 : ChildWindow { public ChildWindow1() { InitializeComponent(); } private void OKButton_Click(object sender, RoutedEventArgs e) { this.DialogResult = true; //TODO: Add logic to save what the user entered. } private void CancelButton_Click(object sender, RoutedEventArgs e) { this.DialogResult = false; } } } Thanks for reading and please come back for Part 4.  Subscribe to my feed CodeProject
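Tip #14's snippet walks up the tree; for the other direction mentioned (getting at the rendered elements, for example inside a ListBox), a small complementary sketch could look like the following. The helper class and its name are mine, not from the original series.

using System.Windows;
using System.Windows.Media;

public static class VisualTreeWalker
{
    // Recursively prints every element below 'root' in the rendered visual tree,
    // indented by depth - handy for seeing what a ListBox actually generated.
    public static void DumpChildren(DependencyObject root, int depth)
    {
        int count = VisualTreeHelper.GetChildrenCount(root);
        for (int i = 0; i < count; i++)
        {
            DependencyObject child = VisualTreeHelper.GetChild(root, i);
            System.Diagnostics.Debug.WriteLine(new string(' ', depth * 2) + child.GetType().Name);
            DumpChildren(child, depth + 1);
        }
    }
}

Calling VisualTreeWalker.DumpChildren(LayoutRoot, 0) from a Loaded event handler lists the generated containers, which pairs nicely with the GetParent loop shown above.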

    Read the article

  • A Simple Collapsible Menu with jQuery

    - by Vincent Maverick Durano
    In this post I'll demonstrate how to make a simple collapsible menu using jQuery. To get started let's go ahead and fire up Visual Studio and create a new WebForm.  Now let's build our menu by adding some div, p and anchor tags. Since I'm using a masterpage then the ASPX mark-up should look something like this:   1: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> 2: <div id="Menu"> 3: <p>CARS</p> 4: <div class="section"> 5: <a href="#">Car 1</a> 6: <a href="#">Car 2</a> 7: <a href="#">Car 3</a> 8: <a href="#">Car 4</a> 9: </div> 10: <p>BIKES</p> 11: <div class="section"> 12: <a href="#">Bike 1</a> 13: <a href="#">Bike 2</a> 14: <a href="#">Bike 3</a> 15: <a href="#">Bike 4</a> 16: <a href="#">Bike 5</a> 17: <a href="#">Bike 6</a> 18: <a href="#">Bike 7</a> 19: <a href="#">Bike 8</a> 20: </div> 21: <p>COMPUTERS</p> 22: <div class="section"> 23: <a href="#">Computer 1</a> 24: <a href="#">Computer 2</a> 25: <a href="#">Computer 3</a> 26: <a href="#">Computer 4</a> 27: </div> 28: <p>OTHERS</p> 29: <div class="section"> 30: <a href="#">Other 1</a> 31: <a href="#">Other 2</a> 32: <a href="#">Other 3</a> 33: <a href="#">Other 4</a> 34: </div> 35: </div> 36: </asp:Content>   As you can see there's nothing fancy about the mark up above.. Now lets go ahead create a simple CSS to set the look and feel our our Menu. Just for for the simplicity of this demo, add the following CSS below under the <head> section of the page or if you are using master page then add it a the content head. Here's the CSS below:   1: <asp:Content ID="Content1" ContentPlaceHolderID="HeadContent" runat="server"> 2: <style type="text/css"> 3: #Menu{ 4: width:300px; 5: } 6: #Menu > p{ 7: background-color:#104D9E; 8: color:#F5F7FA; 9: margin:0; 10: padding:0; 11: border-bottom-style: solid; 12: border-bottom-width: medium; 13: border-bottom-color:#000000; 14: cursor:pointer; 15: } 16: #Menu .section{ 17: padding-left:5px; 18: background-color:#C0D9FA; 19: } 20: a{ 21: display:block; 22: color:#0A0A07; 23: } 24: </style> 25: </asp:Content>   Now let's add the collapsible effects on our menu using jQuery. To start using jQuery then register the following script at the very top of the <head> section of the page or if you are using master page then add it the very top of  the content head section.   <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js" ></script>   As you can see I'm using Google AJAX API CDN to host the jQuery file. You can also download the jQuery here and host it in your server if you'd like. Okay here's the the jQuery script below for adding the collapsible effects:   1: <script type="text/javascript"> 2: $(function () { 3: $("a").mouseover(function () { $(this).addClass("highlightRow"); }) 4: .mouseout(function () { $(this).removeClass("highlightRow"); }); 5:   6: $(".section").hide(); 7: $("#Menu > p").click(function () { 8: $(this).next().slideToggle("Slow"); 9: }); 10: }); 11: </script>   Okay to give you a little bit of explaination, at line 3.. what it does is it looks for all the "<a>" anchor elements on the page and attach the mouseover and mouseout event. On mouseover, the highlightRow css class is added to <a> element and on mouse out we remove the css class to revert the style to its default look. at line 6 we will hide all the elements that has a class name set as "section" and if you look at the mark up above it is refering to the <div> elements right after each <p> element. At line 7.. 
what it does is it looks for a <p> element that is a direct child of the element that has an ID of "Menu" and then attaches the click event to toggle the visibility of the section. Here's how it looks in the page: On Initial Load: After Clicking the Section Header:   That's it! I hope someone finds this post useful!   Technorati Tags: ASP.NET,JQuery,Master Page,JavaScript

    Read the article

  • Windows Server 2008 Task Scheduler Tasks Not Executing

    - by omatase
    I've been having an intermittent problem for some time now with the Windows Task Scheduler that I can't work out. I use the task scheduler to run a C# app I've written that has various plugins used to ensure production systems are working. This task scheduler itself is actually a production system so I have one simple task that executes every 8 minutes to notify an external monitoring system that the task scheduler is still up. If this external service fails to receive an "all-clear" at least once every 15 minutes (or so I don't remember the exact number right now) it will message us that the monitoring system is down. In the past we've had intermittent "down" messages from time to time and each time I've investigated the cause I was unable to find any problems. So this time I wanted to ask the StackOverflow community since it doesn't look like I'll have luck on my own. This morning at 2:32 AM the task fired (exactly 8 minutes after the previous firing) however the task didn't fire again until 3:28. There are no errors that I can see in the Event Viewer at this time. When I look at the Task Scheduler log there are no errors there either. Here is what the log looks like though: Information 6/11/2011 3:28:56 AM 102 Task completed (2) d6cf2412-269e-48bf-9f40-4a863347baad Information 6/11/2011 3:28:56 AM 201 Action completed (2) d6cf2412-269e-48bf-9f40-4a863347baad Information 6/11/2011 3:28:55 AM 129 Created Task Process Info Information 6/11/2011 3:28:55 AM 200 Action started (1) d6cf2412-269e-48bf-9f40-4a863347baad Information 6/11/2011 3:28:55 AM 100 Task Started (1) d6cf2412-269e-48bf-9f40-4a863347baad Information 6/11/2011 3:28:55 AM 319 Task Engine received message to start task (1) Information 6/11/2011 3:28:55 AM 107 Task triggered on scheduler Info d6cf2412-269e-48bf-9f40-4a863347baad Information 6/11/2011 3:28:15 AM 102 Task completed (2) b91fe5ce-39ef-42fb-adbe-bd8be012c00a Information 6/11/2011 3:28:15 AM 201 Action completed (2) b91fe5ce-39ef-42fb-adbe-bd8be012c00a Information 6/11/2011 3:28:15 AM 102 Task completed (2) 556c07dc-2724-4a21-a97e-dc4abd56f94d Information 6/11/2011 3:28:15 AM 201 Action completed (2) 556c07dc-2724-4a21-a97e-dc4abd56f94d Information 6/11/2011 3:28:15 AM 102 Task completed (2) 79328289-f742-49dd-aa0d-c3d05db50895 Information 6/11/2011 3:28:15 AM 201 Action completed (2) 79328289-f742-49dd-aa0d-c3d05db50895 Information 6/11/2011 3:28:15 AM 102 Task completed (2) 19743755-47b6-4b98-9bec-052193be9496 Information 6/11/2011 3:28:15 AM 201 Action completed (2) 19743755-47b6-4b98-9bec-052193be9496 Information 6/11/2011 3:28:15 AM 102 Task completed (2) c165754f-e3e6-4176-a327-11f9c06c39a5 Information 6/11/2011 3:28:15 AM 201 Action completed (2) c165754f-e3e6-4176-a327-11f9c06c39a5 Information 6/11/2011 3:28:15 AM 102 Task completed (2) 0e62ad3e-1f6e-40c0-9155-19f0108dee22 Information 6/11/2011 3:28:15 AM 201 Action completed (2) 0e62ad3e-1f6e-40c0-9155-19f0108dee22 Information 6/11/2011 3:28:10 AM 129 Created Task Process Info Information 6/11/2011 3:28:10 AM 200 Action started (1) 0e62ad3e-1f6e-40c0-9155-19f0108dee22 Information 6/11/2011 3:28:10 AM 129 Created Task Process Info Information 6/11/2011 3:28:10 AM 200 Action started (1) c165754f-e3e6-4176-a327-11f9c06c39a5 Information 6/11/2011 3:28:10 AM 129 Created Task Process Info Information 6/11/2011 3:28:10 AM 200 Action started (1) 19743755-47b6-4b98-9bec-052193be9496 Information 6/11/2011 3:28:10 AM 129 Created Task Process Info Information 6/11/2011 3:28:10 AM 200 Action started (1) 
79328289-f742-49dd-aa0d-c3d05db50895 Information 6/11/2011 3:28:10 AM 129 Created Task Process Info Information 6/11/2011 3:28:10 AM 200 Action started (1) 556c07dc-2724-4a21-a97e-dc4abd56f94d Information 6/11/2011 3:28:10 AM 129 Created Task Process Info Information 6/11/2011 3:28:10 AM 200 Action started (1) b91fe5ce-39ef-42fb-adbe-bd8be012c00a Information 6/11/2011 3:28:10 AM 100 Task Started (1) 0e62ad3e-1f6e-40c0-9155-19f0108dee22 Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1) Information 6/11/2011 3:28:10 AM 100 Task Started (1) c165754f-e3e6-4176-a327-11f9c06c39a5 Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1) Information 6/11/2011 3:28:10 AM 100 Task Started (1) 19743755-47b6-4b98-9bec-052193be9496 Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1) Information 6/11/2011 3:28:10 AM 100 Task Started (1) 79328289-f742-49dd-aa0d-c3d05db50895 Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1) Information 6/11/2011 3:28:10 AM 100 Task Started (1) 556c07dc-2724-4a21-a97e-dc4abd56f94d Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1) Information 6/11/2011 3:28:10 AM 100 Task Started (1) b91fe5ce-39ef-42fb-adbe-bd8be012c00a Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1) Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 0e62ad3e-1f6e-40c0-9155-19f0108dee22 Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info c165754f-e3e6-4176-a327-11f9c06c39a5 Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 19743755-47b6-4b98-9bec-052193be9496 Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 79328289-f742-49dd-aa0d-c3d05db50895 Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 556c07dc-2724-4a21-a97e-dc4abd56f94d Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info b91fe5ce-39ef-42fb-adbe-bd8be012c00a Information 6/11/2011 2:32:56 AM 102 Task completed (2) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5 Information 6/11/2011 2:32:56 AM 201 Action completed (2) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5 Information 6/11/2011 2:32:55 AM 129 Created Task Process Info Information 6/11/2011 2:32:55 AM 200 Action started (1) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5 Information 6/11/2011 2:32:55 AM 100 Task Started (1) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5 Information 6/11/2011 2:32:55 AM 319 Task Engine received message to start task (1) Information 6/11/2011 2:32:55 AM 107 Task triggered on scheduler Info 16e4f2c3-a340-410a-9c14-4bfe0861fdd5 Seems kind of strange. I also have two other C# apps that run and check something each hour on the hour using task scheduler. If I look at the history for those I can see that they didn't execute at 3 AM either! They all waited until 3:28 to start as well. If I look at "tasks completed" in the Event Viewer it shows that only one task was able to run between the 2:32 AM to 3:28 AM time period. 
The task was "\Microsoft\Windows\RAC\RACAgent" And here's what it looked like: Information 6/11/2011 3:18:09 AM 102 Task completed (2) 00c53a85-ba20-4666-80db-fbbe2492c0ad Information 6/11/2011 3:18:09 AM 201 Action completed (2) 00c53a85-ba20-4666-80db-fbbe2492c0ad Information 6/11/2011 3:18:08 AM 129 Created Task Process Info Information 6/11/2011 3:18:08 AM 200 Action started (1) 00c53a85-ba20-4666-80db-fbbe2492c0ad Information 6/11/2011 3:18:08 AM 100 Task Started (1) 00c53a85-ba20-4666-80db-fbbe2492c0ad Information 6/11/2011 3:18:08 AM 319 Task Engine received message to start task (1) I appreciate any ideas anyone may have.

    Read the article

  • KVM/Libvirt bridged/routed networking not working on newer guest kernels

    - by SharkWipf
    I have a dedicated server running Debian 6, with Libvirt (0.9.11.3) and Qemu-KVM (qemu-kvm-1.0+dfsg-11, Debian). I am having a problem getting bridged/routed networking to work in KVM guests with newer kernels (2.6.38). NATted networking works fine though. Older kernels work perfectly fine as well. The host kernel is at version 3.2.0-2-amd64, the problem was also there on an older host kernel. The contents of the host's /etc/network/interfaces (ip removed): # Loopback device: auto lo iface lo inet loopback # bridge auto br0 iface br0 inet static address 176.9.xx.xx broadcast 176.9.xx.xx netmask 255.255.255.224 gateway 176.9.xx.xx pointopoint 176.9.xx.xx bridge_ports eth0 bridge_stp off bridge_maxwait 0 bridge_fd 0 up route add -host 176.9.xx.xx dev br0 # VM IP post-up mii-tool -F 100baseTx-FD br0 # default route to access subnet up route add -net 176.9.xx.xx netmask 255.255.255.224 gw 176.9.xx.xx br0 The output of ifconfig -a on the host: br0 Link encap:Ethernet HWaddr 54:04:a6:8a:66:13 inet addr:176.9.xx.xx Bcast:176.9.xx.xx Mask:255.255.255.224 inet6 addr: fe80::5604:a6ff:fe8a:6613/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:20216729 errors:0 dropped:0 overruns:0 frame:0 TX packets:19962220 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:14144528601 (13.1 GiB) TX bytes:7990702656 (7.4 GiB) eth0 Link encap:Ethernet HWaddr 54:04:a6:8a:66:13 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:26991788 errors:0 dropped:12066 overruns:0 frame:0 TX packets:19737261 errors:270082 dropped:0 overruns:0 carrier:270082 collisions:1686317 txqueuelen:1000 RX bytes:15459970915 (14.3 GiB) TX bytes:6661808415 (6.2 GiB) Interrupt:17 Memory:fe500000-fe520000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:6240133 errors:0 dropped:0 overruns:0 frame:0 TX packets:6240133 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:6081956230 (5.6 GiB) TX bytes:6081956230 (5.6 GiB) virbr0 Link encap:Ethernet HWaddr 52:54:00:79:e4:5a inet addr:192.168.100.1 Bcast:192.168.100.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:225016 errors:0 dropped:0 overruns:0 frame:0 TX packets:412958 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:16284276 (15.5 MiB) TX bytes:687827984 (655.9 MiB) virbr0-nic Link encap:Ethernet HWaddr 52:54:00:79:e4:5a BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vnet0 Link encap:Ethernet HWaddr fe:54:00:93:4e:68 inet6 addr: fe80::fc54:ff:fe93:4e68/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:607670 errors:0 dropped:0 overruns:0 frame:0 TX packets:5932089 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:83574773 (79.7 MiB) TX bytes:1092482370 (1.0 GiB) vnet1 Link encap:Ethernet HWaddr fe:54:00:ed:6a:43 inet6 addr: fe80::fc54:ff:feed:6a43/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:922132 errors:0 dropped:0 overruns:0 frame:0 TX packets:6342375 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:251091242 (239.4 MiB) TX bytes:1629079567 (1.5 GiB) vnet2 Link encap:Ethernet HWaddr fe:54:00:0d:cb:3d inet6 addr: fe80::fc54:ff:fe0d:cb3d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 
Metric:1 RX packets:9461 errors:0 dropped:0 overruns:0 frame:0 TX packets:665189 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:4990275 (4.7 MiB) TX bytes:49229647 (46.9 MiB) vnet3 Link encap:Ethernet HWaddr fe:54:cd:83:eb:aa inet6 addr: fe80::fc54:cdff:fe83:ebaa/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1649 errors:0 dropped:0 overruns:0 frame:0 TX packets:12177 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:77233 (75.4 KiB) TX bytes:2127934 (2.0 MiB) The guest's /etc/network/interfaces, in this case running Ubuntu 12.04 (ip removed): # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 176.9.xx.xx netmask 255.255.255.248 gateway 176.9.xx.xx # Host IP pointopoint 176.9.xx.xx # Host IP dns-nameservers 8.8.8.8 8.8.4.4 The output of ifconfig -a on the guest: eth0 Link encap:Ethernet HWaddr 52:54:cd:83:eb:aa inet addr:176.9.xx.xx Bcast:0.0.0.0 Mask:255.255.255.255 inet6 addr: fe80::5054:cdff:fe83:ebaa/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:14190 errors:0 dropped:0 overruns:0 frame:0 TX packets:1768 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2614642 (2.6 MB) TX bytes:82700 (82.7 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:954 errors:0 dropped:0 overruns:0 frame:0 TX packets:954 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:176679 (176.6 KB) TX bytes:176679 (176.6 KB) Output of ping -c4 on the guest: PING google.nl (173.194.35.151) 56(84) bytes of data. 64 bytes from muc03s01-in-f23.1e100.net (173.194.35.151): icmp_req=1 ttl=55 time=14.7 ms From static.174.82.xx.xx.clients.your-server.de (176.9.xx.xx): icmp_seq=2 Redirect Host(New nexthop: static.161.82.9.176.clients.your-server.de (176.9.82.161)) 64 bytes from muc03s01-in-f23.1e100.net (173.194.35.151): icmp_req=2 ttl=55 time=15.1 ms From static.198.170.9.176.clients.your-server.de (176.9.170.198) icmp_seq=3 Destination Host Unreachable From static.198.170.9.176.clients.your-server.de (176.9.170.198) icmp_seq=4 Destination Host Unreachable --- google.nl ping statistics --- 4 packets transmitted, 2 received, +2 errors, 50% packet loss, time 3002ms rtt min/avg/max/mdev = 14.797/14.983/15.170/0.223 ms, pipe 2 The static.174.82.xx.xx.clients.your-server.de (176.9.xx.xx) is the host's IP. I have encountered this problem with every guest OS I've tried, that being Fedora, Ubuntu (server/desktop) and Debian with an upgraded kernel. I've also tried compiling the guest kernel myself, to no avail. I have no problem with recompiling a kernel, though the host cannot afford any downtime. Any ideas on this problem are very welcome. EDIT: I can ping the host from inside the guest.

    Read the article

  • Bypass cache for mobile user agents, VARNISH+NGINX+W3CACHE

    - by Mike McGhee
    Right now I'm running Wordpress w/ W3 Cache on nginx with varnish front end. I'm trying to use the WP Touch Pro plugin for wordpress to display mobile sites, but it is not working. Shows the desktop theme still. I've put the mobile user agents in the rejected user agents box in w3 cache. Here is the nginx config w3 cache spit out: BEGIN W3TC Page Cache cache location ~ /wp-content/w3tc/pgcache.*html$ { expires modified 3600s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; add_header Vary "Accept-Encoding, Cookie"; } location ~ /wp-content/w3tc/pgcache.*gzip$ { gzip off; types {} default_type text/html; expires modified 3600s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; add_header Vary "Accept-Encoding, Cookie"; add_header Content-Encoding gzip; } # END W3TC Page Cache cache # BEGIN W3TC Browser Cache gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon; location ~ \.(css|js|htc)$ { expires 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; } location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; } location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; } # END W3TC Browser Cache # BEGIN W3TC Minify core rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last; rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; # END W3TC Minify core # BEGIN W3TC Page Cache core rewrite ^(.*\/)?w3tc_rewrite_test$ $1?w3tc_rewrite_test=1 last; set $w3tc_rewrite 1; if ($request_method = POST) { set $w3tc_rewrite 0; } if ($query_string != "") { set $w3tc_rewrite 0; } if ($http_host != "mysite.com") { set $w3tc_rewrite 0; } set $w3tc_rewrite2 1; if ($request_uri !~ \/$) { set $w3tc_rewrite2 0; } if ($request_uri ~* "(sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { set $w3tc_rewrite2 1; } if ($w3tc_rewrite2 != 1) { set $w3tc_rewrite 0; } set $w3tc_rewrite3 1; if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|\/feed\/|wp-.*\.php|index\.php)") { set $w3tc_rewrite3 0; } if ($request_uri ~* "(wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") { set $w3tc_rewrite3 1; } if ($w3tc_rewrite3 != 1) { set $w3tc_rewrite 0; } if ($http_cookie ~* "(comment_author|wp\-postpass|wordpress_\[a\-f0\-9\]\+|wordpress_logged_in)") { set $w3tc_rewrite 0; } if ($http_user_agent ~* "(W3\ Total\ Cache/0\.9\.2\.4|iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry9500|blackberry9520|blackberry9530|blackberry9550|blackberry\ 9800|blackberry\ 9780|webos|s8000|bada)") { set $w3tc_rewrite 0; } set $w3tc_ua ""; if ($http_user_agent ~* "(acer\ s100|android|archos5|blackberry9500|blackberry9530|blackberry9550|blackberry\ 9800|cupcake|docomo\ ht\-03a|dream|htc\ hero|htc\ magic|htc_dream|htc_magic|incognito|ipad|iphone|ipod|kindle|lg\-gw620|liquid\ build|maemo|mot\-mb200|mot\-mb300|nexus\ one|opera\ mini|samsung\-s8000|series60.*webkit|series60/5\.0|sonyericssone10|sonyericssonu20|sonyericssonx10|t\-mobile\ mytouch\ 3g|t\-mobile\ opal|tattoo|webmate|webos)") { set 
$w3tc_ua _high; } set $w3tc_ref ""; set $w3tc_ssl ""; set $w3tc_enc ""; if ($http_accept_encoding ~ gzip) { set $w3tc_enc _gzip; } set $w3tc_ext ""; if (-f "$document_root/wp-content/w3tc/pgcache/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl.html$w3tc_enc") { set $w3tc_ext .html; } if ($w3tc_ext = "") { set $w3tc_rewrite 0; } if ($w3tc_rewrite = 1) { rewrite .* "/wp- content/w3tc/pgcache/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl$w3tc_ext$w3tc_enc" last; } # END W3TC Page Cache core And here is what I have in my varnish vcl.. sub vcl_recv { # Add a unique header containing the client address remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Device detection set req.http.X-Device = "desktop"; if ( req.http.User-Agent ~ "iP(hone|od|ad)" || req.http.User-Agent ~ "Android" ) { set req.http.X-Device = "smart"; } elseif ( req.http.User-Agent ~ "(SymbianOS|BlackBerry|SonyEricsson|Nokia|SAMSUNG|^LG)" ) { set req.http.X-Device = "cell"; } Any help is greatly appreciated, I've been banging my head against this for 2 days..

    Read the article

  • a friendly elaboration on how iceweasel misses as recent request was discovered in post tut

    - by v3x3n
    I would like to specify what you asked about in how iceweasel misses on in hopes that you are interested in considering the answer to be valid considering the package should run efficient to a novice user especially when getting a first impression on a new software, browser, or package they are given by default. First let me say that I am very grateful for all debian has and continues to do for its user base and I am not complaining, only hoping that in mentioning such it will eventually aid in improving the disappointing experience iceweasel gave me in multiple ways right from the start of using it. I am sure tremendous amounts of hard work went into making it work as well as it does, and tremendous effort continues to go into IW like the rest of the package and distros such as this... Having said that, I do aim to keep my examples simple as possible but also explained in enough detail as to not leave out important info to get a full idea of my issue... (Thanks for understanding if I sound novice to debian, I am but continue to learn and advance by trial and error each day, and of course the wonderful help from all the great minds available here and there)... Well, my bone to pick (unfortunately as I disdain being a complainer when so much has been done for us, the end user) with IW is mainly 3 immediately noticed problems in offering me a smooth test drive (completely normal browsing conduct of your average user, I should indicate) as soon as I started out a task using IW (heading directly to google images and searched for a few web sized images, then saved a few (no more than 800 to 900 in dimensions, typically a normal procedure I would imagine) I experienced a horrendous reoccurring freeze and lag time that almost left me to kill the process more than once on each attempt and continued to compound so that by a few attempts later, even 1 single image save locked up the entire browser, ultimately giving up on my procedure in its simplicity with a bit of aggravation, if there was something I had done wrong on my end pre-attempt or during installation. Secondly, visiting video player sites in this case youtube will not allow for any video feed to play whatsoever and again, experienced a slight freezing that eventually dissipated as long as I didn't attempt to press play or reload another video trial... Very annoying and possibly related to my first issue (which could rely on my own lack of knowledge in setup config, but that troubles me if so).... Lastly as another user has stated already, in which brought you to ask them what exactly seemed to be the problem in the first place is also my final straw that broke the weasel's back, leading me here to find a solution to a updated version of firefox... being that iceweasel (an apparent build of firefox 17.x.x) leaving the user to find themselves out of luck when installing their favorite or much useful add-ons or plugins that aid their task or normal usage online.... As the other user clearly stated... there is almost zero options and barely any luck to find compatible plugins with such an outdated version still being used... I can only imagine if as a novoice or standard user to linux... and beginner to debian (which I am all for btw!) if I am discovering major conductivity issues which hinder normal user conduct right off the bat... that there is probably a bundle or more of security vulnerabilities that would unravel to me if I continued to give mr Hommey's package any further attention, or interest in continuing to use... 
This is a deal breaker on any account, unless all 3 of these issues stated are due to a very n00berish setting or lack thereof (which if that is the case, such was not properly stated for beginner in any place available by following installation tutorials such as LVM installers, which I felt lacked a clear understanding to someone who is trying to learn why and how, rather than, a fix to a potentially broken out of box installation... I really think that is irrelevant in any case however, as there is much step by step support regarding that kind of thing, just saying... its very challenging, along side a job to keep deadlines on, and the need to learn a better and more costomizible system, that was not as popularized when I became a dev via windows (OK OK I know thats a huge mistake and now I am way off original topic but anyway) Thanks very much for taking the time to read over this, and I hope it honestly gives you a few good and valid reasons for users wanting and seeking a way to get around iceweasel altogether... Finally, if I am missing something vital that renders all I've stated invalid or useless, please feel free to shove that elephant in the room my direction! Thanks for all you do, as I am sure it is very useful and dedicated, being you are a debian maintainer, and super user! Much love for intelligence, freedom and sharing the secrets of the web! May it remain unregulated and a good place to grow and learn for generations to come! Stay true! Take care now! -v3x (v3x @ gmx .com)

    Read the article

  • How to Edit or Add a New Row in jqGrid

    - by Paul
    My jqGrid that does a great job of pulling data from my database, but I'm having trouble understanding how the Add New Row functionality works. Right now, I'm able to edit inline data, but I'm not able to create a new row using the Modal Box. I'm missing that extra logic that says, "If this is a new row, post this to the server side URL" instead of modifying existing data. (Right now, hitting Submit only clears the form and reloads the grid data.) The documentation states that Add New Row is: jQuery("#editgrid").jqGrid('editGridRow',"new",{height:280,reloadAfterSubmit:false}); but I'm not sure how to use that correctly. I've spent alot of time studying the Demos, but they seem to all use an external button to fire the new row command, rather than using the Modal Form, which I want to do. My complete code is here: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>jqGrid</title> <link rel="stylesheet" type="text/css" media="screen" href="../css/ui-lightness/jquery-ui-1.7.2.custom.css" /> <link rel="stylesheet" type="text/css" media="screen" href="../css/ui.jqgrid.css" /> <script src="jquery-1.3.2.min.js" type="text/javascript"></script> <script src="../js/i18n/grid.locale-en.js" type="text/javascript"></script> <script src="jquery.jqGrid.min.js" type="text/javascript"></script> </head> <body> <h2>My Grid Data</h2> <table id="list" class="scroll"></table> <div id="pager" class="scroll c1"></div> <script type="text/javascript"> var lastSelectedId; jQuery('#list').jqGrid({ url:'grid.php', datatype: 'json', mtype: 'POST', colNames:['ID','Name', 'Price', 'Promotion'], colModel:[ {name:'product_id',index:'product_id', width:25,editable:false}, {name:'name',index:'name', width:50,editable:true, edittype:'text',editoptions:{size:30,maxlength:50}}, {name:'price',index:'price', width:50, align:'right',formatter:'currency', editable:true}, {name:'on_promotion',index:'on_promotion', width:50, formatter:'checkbox',editable:true, edittype:'checkbox'}], rowNum:10, rowList:[5,10,20,30], pager: $('#pager'), sortname: 'product_id', viewrecords: true, sortorder: "desc", caption:"Database", width:500, height:150, onSelectRow: function(id){ if(id && id!==lastSelectedId){ $('#list').restoreRow(lastSelectedId); $('#list').editRow(id,true,null,onSaveSuccess); lastSelectedId=id; }}, editurl:'grid.php?action=save'}) .jqGrid('navGrid','#pager', {refreshicon: 'ui-icon-refresh',view:true}, {height:280,reloadAfterSubmit:true}, {height:280,reloadAfterSubmit:true}, {reloadAfterSubmit:true}) .jqGrid('editGridRow',"new",{height:280,reloadAfterSubmit:false}); function onSaveSuccess(xhr) {response = xhr.responseText; if(response == 1) return true; return false;} </script></body></html> If it makes it easier, I'd be willing to scrap the inline editing functionality and do editing and posting via modal boxes. Any help would be greatly appreciated.

    Read the article

  • .NET SerialPort DataReceived event not firing

    - by Klay
    I have a WPF test app for evaluating event-based serial port communication (vs. polling the serial port). The problem is that the DataReceived event doesn't seem to be firing at all. I have a very basic WPF form with a TextBox for user input, a TextBlock for output, and a button to write the input to the serial port. Here's the code: public partial class Window1 : Window { SerialPort port; public Window1() { InitializeComponent(); port = new SerialPort("COM2", 9600, Parity.None, 8, StopBits.One); port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived); port.Open(); } void port_DataReceived(object sender, SerialDataReceivedEventArgs e) { Debug.Print("receiving!"); string data = port.ReadExisting(); Debug.Print(data); outputText.Text = data; } private void Button_Click(object sender, RoutedEventArgs e) { Debug.Print("sending: " + inputText.Text); port.WriteLine(inputText.Text); } } Now, here are the complicating factors: The laptop I'm working on has no serial ports, so I'm using a piece of software called Virtual Serial Port Emulator to setup a COM2. VSPE has worked admirably in the past, and it's not clear why it would only malfunction with .NET's SerialPort class, but I mention it just in case. When I hit the button on my form to send the data, my Hyperterminal window (connected on COM2) shows that the data is getting through. Yes, I disconnect Hyperterminal when I want to test my form's ability to read the port. I've tried opening the port before wiring up the event. No change. I've read through another post here where someone else is having a similar problem. None of that info has helped me in this case. EDIT: Here's the console version (modified from http://mark.michaelis.net/Blog/TheBasicsOfSystemIOPortsSerialPort.aspx): class Program { static SerialPort port; static void Main(string[] args) { port = new SerialPort("COM2", 9600, Parity.None, 8, StopBits.One); port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived); port.Open(); string text; do { text = Console.ReadLine(); port.Write(text + "\r\n"); } while (text.ToLower() != "q"); } public static void port_DataReceived(object sender, SerialDataReceivedEventArgs args) { string text = port.ReadExisting(); Console.WriteLine("received: " + text); } } This should eliminate any concern that it's a Threading issue (I think). This doesn't work either. Again, Hyperterminal reports the data sent through the port, but the console app doesn't seem to fire the DataReceived event. EDIT #2: I realized that I had two separate apps that should both send and receive from the serial port, so I decided to try running them simultaneously... If I type into the console app, the WPF app DataReceived event fires, with the expected threading error (which I know how to deal with). If I type into the WPF app, the console app DataReceived event fires, and it echoes the data. I'm guessing the issue is somewhere in my use of the VSPE software, which is set up to treat one serial port as both input and output. And through some weirdness of the SerialPort class, one instance of a serial port can't be both the sender and receiver. Anyway, I think it's solved.
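    For the WPF version, the follow-up threading error is the expected one: DataReceived fires on a background thread, so assigning outputText.Text there throws. A minimal sketch of the handler with the UI update marshalled back to the dispatcher (assuming .NET 3.5 or later for the lambda):

        void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // Read on the worker thread that raised the event...
            string data = port.ReadExisting();
            Debug.Print("receiving: " + data);

            // ...then hand the UI update to the dispatcher (UI) thread.
            Dispatcher.BeginInvoke((Action)(() => outputText.Text = data));
        }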

    Read the article

  • Apache DS fails to list users

    - by CuriousMind
    Apache DS fails to list the users. The wrapper log (INFO | jvm 1 | 2012/03/28 15:54:04) records this error when the search runs:

        java.lang.Error: ERR_546 CRITICAL: page header magic for block 59 not OK 0
            at jdbm.recman.PageHeader.<init>(PageHeader.java:95)
            at jdbm.recman.PageHeader.getView(PageHeader.java:124)
            at jdbm.recman.PageManager.getNext(PageManager.java:234)
            at jdbm.recman.PageCursor.next(PageCursor.java:104)
            at jdbm.recman.PhysicalRowIdManager.fetch(PhysicalRowIdManager.java:158)
            at jdbm.recman.BaseRecordManager.fetch(BaseRecordManager.java:324)
            at jdbm.recman.CacheRecordManager.fetch(CacheRecordManager.java:262)
            at jdbm.btree.BPage.loadBPage(BPage.java:899)
            at jdbm.btree.BPage.childBPage(BPage.java:890)
            at jdbm.btree.BPage.find(BPage.java:284)
            at jdbm.btree.BPage.find(BPage.java:285)
            at jdbm.btree.BTree.find(BTree.java:408)
            at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmTable.get(JdbmTable.java:395)
            at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmMasterTable.get(JdbmMasterTable.java:155)
            at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmStore.lookup(JdbmStore.java:1332)
            at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmStore.lookup(JdbmStore.java:70)
            at org.apache.directory.server.xdbm.search.impl.EqualityEvaluator.evaluate(EqualityEvaluator.java:126)
            at org.apache.directory.server.xdbm.search.impl.AndCursor.matches(AndCursor.java:234)
            at org.apache.directory.server.xdbm.search.impl.AndCursor.next(AndCursor.java:143)
            at org.apache.directory.server.xdbm.search.impl.AndCursor.next(AndCursor.java:139)
            at org.apache.directory.server.core.partition.impl.btree.ServerEntryCursorAdaptor.next(ServerEntryCursorAdaptor.java:178)
            at org.apache.directory.server.core.filtering.BaseEntryFilteringCursor.next(BaseEntryFilteringCursor.java:499)
            at org.apache.directory.server.ldap.handlers.SearchHandler.readResults(SearchHandler.java:314)
            at org.apache.directory.server.ldap.handlers.SearchHandler.doSimpleSearch(SearchHandler.java:749)
            at org.apache.directory.server.ldap.handlers.SearchHandler.handleIgnoringReferrals(SearchHandler.java:978)
            at org.apache.directory.server.ldap.handlers.SearchHandler.handleIgnoringReferrals(SearchHandler.java:78)
            at org.apache.directory.server.ldap.handlers.ReferralAwareRequestHandler.handle(ReferralAwareRequestHandler.java:83)
            at org.apache.directory.server.ldap.handlers.ReferralAwareRequestHandler.handle(ReferralAwareRequestHandler.java:57)
            at org.apache.directory.server.ldap.handlers.LdapRequestHandler.handleMessage(LdapRequestHandler.java:208)
            at org.apache.directory.server.ldap.handlers.LdapRequestHandler.handleMessage(LdapRequestHandler.java:58)
            at org.apache.mina.handler.demux.DemuxingIoHandler.messageReceived(DemuxingIoHandler.java:232)
            at org.apache.directory.server.ldap.LdapProtocolHandler.messageReceived(LdapProtocolHandler.java:193)
            at org.apache.mina.core.filterchain.DefaultIoFilterChain$TailFilter.messageReceived(DefaultIoFilterChain.java:713)
            at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
            at org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:46)
            at org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:793)
            at org.apache.mina.core.filterchain.IoFilterEvent.fire(IoFilterEvent.java:71)
            at org.apache.mina.core.session.IoEvent.run(IoEvent.java:63)
            at org.apache.mina.filter.executor.UnorderedThreadPoolExecutor$Worker.runTask(UnorderedThreadPoolExecutor.java:480)
            at org.apache.mina.filter.executor.UnorderedThreadPoolExecutor$Worker.run(UnorderedThreadPoolExecutor.java:434)
            at java.lang.Thread.run(Thread.java:619)

    followed by:

        [15:54:04] WARN [org.apache.directory.server.ldap.LdapProtocolHandler] - Null LdapSession given to cleanUpSession.
        [15:55:20] WARN [org.apache.directory.server.ldap.LdapProtocolHandler] - Unexpected exception forcing session to close: sending disconnect notice to client.

    Read the article

  • facebook iframe stream.Publish cannot close dialog or skip

    - by fooyee
    am pulling my hair over this :( spent 10 hrs but nothing came out I read this thread http://forum.developers.facebook.com/viewtopic.php?pid=198128 but it didn't help much. I'm running a local dev App Engine server ( localhost:8080 ) iframe app so I have a couple of problems. 1) on safari 4.0.4, the publish story dialog comes up nicely with all images/data/action_links. upon posting a story (or skipping), the dialog goes blank and wouldn't close. 2) I tested the same code on firefox 3.5.8, the dialog comes up with all images/data/action_links, but then the whole thing freezes. Clicking anywhere on the dialog doesn't help at all. If i'm patient enough and click "publish", I have to wait for abt 10 seconds before the dialog says "story is published". then it freezes. (clicking on skip doesn't make a difference). btw, there is no "button clicking effect" : ie: the buttons don't look like they "sink down" upon clicking. I checked the firefox memory using the command "top" on the terminal, it all seems okay, no spike in CPU processes ( i could open other firefox tabs and work on them) My futile attempts at solving the problems... 1) so i thought, hmm could this be because of local dev (localhost) problem? I uploaded the code to the production server, the same thing happens. 2) I tried an older firefox (3.1) and the same problem persisted ( the freezing ) 3) I noticed that i kind of used 2 different FB features ( Connect and XFBML). The Connect Feature I used in the PostStory function. The XFBML feature I used before the tag. So I thought, hmm ... I tried replacing the FB_RequireFeatures["Connect"] feature with FB_RequireFeatures["XFBML"]. nothing changed. I still can't close the story dialog. 4) Is there a possibility that I didn't connect to xd_receiver.htm properly? my xd_receiver.htm is stored in my folder /media/fbconnect in my app.yaml handler: - url: /fbconnect static_dir: media/fbconnect so i thought a connection has to be established with xd_receiver.htm. any way I can test that? here're all the codes: <script type="text/javascript"> //post story function function PostStory() { //init facebook FB_RequireFeatures(["Connect"], function() { FB.Facebook.init('my_app_key', "/fbconnect/xd_receiver.htm"); FB.ensureInit(function() { var message = 'the message'; var attachment = { 'name': 'a simple app to send gifts', 'href': 'http://apps.facebook.com/my_app_name', 'caption': '{*actor*} sent u something', 'description': 'some description', "media": [{ "type": "image", "src": "http://bit.ly/105QYr", "href": "http://bit.ly/105QYr"}] }; //action links can only be seen AFTER the feed is published var action_links = [{ 'text': 'Send him/her a gift back!', 'href': 'http://somelink.com'}]; FB.Connect.streamPublish(message, attachment, action_links, null, "Share the gift with your friends", callback, false, null); }); }); function callback(post_id, exception) { //alert('Wall Post Complete'); } } </script> just before the end of the /body tag, i have this: <script type="text/javascript"> function callFBInit() { FB_RequireFeatures( ["XFBML"], function(){ FB.Facebook.init("my_app_key", "/fbconnect/xd_receiver.htm"); } ); } callFBInit(); btw, my xd_receiver.htm contains: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns=? http://www.w3.org/1999/xhtml? > <head> <title>cross-domain receiver page</title> </head> <body> <script src=?http://static.ak.facebook.com/js/api_lib/v0.4/xdcommreceiver.debug.js? type=? text/javascript? 
></script> </body> </html> hope you guys can help out. thx
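    One thing visible in the snippet itself: the attribute quotes in xd_receiver.htm have come through as "smart" quotes (rendered above as ?), and non-ASCII quotes around src and type prevent the cross-domain receiver script from loading, which in turn leaves Connect dialogs unable to close in iframe apps. A corrected receiver page, keeping the same script URL from the snippet, would be the first thing to try:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>cross-domain receiver page</title>
        </head>
        <body>
            <!-- plain ASCII double quotes around every attribute value -->
            <script src="http://static.ak.facebook.com/js/api_lib/v0.4/xdcommreceiver.debug.js"
                    type="text/javascript"></script>
        </body>
        </html>

    As for testing the connection: browsing directly to /fbconnect/xd_receiver.htm and confirming it is served (not a 404 under the App Engine static handler) is a quick way to rule out the path.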

    Read the article

  • How to prevent jQuery FancyBox from closing immediately after submit?

    - by Dimitri
    Hi! I'm loading an inline registration form in a FancyBox from jQuery. However after submitting the form, the box immediately closes while there is some feedback that I want to show the user in the FancyBox itself. This feedback is generated on the server side and is printed in the FancyBox. How can I make the box only closing when their is no feedback anymore? I was thinking about using ajax to just refresh the FancyBox itself and not the whole page after refreshing. But I just can't figure out how this ajax $.ajax({type, cache, url, data, success}); works... Also it seems like there's no reaction from the 'submit bind' in the javascript. I hope someone can help me with this problem. I paste my code below. If any questions, plz ask.. Thx in advance! This is the javascript: <script type="text/javascript"> $(document).ready(function() { $("#various1").fancybox({ 'transitionIn' : 'none', 'transitionOut' : 'none', 'scrolling' : 'no', 'titleShow' : false, 'onClosed' : function() { $("#registration_error").hide(); } }); }); $("#registration_form").bind("submit", function() { if ($("#registration_error").val() != "Registration succeeded!") { $("#registration_error").show(); $.fancybox.resize(); return false; } $.fancybox.showActivity(); $.ajax({ type : "POST", cache : false, url : "/data/login.php", data : $(this).serializeArray(), success : function(data) { $.fancybox(data); } }); return false; }); This is the inline form that I show in the FancyBox: <div style="display: none;"> <div id="registration" style="width:227px;height:250px;overflow:auto;padding:7px;"> <?php echo "<p id=\"registration_error\">".$feed."</p>"; ?> <form id="registration_form" action="<?php echo $_SERVER['PHP_SELF'] ?>" method="post"> <p> <label for="username">Username: </label> <input type="text" id="login_name" name="username" size="30" /> </p> <p> <label for="password">Password: </label> <input type="password" id="pass" name="pw" size="30" /> </p> <p> <label for="repeat_password">Repeat password: </label> <input type="password" id="rep_pass" name="rep_pw" size="30" /> </p> <p> <input type="submit" value="Register" name="register" id="reg" /> </p> <p> <em></em> </p> </form> </div> </div>
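    As a starting point, two small things stand out in the script: the submit handler is attached outside $(document).ready, and #registration_error is a paragraph, so .text() rather than .val() is what holds the feedback. A rough sketch under those assumptions, keeping the box open and redrawing the server's feedback inside it:

        $(document).ready(function () {
            $("#registration_form").bind("submit", function () {
                $.fancybox.showActivity();
                $.ajax({
                    type: "POST",
                    cache: false,
                    url: "/data/login.php",            // server side prints the feedback text
                    data: $(this).serializeArray(),
                    success: function (data) {
                        // Show the returned feedback inside the open FancyBox.
                        $("#registration_error").text(data).show();
                        $.fancybox.resize();
                    }
                });
                return false;   // always block the native submit so the page never reloads
            });
        });

    Only when the returned feedback reports a successful registration would you call $.fancybox.close() from inside the success callback.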

    Read the article

  • Android ProgressDialog Progress Bar doing things in the right order

    - by FauxReal
    I just about got this, but I have a small problem in the order of things going off. Specifically, in my thread() I am setting up an array that is used by a Spinner. Problem is the Spinner is all set and done basically before my thread() is finished, so it sets itself up with a null array. How do I associate the spinners ArrayAdapter with an array that is being loaded by another thread? I've cut the code down to what I think is necessary to understand the problem, but just let me know if more is needed. The problem occurs whether or not refreshData() is called. Along the same lines, sometimes I want to call loadData() from the menu. Directly following loadData() if I try to fire a toast on the next line this causes a forceclose, which is also because of how I'm implementing ProgressDialog. THANK YOU FOR LOOKING public class CMSHome extends Activity { private static List<String> pmList = new ArrayList<String>(); // Instantiate helpers PMListHelper plh = new PMListHelper(); ProjectObjectHelper poc = new ProjectObjectHelper(); // These objects hold lists and methods for dealing with them private Employees employees; private Projects projects; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); // Loads data from filesystem, or webservice if necessary loadData(); // Capture spinner and associate pmList with it through ArrayAdapter spinner = (Spinner) findViewById(R.id.spinner); ArrayAdapter<String> adapter = new ArrayAdapter<String>( this, android.R.layout.simple_spinner_item, pmList); adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); spinner.setAdapter(adapter); //---the button is wired to an event handler--- Button btn1 = (Button)findViewById(R.id.btnGetProjects); btn1.setOnClickListener(btnListAllProjectsListener); spinner.setOnItemSelectedListener(new MyOnItemSelectedListener()); } private void loadData() { final ProgressDialog pd = ProgressDialog.show(this, "Please wait", "Loading Data...", true, false); new Thread(new Runnable(){ public void run(){ employees = plh.deserializeEmployeeData(); projects = poc.deserializeProjectData(); // Check to see if data actually loaded, if not then refresh if ((employees == null) || (projects == null)) { refreshData(); } // Load up pmList for spinner control pmList = employees.getPMList(); pd.dismiss(); } }).start(); } private void refreshData() { // Refresh data for Projects projects = poc.refreshData(); poc.saveProjectData(mCtx, projects); // Refresh data for PMList employees = plh.refreshData(); plh.savePMData(mCtx, employees); } } <---- EDIT ----- I tried changing loadData() to the following after Jims suggestion. Not sure if I did this right, still doesn't work: private void loadData() { final ProgressDialog pd = ProgressDialog.show(this, "Please wait", "Loading Data...", true, false); new Thread(new Runnable(){ public void run(){ employees = plh.deserializeEmployeeData(); projects = poc.deserializeProjectData(); // Check to see if data actually loaded, if not return false if ((employees == null) || (projects == null)) { refreshData(); } pd.dismiss(); runOnUiThread(new Runnable() { public void run(){ // Load up pmList for spinner control pmList = employees.getPMList(); } }); } }).start(); }
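    One way to keep the ordering straight is to create the adapter only after the background load has filled pmList, inside the runOnUiThread block, rather than in onCreate. A sketch along those lines (assuming spinner is a field assigned in onCreate, as in the code above):

        private void loadData() {
            final ProgressDialog pd = ProgressDialog.show(this, "Please wait",
                    "Loading Data...", true, false);
            new Thread(new Runnable() {
                public void run() {
                    employees = plh.deserializeEmployeeData();
                    projects = poc.deserializeProjectData();
                    if ((employees == null) || (projects == null)) {
                        refreshData();
                    }
                    pmList = employees.getPMList();
                    runOnUiThread(new Runnable() {
                        public void run() {
                            // pmList is populated now, so the adapter sees real data.
                            ArrayAdapter<String> adapter = new ArrayAdapter<String>(
                                    CMSHome.this, android.R.layout.simple_spinner_item, pmList);
                            adapter.setDropDownViewResource(
                                    android.R.layout.simple_spinner_dropdown_item);
                            spinner.setAdapter(adapter);
                            pd.dismiss();
                        }
                    });
                }
            }).start();
        }

    The same pattern covers the menu case: a toast fired on the line after loadData() runs before the thread has finished, so it belongs inside that runOnUiThread block (or in an AsyncTask's onPostExecute) instead.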

    Read the article

  • ModalPopupExtender and validation problems

    - by Malachi
    The problem I am facing is that when there is validation on a page and I am trying to display a model pop-up, the pop-up is not getting displayed. And by using fire-bug I have noticed that an error is being thrown. The button that is used to display the pop-up has cause validation set to false so I am stuck as to what is causing the error. I have created a sample page to isolate the problem that I am having, any help would be greatly appreciated. The Error function () {Array.remove(Page_ValidationSummaries, document.getElementById("ValidationSummary1"));}(function () {var fn = function () {AjaxControlToolkit.ModalPopupBehavior.invokeViaServer("mpeSelectClient", true);Sys.Application.remove_load(fn);};Sys.Application.add_load(fn);}) is not a function http://localhost:1131/WebForm1.aspx Line 136 ASP <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="CLIck10.WebForm1" %> <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxToolkit" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <div> <asp:Button ID="btnPush" runat="server" Text="Push" CausesValidation="false" onclick="btnPush_Click" /> <asp:TextBox ID="txtVal" runat="server" /> <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server" ControlToValidate="txtVal" ErrorMessage="RequiredFieldValidator" /> <asp:ValidationSummary ID="ValidationSummary1" runat="server" /> <asp:Panel ID="pnlSelectClient" Style="display: none" CssClass="box" runat="server"> <asp:UpdatePanel ID="upnlSelectClient" runat="server"> <ContentTemplate> <asp:Button ID="btnOK" runat="server" UseSubmitBehavior="true" Text="OK" CausesValidation="false" OnClick="btnOK_Click" /> <asp:Button ID="btnCancel" runat="server" Text="Cancel" CausesValidation="false" OnClick="btnCancel_Click" /> </ContentTemplate> </asp:UpdatePanel> </asp:Panel> <input id="popupDummy" runat="server" style="display:none" /> <ajaxToolkit:ModalPopupExtender ID="mpeSelectClient" runat="server" TargetControlID="popupDummy" PopupControlID="pnlSelectClient" OkControlID="popupDummy" BackgroundCssClass="modalBackground" CancelControlID="btnCancel" DropShadow="true" /> </div> </form> Code Behind using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; namespace CLIck10 { public partial class WebForm1 : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected void btnOK_Click(object sender, EventArgs e) { mpeSelectClient.Hide(); } protected void btnCancel_Click(object sender, EventArgs e) { mpeSelectClient.Hide(); } protected void btnPush_Click(object sender, EventArgs e) { mpeSelectClient.Show(); } } }
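    Judging from the error text, the ValidationSummary clean-up function and the extender's invokeViaServer start-up script are being emitted back to back and end up invoking one another. One workaround that is often reported for this collision is to scope the validators and the summary into an explicit ValidationGroup; a sketch of that change only (the group name is an arbitrary choice, and whether it is enough can depend on the toolkit version):

        <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server"
            ControlToValidate="txtVal" ErrorMessage="RequiredFieldValidator"
            ValidationGroup="MainInput" />
        <asp:ValidationSummary ID="ValidationSummary1" runat="server"
            ValidationGroup="MainInput" />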

    Read the article

  • When Logged out, one blog shows - logged in, all blogs show

    - by dgPehrson
    I've coded this Wordpress site but I'm finding difficulties with the blog area. Only one blog shows in the blog section but with I am logged in as an admin, I can see all the blogs on the page. I need it to show for everyone, whether they are logged in as a user or just a visitor to the website. I have attempted to see if it was a wordpress issue by checking the 'Settings Reading' settings and they are set just fine, showing to be 10 posts per page.. It could be something wrong with the loop. I have the blog pulling from the index.php. http://www.ilovepennycakes.com/category/blog/ Here is the direct link to the blog not showing in the feed. http://www.ilovepennycakes.com/thanksgiving-thoughts/ The code is as follows: <?php get_header(); ?> <!-- Article Loop --> <article> <?php if ( have_posts() ) : ?> <?php while ( have_posts() ) : the_post(); ?> <div class="news-top"></div> <div class="news"> <div id="post-<?php the_ID(); ?>" <?php post_class(); ?>> <div class="post-header"> <h1 class="meander"><a href="<?php the_permalink(); ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_title(); ?></a></h1> <p class="likes m500"><?php comments_popup_link( '0', '1', '%' ); ?></p> <div class="clear"></div> </div><!--end post header--> <!--div class="entry clear"--> <div class="blog-content m500"> <?php if ( function_exists( 'add_theme_support' ) ) the_post_thumbnail(); ?> <?php the_content(); ?> <?php wp_link_pages(); ?> </div> <!--/div--><!--end entry--> <p class="date M500">Posted <?php the_time( 'j M Y' ); ?></p> <p class="M500"><?php edit_post_link( __( 'Edit', 'pennycakes' ), '<span>', '</span>' ); ?></p> <!--end post footer--> </div><!--end post--> </div> <div class="news-bottom"></div> <?php endwhile; /* rewind or continue if all posts have been fetched */ ?> </div><!--end navigation--> <div class="navigation index"> <div class="alignleft"><?php next_posts_link( 'Older Entries' ); ?></div> <div class="alignright"><?php previous_posts_link( 'Newer Entries' ); ?></div> <?php else : ?> <?php endif; ?> </article> <!-- //Article Loop --> Any help will be appreciated.
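    Since the template prints every post the main loop returns, a post that only an admin can see usually comes down to its status rather than the loop: future-dated (scheduled), draft, pending and private posts are all hidden from logged-out visitors. A quick diagnostic sketch in PHP, using the slug from the URL above; drop it into any template temporarily and remove it afterwards:

        <?php
        // Hypothetical one-off check: what status does the missing post actually have?
        $missing = get_page_by_path( 'thanksgiving-thoughts', OBJECT, 'post' );
        echo $missing ? esc_html( $missing->post_status ) : 'post not found';
        // Anything other than 'publish' (e.g. 'future', 'draft', 'private')
        // explains why only logged-in users see it.
        ?>

    If the status comes back as future, the publish date is ahead of the server clock; correcting the date (or publishing the post again) makes it appear in the feed for everyone.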

    Read the article

< Previous Page | 165 166 167 168 169 170 171 172 173 174 175 176  | Next Page >