Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • OpenWorld: Our (Road) Maps are Looking Good!

    - by Tony Berk
    Wow, only one (or two) days down at Oracle OpenWorld! Are you on overload yet? I'm still trying to figure out how to be in 3 sessions at the same time... I guess everyone needs to prioritize! There was a lot to see in Monday's sessions, especially some great forward-looking roadmap sessions. In case you aren't here or you decided to go to other sessions, this is my quick summary of what I could capture from a couple of the roadmaps:

    In the Fusion CRM Strategy and Roadmap session, Anthony Lye provided an overview of the Fusion CRM strategy including the key design principles of 3 E's: Easy, Effective and Efficient. After an overview of how Oracle has deployed Fusion CRM internally to 25,000 users worldwide, Anthony discussed the features coming in the next release, the releases in the next 12 months and beyond. I can't detail too much since you haven't read Oracle's Safe Harbor statement, but check out Fusion Tap and look for new features and added functionality for sales prediction, marketing, social and integration with a number of the key Customer Experience products.

    In the Oracle RightNow CX Cloud Service Vision and Roadmap session, Chris Hamilton presented the focus areas for the RightNow product. As a result of the large increase in development resources after the acquisition, the RightNow CX team is planning a lot of enhancements to the functionality, infrastructure and integrations. As a key piece of the Oracle Customer Experience (CX) strategy, RightNow will be integrated with Oracle Social Network, Oracle Commerce (ATG and Endeca), Oracle Knowledge, Oracle Policy Automation and, of course, further integration with Fusion Sales and Marketing. Look forward to seeing more on the Virtual Assistant, Smart Interaction Hub and Mobility.

    In addition to the roadmaps, I was looking forward to hearing from Oracle CRM customers. So, I sat in on two great Siebel customer panels:

    The Maximizing User Adoption Rates for Siebel Sales and Siebel Partner Relationship Management panel consisted of speakers from CSL Behring, McKesson and Intuit. It was great to get an overview of implementations for both B2B and B2C companies, and to hear that all of these companies have more than 1,000 sales users (Intuit has 4,000) and how the 360 degree view of the customer in Siebel is helping these customers improve their customers' experience (CX). They are all great examples of centralized implementations which have standardized processes across the globe and across business units.

    Waste Management, Farmers Insurance and the US Citizenship & Immigration Services presented in the Driving Great Customer Experiences with Siebel Service Applications session. Talk about serving large customer bases! Is it possible that Farmers with only 10 million households is the smallest of these 3? All of them provided great examples of how they are improving the customer experience (CX), including 60-70% improvements in efficiency, reducing the number of applications the customer service reps (CSRs) need to use from 10 to 1 (Waste Management), and context-aware call transfers to avoid the caller explaining their issue 3 times (USCIS).

    So that's my wrap up of only 4 sessions from Monday. In between sessions, I stopped by the Oracle DEMOgrounds and CRM Pavilion to visit with a group of great partners and see the products and partner integrations in action. Don't miss a recap of Mark Hurd's Keynote. I can't believe there were another 40+ sessions covering CRM, Fusion, Cloud, etc. that I missed today!
Anyone else see any great sessions?

    Read the article

  • Observing flow control idle time in TCP

    - by user12820842
    Previously I described how to observe congestion control strategies during transmission, and here I talked about TCP's sliding window approach for handling flow control on the receive side. A neat trick would now be to put the pieces together and ask the following question - how often is TCP transmission blocked by congestion control (send-side flow control) versus a zero-sized send window (which is the receiver saying it cannot process any more data)? So in effect we are asking whether the size of the receive window of the peer or the congestion control strategy may be sub-optimal. The result of such a problem would be that we have TCP data that we could be transmitting but we are not, potentially affecting throughput.

    So flow control is in effect in two cases: when the congestion window is less than or equal to the number of bytes outstanding on the connection, which we can derive from args[3]->tcps_snxt - args[3]->tcps_suna, i.e. the difference between the next sequence number to send and the lowest unacknowledged sequence number; and when the window in the TCP segment received is advertised as 0. We time from these events until we send new data (i.e. until args[4]->tcp_seq >= the snxt value recorded when the window closed). Here's the script:

        #!/usr/sbin/dtrace -s

        #pragma D option quiet

        tcp:::send
        / (args[3]->tcps_snxt - args[3]->tcps_suna) >= args[3]->tcps_cwnd /
        {
                cwndclosed[args[1]->cs_cid] = timestamp;
                cwndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
                @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / cwndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= cwndsnxt[args[1]->cs_cid] /
        {
                @meantimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] =
                    avg(timestamp - cwndclosed[args[1]->cs_cid]);
                @stddevtimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] =
                    stddev(timestamp - cwndclosed[args[1]->cs_cid]);
                @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
                cwndclosed[args[1]->cs_cid] = 0;
                cwndsnxt[args[1]->cs_cid] = 0;
        }

        tcp:::receive
        / args[4]->tcp_window == 0 &&
          (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                swndclosed[args[1]->cs_cid] = timestamp;
                swndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
                @numclosed["swnd", args[2]->ip_saddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / swndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= swndsnxt[args[1]->cs_cid] /
        {
                @meantimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] =
                    avg(timestamp - swndclosed[args[1]->cs_cid]);
                @stddevtimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] =
                    stddev(timestamp - swndclosed[args[1]->cs_cid]);
                swndclosed[args[1]->cs_cid] = 0;
                swndsnxt[args[1]->cs_cid] = 0;
        }

        END
        {
                printf("%-6s %-20s %-8s %-25s %-8s %-8s\n", "Window", "Remote host", "Port",
                    "TCP Avg WndClosed(ns)", "StdDev", "Num");
                printa("%-6s %-20s %-8d %@-25d %@-8d %@-8d\n", @meantimeclosed,
                    @stddevtimeclosed, @numclosed);
        }

    So this script will show us whether the peer's receive window size is preventing flow ("swnd" events) or whether congestion control is limiting flow ("cwnd" events). As an example I traced on a server with a large file transfer in progress via a webserver and with an active ssh connection running "find / -depth -print". Here is the output:

        ^C
        Window  Remote host          Port     TCP Avg WndClosed(ns)     StdDev     Num
        cwnd    10.175.96.92         80       86064329                  77311705   125
        cwnd    10.175.96.92         22       122068522                 151039669  81

    So we see in this case the congestion window closes 125 times for port 80 connections and 81 times for ssh. The average time the window is closed is 0.086 sec for port 80 and 0.12 sec for port 22.
So if you wish to change the congestion control algorithm in Oracle Solaris 11, a useful step may be to see if congestion really is an issue on your network. Scripts like the one posted above can help assess this, but it's worth reiterating that if congestion control is occurring, that's not necessarily a problem that needs fixing. Recall that congestion control is about controlling flow to prevent large-scale drops, so looking at congestion events in isolation doesn't tell us the whole story. For example, are we seeing more congestion events with one control algorithm, but more drops/retransmission with another? As always, it's best to start with measures of throughput and latency before arriving at a specific hypothesis such as "my congestion control algorithm is sub-optimal".

    Read the article

  • Fast Data - Big Data's achilles heel

    - by thegreeneman
    At OOW 2013 in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.

    Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves 3 key pieces and their integration: Oracle Event Processing, Oracle Coherence, Oracle NoSQL Database. All three of these technologies address a high throughput, low latency data management requirement.

    Oracle Event Processing enables continuous queries to filter the Big Data fire hose, enables intelligent chained events to real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time so that SQL statements can look for triggers on events like breach of a weighted moving average on a real-time data stream.

    Oracle Coherence is a distributed grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing.

    The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent low latency reads. For example, when large sensor networks are generating data that needs to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra low latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis. NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture, and a NoSQL cluster can be deployed easily to provide essential services in industry-specific Fast Data solutions - for example, a location-based personalization service that highlights the salient features of these Fast Data enabling technologies working together.

    The question then becomes how do these things work together to deliver an end-to-end Fast Data solution. The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together. You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database.

    On the stream processing side, it is similar to the Coherence case. As your sliding windows get larger, holding all the data in the stream can become difficult and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real-time loop while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.

    Read the article

  • Wireless connection works but the internet is too slow to use in Ubuntu 11.04

    - by Garrin
    The internet is so slow as to be unusable. And I'm not being picky. Even after minutes I can't get my Google home page to load. I tried installing a package through apt-get and was getting rates between 0 and a few hundred bytes/s. That's bytes, not kilobytes! Mostly 0 however (no exaggeration, it spends large amounts of time stalled). And I would go to a speed test web site of some kind but I can't since nothing will load.

    Briefly put, the laptop I am using was connected to two wireless networks while using Ubuntu 11.04 without any issues before this. It was also connected to a wired network without any issues. It dual boots Windows 7, which has never had any issues, not even with the current wireless network. Just to be clear, on the current wi-fi network, Windows 7 encounters no issues (speedtest.net puts the network speed at 1mb/s) but my network connection in Ubuntu 11.04 is so slow as to literally be unusable.

    I am unfamiliar with the router except for the fact that it boasts a Rogers logo (that's a large ISP/cable provider in Canada for those not familiar with the land of igloos and polar bears). I am far from the router and some desktop widget I use tells me the signal strength is at 58% (it seems fairly reliable and this would appear to match up with the filled bars in the network icon). I should also mention I'm just renting a room in this house so I'm not the network administrator and while I can access the 192.168.0.1 router page, the password wasn't set to 'password' so it's not much use to me.

    Here are a bunch of commands I ran which don't tell me a whole lot but I thought might be more instructive to the wise around here:

    lspci (just showing my network card):

        05:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01)

    This one is self explanatory:

        PING www.googele.com (216.65.41.185) 56(84) bytes of data.
        64 bytes from nnw.net (216.65.41.185): icmp_req=1 ttl=51 time=267 ms
        64 bytes from nnw.net (216.65.41.185): icmp_req=2 ttl=51 time=190 ms
        64 bytes from nnw.net (216.65.41.185): icmp_req=3 ttl=51 time=212 ms
        64 bytes from nnw.net (216.65.41.185): icmp_req=4 ttl=51 time=207 ms
        64 bytes from nnw.net (216.65.41.185): icmp_req=5 ttl=51 time=220 ms

        --- www.googele.com ping statistics ---
        5 packets transmitted, 5 received, 0% packet loss, time 4003ms
        rtt min/avg/max/mdev = 190.079/219.699/267.963/26.121 ms

    ifconfig:

        eth0      Link encap:Ethernet  HWaddr 20:6a:8a:02:20:da
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:42

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:960 (960.0 B)  TX bytes:960 (960.0 B)

        wlan0     Link encap:Ethernet  HWaddr 20:7c:8f:05:c6:bf
                  inet addr:192.168.0.16  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::227c:8fff:fe05:c6bf/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:982 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:658 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:497250 (497.2 KB)  TX bytes:95076 (95.0 KB)

    Thank you

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
    Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are:

    • Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year to the financial close, filing, and reporting processes.
    • Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations.
    • Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results.
    • Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process.
    • Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis-related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility.
    • Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value.

    The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which, in extreme circumstances, can impact a company's corporate image and share value. The good news is that businesses realize that these problems persist and 86 percent of companies are likely to make a significant investment during the next five years to address these issues.

    While they should invest, it is critical that they direct investments correctly to address the key issues this research identified:

    • Improving data integrity
    • Optimizing processes
    • Integrating the extended financial close process

    By addressing these issues and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations.

    To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92

    To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • A programmer who doesn't get to program - where to turn? [closed]

    - by Just an Anon
    I'm in my mid 20's, and have been working as a full time programmer / developer for the last ~6 years, with several years of part-time freelancing before this, and three straight years of freelancing in the middle of this short career. I work mostly with PHP and the Drupal framework. By and large, I focus on programming custom pieces of functionality; these, of course, vary greatly from project to project. I've got years of solid experience with OOP (have done some Java & C# years ago, too) including intensive experience with front-end development, and even some design work. I've led small teams (2-4 people) of developers. And of course, given the large amount of freelancing, I've got decent project- & client-management skills.

    My problem is staying motivated at any place of employment. In the time mentioned I've worked (full-time) at six local companies. The longest I've stayed at any company was just over a year. I find that I'll get hired and be very excited and motivated for the first few months, but the work quickly gets "stale." By that I mean that the interesting components (ie. the programming) get done, and the rest of the work turns into boring cleanup (move a button, add text, change colours, add a field). I don't get challenged, and I don't feel like I'm learning anything new. This happens time and time again, and I always end up leaving for either a new opportunity, or to freelance. I'm wondering if perhaps I've painted myself into a corner with the rather niche work market (although with very high demand and good compensation) and need to explore other career choices. Another possibility is that I may be choosing the wrong places of employment, mostly small agencies, and need to look into working for a larger, more established firm.

    I find programming, writing code, and architecting solutions very rewarding. When I'm working on an interesting problem I lose all sense of time and 14-16 hours can fly by like minutes. I get the same exciting feeling when I'm doing high-level planning of a complex system, breaking up the work and figuring out how everything will tie in together. I absolutely hate doing small, "stupid" changes that pose no challenge, yet seem to make up more and more of my work. I want to find a workplace where I will get to work on interesting tasks, be challenged, and improve in all areas of product development. This may be a programming job, management, architecture of desktop apps, or maybe managing a taco stand on a beach in Mexico - I don't know, and I need some advice and real-world feedback. What are some job areas worth exploring? The requirements are fairly simple:

    • working with computers
    • interacting with others
    • challenging
    • decent pay (I'm making just short of 90k / year with a month of vacation & some benefits, and would like to stay in this range, but am willing to take a temporary cut in pay for a more interesting position)

    Any advice would be much appreciated!

    Read the article

  • export data from WCF Service to excel

    - by Dave
    I need to provide an export-to-Excel feature for a large amount of data returned from a WCF web service. The code to load the datalist is as below:

        List<resultSet> r = myObject.ReturnResultSet(myWebRequestUrl); //call to WCF service
        myDataList.DataSource = r;
        myDataList.DataBind();

    I am using the Response object to do the job:

        Response.Clear();
        Response.Buffer = true;
        Response.ContentType = "application/vnd.ms-excel";
        Response.AddHeader("Content-Disposition", "attachment; filename=MyExcel.xls");
        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        HtmlTextWriter tw = new HtmlTextWriter(sw);
        myDataList.RenderControl(tw);
        Response.Write(sb.ToString());
        Response.End();

    The problem is that the WCF service times out for a large amount of data (about 5000 rows) and the result set is null. When I debug the service, I can see the window for saving/opening the Excel sheet appear before the service returns the result, and hence the Excel sheet is always empty. Please help me figure this out.
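
    One thing worth checking, sketched below under stated assumptions: if it is the WCF call itself that times out, the client-side binding can be given a longer send/receive timeout and a larger message size before the call is made. This is only a minimal sketch - the contract, proxy and URLs are placeholders, not the actual service described above.

        // Minimal sketch: raise client-side timeouts and message size for a slow, large WCF response.
        // IResultService and the URLs are assumed names for illustration only.
        using System;
        using System.Collections.Generic;
        using System.ServiceModel;

        [ServiceContract]
        public interface IResultService
        {
            [OperationContract]
            List<string> ReturnResultSet(string requestUrl);
        }

        public static class WcfClientTimeoutSketch
        {
            public static List<string> FetchLargeResultSet()
            {
                var binding = new BasicHttpBinding
                {
                    SendTimeout = TimeSpan.FromMinutes(5),       // give the server-side query time to finish
                    ReceiveTimeout = TimeSpan.FromMinutes(5),
                    MaxReceivedMessageSize = 10 * 1024 * 1024    // ~10 MB for a few thousand rows
                };
                var address = new EndpointAddress("http://example.com/ResultService.svc"); // placeholder

                var factory = new ChannelFactory<IResultService>(binding, address);
                IResultService proxy = factory.CreateChannel();
                return proxy.ReturnResultSet("http://example.com/request");                // placeholder
            }
        }

    The server-side binding (and any readerQuotas) may need matching increases; the values above are arbitrary starting points rather than recommendations.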

    Read the article

  • Abcpdf throwing System.ExecutionEngineException

    - by Tom Tresansky
    I have the binary for several pdf files stored in a collection of Byte arrays. My goal is to concatenate them into a single .pdf file using abcpdf, then stream that newly created file to the Response object on a page of an ASP.Net website. Had been doing it like this:

        BEGIN LOOP
        ...
        'Create a new Doc
        Dim doc As Doc = New Doc
        'Read the binary of the current PDF
        doc.Read(bytes)
        'Append to the master merged PDF doc
        _mergedPDFDoc.Append(Doc)
        END LOOP

    Which was working fine 95% of the time. Every now and then however, creating a new Doc object would throw a System.ExecutionEngineException and crash the CLR. It didn't seem to be related to a large number of pdfs (sometimes it would happen with only 2), or with large-sized pdfs. It seemed almost completely random. This is a known bug in abcpdf, described (not very well) here: Item 6.24. I came across a helpful SO post which suggested using a Using block for the abcpdf Doc object. So now I'm doing this:

        Using doc As New Doc
            'Read the binary of the current PDF
            doc.Read(bytes)
            'Append to the master merged PDF doc
            _mergedPDFDoc.Append(doc)
        End Using

    And I haven't seen the problem occur again yet, and have been pounding on a test version as best as I can to get it to. Has anyone had any similar experience with this error? Did this fix it?
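
    For anyone porting the same pattern to C#, here is a minimal equivalent sketch. It assumes only what the VB code above already shows - that abcpdf's Doc type exposes Read(byte[]) and Append(Doc) and is disposable (which is what Using/using relies on); the namespace import is an assumption that depends on the installed abcpdf version.

        // C# sketch of the same fix: dispose each Doc deterministically with a using block.
        using System.Collections.Generic;
        using WebSupergoo.ABCpdf8;   // assumed namespace; adjust to the abcpdf version in use

        public static class PdfMergeSketch
        {
            public static Doc Merge(IEnumerable<byte[]> pdfBinaries)
            {
                var merged = new Doc();
                foreach (byte[] bytes in pdfBinaries)
                {
                    using (var doc = new Doc())   // released even if Read/Append throws
                    {
                        doc.Read(bytes);          // read the binary of the current PDF
                        merged.Append(doc);       // append to the master merged PDF
                    }
                }
                return merged;
            }
        }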

    Read the article

  • managing images in an iphone/ipad universal app

    - by taber
    Hi, I'm just curious as to what methods people are using to dynamically use larger or smaller images in their universal iPhone/iPad apps. I created a large test image and I tried scaling down (using cocos2d) by 0.46875. After viewing that in the iPhone 4.0 simulator I found the results were pretty crappy... rough pixel edges, etc. Plus, loading huge image files for iPhone users when they don't need them is pretty lame. So I guess what I will probably have to do is save out two versions of every sprite... large (for the iPad side) and small (for iPhone/iPod Touch), then detect the user's device and spit out the proper sprite like so:

        NSString *deviceType = [UIDevice currentDevice].model;
        CCSprite *test;
        if ([deviceType isEqualToString:@"iPad"]) {
            test = [CCSprite spriteWithFile:@"testBigHuge.png"];
        } else {
            test = [CCSprite spriteWithFile:@"testRegularMcTiny.png"];
        }
        [self addChild: test];

    How are you guys doing this? I'd rather avoid sprinkling all of my code with if statements like this. I want to also avoid using .xib files since it's an OpenGL-based app. Thanks!

    Read the article

  • GIT repository layout for server with multiple projects

    - by Paul Alexander
    One of the things I like about the way I have Subversion set up is that I can have a single main repository with multiple projects. When I want to work on a project I can check out just that project. Like this:

        \main
            \ProductA
            \ProductB
            \Shared

    then

        svn checkout http://.../main/ProductA

    As a new user to git I want to explore a bit of best practice in the field before committing to a specific workflow. From what I've read so far, git stores everything in a single .git folder at the root of the project tree. So I could do one of two things:

    1. Set up a separate project for each Product.
    2. Set up a single massive project and store products in sub folders.

    There are dependencies between the products, so the single massive project seems appropriate. We'll be using a server where all the developers can share their code. I've already got this working over SSH & HTTP and that part I love. However, the repositories in SVN are already many GB in size so dragging around the entire repository on each machine seems like a bad idea - especially since we're billed for excessive network bandwidth. I'd imagine that the Linux kernel project repositories are equally large so there must be a proper way of handling this with Git but I just haven't figured it out yet. Are there any guidelines or best practices for working with very large multi-project repositories?

    Read the article

  • Creating a Custom EventAggregator Class

    - by Phil
    One thing I noticed about Microsoft's Composite Application Guidance is that the EventAggregator class is a little inflexible. I say that because getting a particular event from the EventAggregator involves identifying the event by its type, like so:

        _eventAggregator.GetEvent<MyEventType>();

    But what if you want different events of the same type? For example, if a developer wants to add a new event to his application of type CompositePresentationEvent<int>, he would have to create a new class that derives from CompositePresentationEvent<int> in a shared library somewhere just to keep it separate from any other events of the same type. In a large application, that's a lot of little two-line classes like the following:

        public class StuffHappenedEvent : CompositePresentationEvent<int> {}
        public class OtherStuffHappenedEvent : CompositePresentationEvent<int> {}

    I don't really like that approach. It almost feels dirty to me, partially because I don't want a million two-line event classes sitting around in my infrastructure dll. What if I designed my own simple event aggregator that identified events by an event ID rather than the event type? For example, I could have an enum such as the following:

        public enum EventId { StuffHappened, OtherStuffHappened, YetMoreStuffHappened }

    And my new event aggregator class could use the EventId enum (or a more general object) as a key to identify events in the following way:

        _eventAggregator.GetEvent<CompositePresentationEvent<int>>(EventId.StuffHappened);
        _eventAggregator.GetEvent<CompositePresentationEvent<int>>(EventId.OtherStuffHappened);

    Is this good design for the long run? One thing I noticed is that this reduces type safety. In a large application, is this really as important of a concern as I think it is? Do you think there could be a better alternative design?
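
    For reference, a self-contained sketch of the ID-keyed aggregator described above. It is illustrative only: IdEvent<T> below is a stand-in for Prism's CompositePresentationEvent<T> (not the real class), and the aggregator is not thread-safe. It mainly shows where the type-safety trade-off lands - the cast inside GetEvent.

        // Illustrative ID-keyed event aggregator; all names are invented for the example.
        using System;
        using System.Collections.Generic;

        public enum EventId { StuffHappened, OtherStuffHappened, YetMoreStuffHappened }

        // Stand-in for CompositePresentationEvent<TPayload>.
        public class IdEvent<TPayload>
        {
            private event Action<TPayload> Published;
            public void Subscribe(Action<TPayload> handler) { Published += handler; }
            public void Publish(TPayload payload) { var h = Published; if (h != null) h(payload); }
        }

        public class IdEventAggregator
        {
            private readonly Dictionary<EventId, object> _events = new Dictionary<EventId, object>();

            public TEvent GetEvent<TEvent>(EventId id) where TEvent : class, new()
            {
                object evt;
                if (!_events.TryGetValue(id, out evt))
                {
                    evt = new TEvent();
                    _events[id] = evt;
                }
                // The compile-time guarantee is gone: asking for the wrong TEvent for
                // this id surfaces as an InvalidCastException at runtime.
                return (TEvent)evt;
            }
        }

        // Usage:
        //   var agg = new IdEventAggregator();
        //   agg.GetEvent<IdEvent<int>>(EventId.StuffHappened).Subscribe(n => Console.WriteLine(n));
        //   agg.GetEvent<IdEvent<int>>(EventId.StuffHappened).Publish(42);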

    Read the article

  • Invalid length for a Base-64 char array.

    - by Code Sherpa
    As the title says, I am getting: Invalid length for a Base-64 char array. I have read about this problem on here and it seems that the suggestion is to store ViewState in SQL if it is large. I am using a wizard with a good deal of data collection so chances are my ViewState is large. But, before I turn to the "store-in-DB" solution, maybe somebody can take a look and tell me if I have other options? I construct the email for delivery using the below method:

        public void SendEmailAddressVerificationEmail(string userName, string to)
        {
            string msg = "Please click on the link below or paste it into a browser to verify your email account.<BR><BR>" +
                "<a href=\"" + _configuration.RootURL + "Accounts/VerifyEmail.aspx?a=" + userName.Encrypt("verify") + "\">" +
                _configuration.RootURL + "Accounts/VerifyEmail.aspx?a=" + userName.Encrypt("verify") + "</a>";

            SendEmail(to, "", "", "Account created! Email verification required.", msg);
        }

    The Encrypt method looks like this:

        public static string Encrypt(string clearText, string Password)
        {
            byte[] clearBytes = System.Text.Encoding.Unicode.GetBytes(clearText);
            PasswordDeriveBytes pdb = new PasswordDeriveBytes(Password,
                new byte[] { 0x49, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65, 0x64, 0x76, 0x65, 0x64, 0x65, 0x76 });
            byte[] encryptedData = Encrypt(clearBytes, pdb.GetBytes(32), pdb.GetBytes(16));
            return Convert.ToBase64String(encryptedData);
        }

    On the receiving end, the VerifyEmail.aspx.cs page has the line:

        string username = Cryptography.Decrypt(_webContext.UserNameToVerify, "verify");

    And the decrypt method looks like:

        public static string Decrypt(string cipherText, string password)
        {
            // THE ERROR IS THROWN HERE!!
            byte[] cipherBytes = Convert.FromBase64String(cipherText);

    Can this error be remedied with a code fix or must I store ViewState in the database? Thanks in advance.
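
    One possibility worth ruling out before moving ViewState to the database, sketched below with placeholder names: Base64 output can contain '+', '/' and '=', and an unencoded '+' in a query string comes back as a space, which corrupts the token and can leave it with a length that Convert.FromBase64String rejects. URL-encoding the encrypted value when the link is built usually avoids this.

        // Hedged sketch; the class and method names are placeholders, not the code above.
        using System.Web;   // requires a reference to System.Web (HttpUtility)

        public static class VerificationLinkSketch
        {
            // Encode once, at the point the Base64 token goes into the URL.
            // Request.QueryString["a"] on the receiving page then yields the original token.
            public static string BuildLink(string rootUrl, string encryptedUserName)
            {
                return rootUrl + "Accounts/VerifyEmail.aspx?a=" + HttpUtility.UrlEncode(encryptedUserName);
            }

            // Defensive fallback for links already sent without encoding:
            // restore any '+' that arrived as a space before Base64-decoding.
            public static string RepairToken(string tokenFromQueryString)
            {
                return tokenFromQueryString.Replace(' ', '+');
            }
        }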

    Read the article

  • Scaling a CBitmap - what am I doing wrong?

    - by Smashery
    I've written the following code, which attempts to take a 32x32 bitmap (loaded through MFC's Resource system) and turn it into a 16x16 bitmap, so they can be used as the big and small CImageLists for a CListCtrl. However, when I open the CListCtrl, all the icons are black (in both small and large view). Before I started playing with resizing, everything worked perfectly in Large View. What am I doing wrong?

        // Create the CImageLists
        if (!m_imageListL.Create(32, 32, ILC_COLOR24, 1, 1))
        {
            throw std::exception("Failed to create CImageList");
        }
        if (!m_imageListS.Create(16, 16, ILC_COLOR24, 1, 1))
        {
            throw std::exception("Failed to create CImageList");
        }

        // Fill the CImageLists with items loaded from ResourceIDs
        int i = 0;
        for (std::vector<UINT>::iterator it = vec.begin(); it != vec.end(); it++, i++)
        {
            CBitmap* bmpBig = new CBitmap();
            bmpBig->LoadBitmap(*it);

            CDC bigDC;
            bigDC.CreateCompatibleDC(m_itemList.GetDC());
            bigDC.SelectObject(bmpBig);

            CBitmap* bmpSmall = new CBitmap();
            bmpSmall->CreateBitmap(16, 16, 1, 24, 0);
            CDC smallDC;
            smallDC.CreateCompatibleDC(&bigDC);
            smallDC.SelectObject(bmpSmall);

            smallDC.StretchBlt(0, 0, 32, 32, &bigDC, 0, 0, 16, 16, SRCCOPY);

            m_imageListL.Add(bmpBig, RGB(0,0,0));
            m_imageListS.Add(bmpSmall, RGB(0,0,0));
        }
        m_itemList.SetImageList(&m_imageListS, LVSIL_SMALL);
        m_itemList.SetImageList(&m_imageListL, LVSIL_NORMAL);

    Read the article

  • Problem using Flash Components in multiple SWC files

    - by Ken Dunnington
    [Edit: Short version - how do you properly handle namespace collisions in SWC files if one SWC has fewer classes from that namespace than another?] I have a rather large Flash application which I'm building in Flash Builder (because coding/debugging in the Flash IDE is... not good) and I've got a ton of external SWC files which I'm linking in to my application. This has worked well so far - the file size is on the large side, but it's a lot simpler than loading in SWFs, especially since I am extending most of the classes in each SWC and adding custom code that way (it's a very design-heavy app.) The problem I'm having is when I have Flash Components, like ComboBox or TextInput, in more than one SWC. Whichever SWC was compiled last will work fine, but the others will fail with errors like the following:

        TypeError: Error #1034: Type Coercion failed: cannot convert flash.display::MovieClip@1f21adc1 to fl.controls.TextInput.
            at flash.display::Sprite/constructChildren()
            at flash.display::Sprite()
            at flash.display::MovieClip()
            at com.company.design.login::LoginForm()
            at com.company.view::Login()[/Users/ken/Workspace/src/com/company/view/Login.as:22]
            at com.company.view::Main/showLogin()[/Users/ken/Workspace/src/com/company/view/Main.as:209]
            at flash.events::EventDispatcher/dispatchEventFunction()
            at flash.events::EventDispatcher/dispatchEvent()
            at com.company.view::Navigation/handleUIClick()[/Users/ken/Workspace/src/com/company/view/Navigation.as:88]

    I've been researching components, ComponentShim, etc. but I'm running up against a brick wall. I thought it might be the fact that some of the components had their skins modified in the source FLA, so I tried replacing them with the default skins, but that didn't seem to help. How can I ensure that I have the components imported and available to all my classes, yet still be able to skin them and include them in my various FLAs? (I am never creating new instances of them, they are all laid out by my designer.)

    Read the article

  • WCF - Increase ReaderQuoatas on REST service

    - by Christo Fur
    I have a WCF REST service which accepts a JSON string. One of the parameters is a large string of numbers. This causes the following error, which is visible by tracing and using SVC Trace Viewer:

        There was an error deserializing the object of type CarConfiguration. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.

    Now I've read all sorts of articles advising how to rectify this. All of them recommend increasing various config settings on the server and client, e.g.:

    http://stackoverflow.com/questions/65452/error-serializing-string-in-webservice-call
    http://bloggingabout.net/blogs/ramon/archive/2008/08/20/wcf-and-large-messages.aspx
    http://social.msdn.microsoft.com/Forums/en/wcf/thread/f570823a-8581-45ba-8b0b-ab0c7d7fcae1

    So my config file looks like this:

        <webHttpBinding>
          <binding name="webBinding"
                   maxBufferSize="5242880"
                   maxReceivedMessageSize="5242880">
            <readerQuotas maxDepth="5242880"
                          maxStringContentLength="5242880"
                          maxArrayLength="5242880"
                          maxBytesPerRead="5242880"
                          maxNameTableCharCount="5242880"/>
          </binding>
        </webHttpBinding>
        ...
        <endpoint address="/" binding="webHttpBinding" bindingConfiguration="webBinding"

    My problem is that I can change this on the server, but there are no WCF config settings on the client as it's a REST service and I'm just making an HTTP request using the WebClient object. Any ideas?
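
    Since the quota is enforced on the server side, one way to take the config file out of the equation is to set the same limits in code and confirm the behaviour changes; if it doesn't, the endpoint may not be picking up the "webBinding" configuration at all. The sketch below is illustrative only - it assumes a self-hosted endpoint and invents the service and contract names.

        // Hedged sketch: the same reader quotas applied programmatically to a WebHttpBinding.
        // CarConfigService / ICarConfigService and the URL are placeholders.
        using System;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface ICarConfigService
        {
            [OperationContract]
            [WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json)]
            void Save(string carConfigurationJson);
        }

        public class CarConfigService : ICarConfigService
        {
            public void Save(string carConfigurationJson) { /* placeholder */ }
        }

        public static class SelfHostQuotaSketch
        {
            public static void Main()
            {
                var binding = new WebHttpBinding
                {
                    MaxBufferSize = 5242880,
                    MaxReceivedMessageSize = 5242880
                };
                binding.ReaderQuotas.MaxStringContentLength = 5242880;
                binding.ReaderQuotas.MaxArrayLength = 5242880;
                binding.ReaderQuotas.MaxBytesPerRead = 5242880;
                binding.ReaderQuotas.MaxNameTableCharCount = 5242880;
                binding.ReaderQuotas.MaxDepth = 5242880;

                var host = new WebServiceHost(typeof(CarConfigService), new Uri("http://localhost:8080/cars"));
                host.AddServiceEndpoint(typeof(ICarConfigService), binding, "");
                host.Open();
                Console.WriteLine("Listening; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }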

    Read the article

  • Hibernate: OutOfMemoryError persisting Blob when printing log message

    - by paul
    I have a Hibernate Entity:

        @Entity
        class Foo {
            //...
            @Lob
            public byte[] getBytes() { return bytes; }
            //....
        }

    My VM is configured with a maximum heap size of 512 MB. When I try to persist an object which has a 75 MB large object, I get an OutOfMemoryError. The names of the methods in the stack trace (StringBuilder, ByteArrayBlobType.toLoggableString, pretty.Printer.toString) suggest that hibernate is trying to write a very large log message that contains my object. Am I correct about why hibernate is using so much memory? What is the simplest way to work around this problem?

        java.lang.OutOfMemoryError: Java heap space
            at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:44)
            at java.lang.StringBuilder.<init>(StringBuilder.java:81)
            at org.hibernate.type.ByteArrayBlobType.toString(ByteArrayBlobType.java:117)
            at org.hibernate.type.ByteArrayBlobType.toLoggableString(ByteArrayBlobType.java:127)
            at org.hibernate.pretty.Printer.toString(Printer.java:53)
            at org.hibernate.pretty.Printer.toString(Printer.java:90)
            at org.hibernate.event.def.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:97)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:26)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
            at org.jboss.seam.persistence.HibernateSessionProxy.flush(HibernateSessionProxy.java:181)

    Read the article

  • Response.TransmitFile problem

    - by geoff
    I have the following code delivering a file to users when they click on a download link. For security purposes I can't just link directly to the file, so this was set up to decode the url and transmit the file. It has been working fine for a while but recently I started having problems where the file will start downloading but there's no indication of how large the file is. Because of this, when the download should stop, it doesn't. The file is about 99mb but when I download it, the browser just keeps downloading way beyond 100mb. I don't know what it's downloading but if I don't cancel it, it doesn't stop. So, my question is, is there either an alternative to TransmitFile or a way to make sure the size of the file is sent also, so that it stops at the right time? I don't want to use WriteFile because I don't want to load the entire file into memory since it's so large. Thanks. Here is the code:

        string filename = Path.GetFileName(url);
        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.TransmitFile(context.Server.MapPath(url));
        context.Response.Flush();
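
    One approach worth trying, sketched below with an illustrative handler shape rather than the original class: send an explicit Content-Length header, taken from the file on disk, before calling TransmitFile. That keeps TransmitFile's streaming behaviour while giving the browser a definite end point.

        // Hedged sketch: add Content-Length so the client knows when the download is complete.
        using System.IO;
        using System.Web;

        public class DownloadHandlerSketch : IHttpHandler
        {
            public bool IsReusable { get { return false; } }

            public void ProcessRequest(HttpContext context)
            {
                // Placeholder for however the (decoded) virtual path is obtained;
                // in real code this value must be validated before use.
                string url = context.Request.QueryString["f"];
                string physicalPath = context.Server.MapPath(url);
                var info = new FileInfo(physicalPath);

                context.Response.Buffer = true;
                context.Response.ContentType = "application/x-rar-compressed";
                context.Response.AddHeader("Content-Disposition",
                    "attachment; filename=" + Path.GetFileName(physicalPath));
                context.Response.AddHeader("Content-Length", info.Length.ToString());
                context.Response.TransmitFile(physicalPath);
            }
        }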

    Read the article

  • AS3 "Loader" progress immediately goes to 100%

    - by Bart van Heukelom
    If you go to http://moederdagontbijtplacemat.nl/ you will see a progress bar. The application is loading a fairly large SWF from the server using the Loader class. Strangely enough, the progress bar immediately goes to 100% (but the loading still takes a while after that). The code is below, but you'll see it's basically too simple to break. It has worked when the application was on a different server, so I thought maybe the new server wasn't sending the size of the large SWF in the http headers. Firebug does show a progress bar though, so that is not the case; the information should be available. It also works when I run the loader swf locally and change the URL (new URLRequest("Placemat.swf")) to the absolute URL of Placemat.swf on the server.

        var l:Loader = new Loader();
        addChild(l);
        l.contentLoaderInfo.addEventListener(ProgressEvent.PROGRESS, function(e:ProgressEvent) {
            s.setProgress(e.bytesLoaded / e.bytesTotal);
            trace(Math.round(100 * e.bytesLoaded / e.bytesTotal), "%");
        });
        l.contentLoaderInfo.addEventListener(Event.COMPLETE, function() {
            removeChild(s);
        });
        l.load(new URLRequest("Placemat.swf"));

    Read the article

  • Validation using JAXB and Stax to marshal XML document

    - by bajafresh4life
    I have created an XML schema (foo.xsd) and used xjc to create my binding classes for JAXB. Let's say the root element is "collection" and I am writing N "document" objects, which are complex types. Because I plan to write out large XML files, I am using StAX to write out the "collection" root element, and JAXB to marshal document subtrees using Marshaller.marshal(JAXBElement, XMLEventWriter). This is the approach recommended by JAXB's unofficial user's guide: https://jaxb.dev.java.net/guide/Different_ways_of_marshalling.html#Marshalling_into_a_subtree My question is, how can I validate the XML while it's being marshalled? If I bind a schema to the JAXB marshaller (using Marshaller.setSchema()), I get validation errors because I am only marshalling a subtree (it's complaining that it's not seeing the "collection" root element). I suppose what I really want to do is bind a schema to the StAX XMLEventWriter or something like that. Any comments on this overall approach would be helpful. Basically I want to be able to use JAXB to marshal and unmarshal large XML documents without running out of memory, so if there's a better way to do this let me know.

    Read the article

  • Best Youtube embed player sizes?

    - by Xeoncross
    I use tumblr to share videos and, unfortunately, when re-posting a video to your tumblelog it uses the embed code at 400x336px. This is neither widescreen nor very large. So I'm trying to set the player to a better size and I'm finding that the youtube player really comes in many sizes. For example, when copying the embed code for an HD video I get the sizes 560x340, 640x385, and 853x505. Then when I look at the embed code for a SD video I get the sizes 425x344, 480x385, and 640x505. I was having the best luck with setting the player to 640 x 505 as it was large enough for HD and perfect for SD.

        print str_replace( array('400', '336'), array('640', '505'), $video_player_html);

    Has anyone dealt with trying to find a good standard embed size for a site? One thing that would make this a lot easier is to know whether the video was HD or not - however, that info is not provided in the embed code. I guess I am actually more interested in knowing the right ratio (or specific size) to use that works the best for youtube's mixed content.

    Read the article

  • Magento MAGMI: Product attributes (custom options) not showing up in import

    - by Rodgers and Hammertime
    When importing a CSV into Magento with the MAGMI importing tool, I am unable to import Custom Options (as in size: small/medium/large). The import manages to put in the basic products, but the Custom Options don't transfer across. By custom options I mean the fields Title, Input Type, Is Required, Sort Order, and then per option Title, Price, Price Type, SKU, Sort Order, and so on, found in the custom options menu. Even using the example CSV from the MAGMI SourceForge Wiki:

        sku,name,description,price,Size:drop_down:1
        T-Shirt1,T-Shirt,A T-Shirt,5.00,Small|Medium|Large
        T-Shirt2,T-Shirt2,Another T-Shirt,6.00,XS|S|M|L|XL

    ...it fails to import the attributes. So I'm simply using MAGMI with the supplied example data from SourceForge on a blank Magento product list, and it doesn't transfer properly. Can anyone shed any light on what might be wrong? I am using Magento ver. 1.6.1.0 if that changes anything. Thanks.

    Read the article

  • Database structure - is mySQL the right choice?

    - by Industrial
    Hi everyone, We are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products) and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters). Based on various references and sources available, we have made up lists of pros and cons of all major and well-known database patterns to solve this. After comparing these, we have come up with two final alternatives:

    EAV (Entity-attribute-value model):
    Pros: Database is used for all sorting.
    Cons: All related queries will include a number of joins between multiple tables in order to complete the collection of data.

    SLOB (Serialized LOB, also known as Facade?):
    Pros: Very flexible. Keeping the number of necessary joins low compared to an EAV design pattern. Easy to update/add/remove data from each product.
    Cons: All sorting will be done by the application instead of the database. Will use lots of performance (memory?) when big datasets are processed by a large number of users.

    Our main questions: Which pattern/structure would you use, or maybe even a different solution? Are there better databases besides MySQL available nowadays to accomplish what we want? Thanks a lot!

    Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters

    Read the article

  • Creating A Single Generic Handler For Agatha?

    - by David
    I'm using the Agatha request/response library (and StructureMap, as utilized by Agatha 1.0.5.0) for a service layer that I'm prototyping, and one thing I've noticed is the large number of handlers that need to be created. It generally makes sense that any request/response type pair would need their own handler. However, as this scales to a large enterprise environment that's going to be A LOT of handlers. What I've started doing is dividing up the enterprise domain into logical processor classes (dozens of processors instead of many hundreds or possibly eventually thousands of handlers). The convention is that each request/response type (all of which inherit from a domain base request/response pair, which inherit from Agatha's) gets exactly one function in a processor somewhere. The generic handler (which inherits from Agatha's RequestHandler) then uses reflection in the Handle method to find the method for the given TREQUEST/TRESPONSE and invoke it. If it can't find one or if it finds more than one, it returns a TRESPONSE containing an error message (messages are standardized in the domain's base response class). The goal here is to allow developers across the enterprise to just concern themselves with writing their request/response types and processor functions in the domain and not have to spend additional overhead creating handler classes which would all do exactly the same thing (pass control to a processor function). However, it seems that I still need to have defined a handler class (albeit empty, since the base handler takes care of everything) for each request/response type pair. Otherwise, the following exception is thrown when dispatching a request to the service:

        StructureMap Exception Code: 202
        No Default Instance defined for PluginFamily Agatha.ServiceLayer.IRequestHandler`1[[TSFG.Domain.DTO.Actions.HelloWorldRequest, TSFG.Domain.DTO, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], Agatha.ServiceLayer, Version=1.0.5.0, Culture=neutral, PublicKeyToken=6f21cf452a4ffa13

    Is there a way that I'm not seeing to tell StructureMap and/or Agatha to always use the base handler class for all request/response type pairs? Or maybe to use Reflection.Emit to generate empty handlers in memory at application start just to satisfy the requirement? I'm not 100% familiar with these libraries and am learning as I go along, but so far my attempts at both those possible approaches have been unsuccessful. Can anybody offer some advice on solving this, or perhaps offer another approach entirely?
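
    The registration question aside, the reflection dispatch described above is easy to sketch in isolation. The following is illustrative only - it deliberately uses invented Request/Response/processor types rather than Agatha's or StructureMap's real classes (whose APIs aren't shown here), so treat it as a sketch of the idea, not a drop-in fix.

        // Illustrative reflection-based dispatch from one generic entry point to processor methods.
        using System;
        using System.Linq;
        using System.Reflection;

        public abstract class Request { }
        public abstract class Response { public string ErrorMessage; }
        public class ErrorResponse : Response { }

        public class HelloWorldRequest : Request { public string Name; }
        public class HelloWorldResponse : Response { public string Greeting; }

        // A "processor" owns the business logic: one public method per concrete request type.
        public class GreetingProcessor
        {
            public HelloWorldResponse Handle(HelloWorldRequest request)
            {
                return new HelloWorldResponse { Greeting = "Hello, " + request.Name };
            }
        }

        public class ReflectionDispatcher
        {
            private readonly object[] _processors;
            public ReflectionDispatcher(params object[] processors) { _processors = processors; }

            public Response Dispatch(Request request)
            {
                foreach (var processor in _processors)
                {
                    // Find exactly one public method taking this concrete request type
                    // and returning some Response-derived type.
                    MethodInfo[] candidates = processor.GetType()
                        .GetMethods(BindingFlags.Public | BindingFlags.Instance)
                        .Where(m => typeof(Response).IsAssignableFrom(m.ReturnType) &&
                                    m.GetParameters().Length == 1 &&
                                    m.GetParameters()[0].ParameterType == request.GetType())
                        .ToArray();

                    if (candidates.Length > 1)
                        return new ErrorResponse { ErrorMessage = "Ambiguous processor method for " + request.GetType().Name };
                    if (candidates.Length == 1)
                        return (Response)candidates[0].Invoke(processor, new object[] { request });
                }
                return new ErrorResponse { ErrorMessage = "No processor method for " + request.GetType().Name };
            }
        }

        // Usage:
        //   var dispatcher = new ReflectionDispatcher(new GreetingProcessor());
        //   var response = dispatcher.Dispatch(new HelloWorldRequest { Name = "world" });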

    Read the article

  • problem with revalidating a jframe.

    - by John Quesie
    I have this code which should take the radio button input do a little math and display a popup. which it does fine. but then it is supposed to re validate and ask the next question. when i get to the second question, the answer always comes out as the isSelected(true) value no matter which radio button you click on. SO to be clear the first time through it works fin but when the second question comes up, it just takes the default radio button every time. public class EventHandler implements ActionListener { private Main gui; public EventHandler(Main gui){ this.gui = gui; } public void actionPerformed(ActionEvent e){ String answer = ""; double val = 1; //get current answer set String [] anArr = gui.getAnswers(gui.currentStage, gui.currentQuestion); if(e.getSource() == gui.exit){ System.exit(0); } if(e.getSource() == gui.submit){ if(gui.a1.isSelected()){ answer = anArr[0]; val = gui.getScore(1); } if(gui.a2.isSelected()){ answer = anArr[1]; val = gui.getScore(2); } if(gui.a3.isSelected()){ answer = anArr[2]; val = gui.getScore(3); } if(gui.a4.isSelected()){ answer = anArr[3];; val = gui.getScore(4); } JOptionPane.showMessageDialog(null, popupMessage(answer, val), "Your Answer", 1); //compute answer here //figure out what next question is to send gui.moveOn(); gui.setQA(gui.currentStage, gui.currentQuestion); //resets gui gui.goWest(); gui.q.revalidate(); } } public String popupMessage(String ans, double val){ //displays popup after an answer has been choosen gui.computeScore(val); String text = " You Answered " + ans + " Your score is now " + gui.yourScore ; return text; } } public class Main extends JFrame { public JLabel question; public JButton exit; public JButton submit; public JRadioButton a1; public JRadioButton a2; public JRadioButton a3; public JRadioButton a4; public ButtonGroup bg; public double yourScore = 1; public int currentQuestion = 1; public String currentStage = "startup"; JPanel q; public Main(){ setTitle("Ehtics Builder"); setLocation(400,400); setLayout(new BorderLayout(5,5)); setQA("startup", 1); goNorth(); goEast(); goWest(); goSouth(); goCenter(); pack(); setVisible(true); setDefaultCloseOperation(EXIT_ON_CLOSE); } public void goNorth(){ } public void goWest(){ q = new JPanel(); q.setLayout(new GridLayout(0,1)); q.add(question); bg.add(a1); bg.add(a2); bg.add(a3); bg.add(a4); a1.setSelected(true); q.add(a1); q.add(a2); q.add(a3); q.add(a4); add(q, BorderLayout.WEST); System.out.println(); } public void goEast(){ } public void goSouth(){ JPanel p = new JPanel(); p.setLayout(new FlowLayout(FlowLayout.CENTER)); exit = new JButton("Exit"); submit = new JButton("Submit"); p.add(exit); p.add(submit); add(p, BorderLayout.SOUTH); EventHandler myEventHandler = new EventHandler(this); exit.addActionListener(myEventHandler); submit.addActionListener(myEventHandler); } public void goCenter(){ } public static void main(String[] args) { Main open = new Main(); } public String getQuestion(String type, int num){ //reads the questions from a file String question = ""; String filename = ""; String [] ques; num = num - 1; if(type.equals("startup")){ filename = "startup.txt"; }else if(type.equals("small")){ filename = "small.txt"; }else if(type.equals("mid")){ filename = "mid.txt"; }else if(type.equals("large")){ filename = "large.txt"; }else{ question = "error"; return question; } ques = readFile(filename); for(int i = 0;i < ques.length;i++){ if(i == num){ question = ques[i]; } } return question; } public String [] getAnswers(String type, int num){ //reads the answers from a file 
String filename = ""; String temp = ""; String [] group; String [] ans; num = num - 1; if(type.equals("startup")){ filename = "startupA.txt"; }else if(type.equals("small")){ filename = "smallA.txt"; }else if(type.equals("mid")){ filename = "midA.txt"; }else if(type.equals("large")){ filename = "largeA.txt"; }else{ System.out.println("Error"); } group = readFile(filename); for(int i = 0;i < group.length;i++){ if(i == num){ temp = group[i]; } } ans = temp.split("-"); return ans; } public String [] getValues(String type, int num){ //reads the answers from a file String filename = ""; String temp = ""; String [] group; String [] vals; num = num - 1; if(type.equals("startup")){ filename = "startupV.txt"; }else if(type.equals("small")){ filename = "smallV.txt"; }else if(type.equals("mid")){ filename = "midV.txt"; }else if(type.equals("large")){ filename = "largeV.txt"; }else{ System.out.println("Error"); } group = readFile(filename); for(int i = 0;i < group.length;i++){ if(i == num){ temp = group[i]; } } vals = temp.split("-"); return vals; } public String [] readFile(String filename){ //reads the contentes of a file, for getQuestions and getAnswers String text = ""; int i = -1; FileReader in = null; File f = new File(filename); try{ in = new FileReader(f); }catch(FileNotFoundException e){ System.out.println("file does not exist"); } try{ while((i = in.read()) != -1) text += ((char)i); }catch(IOException e){ System.out.println("Error reading file"); } try{ in.close(); }catch(IOException e){ System.out.println("Error reading file"); } String [] questions = text.split(":"); return questions; } public void computeScore(double val){ //calculates you score times the value of your answer yourScore = val * yourScore; } public double getScore(int aNum){ //gets the score of an answer, stage and q number is already set in the class aNum = aNum - 1; double val = 0; double [] valArr = new double[4]; for(int i = 0;i < getValues(currentStage, currentQuestion).length;i++){ val = Double.parseDouble(getValues(currentStage, currentQuestion)[i]); valArr[i] = val; } if(aNum == 0){ val = valArr[0]; } if(aNum == 1){ val = valArr[1]; } if(aNum == 2){ val = valArr[2]; } if(aNum == 3){ val = valArr[3]; } // use current stage and questiion and trhe aNum to get the value for that answer return val; } public void nextQuestion(int num){ //sets next question to use currentQuestion = num; } public void nextStage(String sta){ // sets next stage to use currentStage = sta; } public void moveOn(){ // uses the score and current question and stage to determine wher to go next and what stage to use next nextQuestion(2); nextStage("startup"); } public void setQA(String level, int num){ String [] arr = getAnswers(level, num); question = new JLabel(getQuestion(level, num)); bg = new ButtonGroup(); a1 = new JRadioButton(arr[0]); a2 = new JRadioButton(arr[1]); a3 = new JRadioButton(arr[2]); a4 = new JRadioButton(arr[3]); } }

    Read the article

  • Flash/Flex: "Warning: filter will not render" problem

    - by davidemm
    In my flex application, I have a custom TitleWindow that pops up in modal fashion. When I resize the browser window, I get this warning:

        Warning: Filter will not render. The DisplayObject’s filtered dimensions (1286, 107374879) are too large to be drawn.

    Clearly, I have nothing set with a height of 107374879. After that, any time I mouse over anything in the Flash Player (v. 10), the CPU churns at 100%. When I close the TitleWindow, the problem subsides. Sadly, the warning doesn't seem to indicate which DisplayObject is too large to draw. I've tried attaching explicit heights/widths to the TitleWindow and the components within, but still no luck.

    [Edit] The plot thickens: I found that the problem only occurs when I set the PopUpManager's createPopUp modal parameter to "true." I don't see the behavior when modal is set to "false." It's failing while applying the graying filter to other components, which comes from being modal. Any ideas how I can track down the one object that has not been initialized but is being filtered during the modal phase? Thanks for reading.

    Read the article
