Search Results

Search found 3141 results on 126 pages for 'charles michael'.


  • Happy 3rd Birthday SilverlightCream!

    - by Dave Campbell
    Happy 3rd Birthday! Yesterday (May 16) was the 'birthday' of SilverlightCream, which started just after MIX in 2007 with a post "Interesting Silverlight posts today: Silverlight Control & Silverlight Pad". Too many good posts flying around led me to want to archive them, particularly since I was being aggregated at a new site, Silverlight.net, and I could give some of that 'reach' to the community. Saturday's post was number 862, and as of that post there were 5697 blog posts archived in the database, all tagged up and searchable at SilverlightCream.com using the search page. The search needs to be better, and that's another discussion, but it does work.

    The blog didn't begin life as the SilverlightCream blog, as is obvious from the name, but once I realized people were following it closely, I've tried to keep the signal-to-noise ratio very high. I even secured another blog for when I just want to rant about something, to keep that stuff out of this one :) If you've been around since MIX07 days you've heard all this, but after talking to some people at MIX10 I realized not everyone knows all the ways the information is presented, so I figured doing a post like this once a year probably isn't a bad idea :)

    I scrounge through an ever-growing list of blogs (right now sitting at 505) looking for good stuff. I try to spin through the list every day, but with the list growing that large, it's getting tough. I usually use it as a background task while working or watching TV. If I just sit and go through the blogs it takes about an hour. The list is long enough now that from time to time I'll only get partway through it and have 10 to 13 entries, so I'll just stop there and go on the next day... I don't like to have more than 15 in any single post. It's all pattern recognition, as in "seen that", "seen that", "that's new", etc... so if you're a blogger, look at a heading below for some comments about blogging from my perspective. When I see something new, I make sure you're not pulling a 'Mike Taulty' on me and dumping 6 or 8 new posts in one day :), and I tag the ones I want to review. If there's not a lot going on, I may just push the posts as I come across them. Some days there may be 60 posts in that 'to review' list! Some are non-Silverlight, some are essentially duplicates of others, some are demos, ads, new releases of something, session materials, etc.

    I push lots of material into a database at WynApse.com, and the "Tagged Posts" menu on the left sidebar there takes you to a tag cloud of (at this very moment) "9224 articles tagged 13915 different ways using 459 unique tags". There are links in there on Gibson guitars, jazz guitar instructional stuff, Ford F-250 links, and tons of technical and non-technical stuff I've been aggregating for about 5 years now. So when I decide to blog (or shoutout) something, I first push it into the database at WynApse.com. Then I tag it all up and push it into the database at SilverlightCream.com. Then it gets pushed to @SilverlightNews. For a little over a year now, we've been tracking unique IP hits on posts launched from either the blog post or from one of the SilverlightCream.com pages, and the posts with top hits from unique IP addresses in the last 7 days are displayed on a 'Skim' page at SilverlightCream... and that page needs work as well. The Skim page and tracking were the brainchild of my buddy Michael Washington.
    What I blog/shoutout

    After some time doing posts, I decided there were things that probably have no need to be searchable but are good information, so I post those as 'Shoutouts'. Eventually I also decided the Shoutouts should get posted to @SilverlightNews, and that's now taking place.

    Notes to bloggers

    Remember I said spinning through the Big List-o-Blogs™ is pattern recognition... that means I don't spend a lot of time on any individual blog deciding if it has new content. If you're familiar with the term 'above the fold', then you're probably ok. If I have to scroll the page to see if there's something new, or wade through some maze of menus, I'm probably going to miss new stuff. Likewise if you only show the latest post on the front page and make it a puzzle to find the rest of them, or if you make the titles and initial graphics almost identical to the previous article, I'll miss it. Another thing is name/brand recognition. Far be it from me (WynApse) to comment on someone blogging with a pseudonym, but if you want to get some recognition, you are going to want your name to be available somewhere. I can think, right off the top of my head, of a couple of good blogs where I have no idea of the individuals' real names. I can pull that off a bit because I've been around so long almost everyone knows who I am, but if you're new to the blogosphere, being able to be name-recognized is as important as getting your brand out there.

    Kick my tires

    Finally, stuff happens... I may hit the wrong key and delete your blog, or a post might slip past me and I might not realize it's new because of the naming, and never blog it. If you think I missed something, send me an email or use the submit page at SilverlightCream.com. Some bloggers have figured out that if they submit (one way or another) to me, their posts will go out next. I try to honor anyone that takes the time to submit with a quicker 'Cream posting.

    Thanks!

    Finally, thanks to everyone that contributes to the community as a whole... the blogs, the videos, and the presentations. A special thanks to everyone that reads SilverlightCream, or follows @WynApse or @SilverlightNews. Keep it all coming, and... Stay in the 'Light

    Read the article

  • My First Weeks at Red Gate

    - by Jess Nickson
    Hi, my name’s Jess, and in early September 2012 I started working at Red Gate as a Software Engineer down in The Agency (the Publishing team). This was a bit of a shock, as I didn’t think this team would have any developers! I admit I was a little worried when it was mentioned that my role was going to be different from normal dev roles within the company. However, as luck would have it, I was placed within a team that was responsible for the development and maintenance of Simple-Talk and SQL Server Central (SSC). I felt rather unprepared for this role. I hadn’t used many of the technologies involved, and of those that I had, I hadn’t looked at them for quite a while. I was, nevertheless, quite excited about this turn of events. As I had predicted, the role has been quite challenging so far. I expected that I would struggle to get my head round the large codebase already in place, having never worked on anything even a fraction of this size before. However, I was perhaps a bit naive when it came to how quickly things would move. I was required to start learning/remembering a number of different languages and technologies within time frames I would never have tried to set myself previously.

    Having said that, my first week was pretty easy. It was filled with meetings designed to get the new starters up to speed with the different departments, ideals and rules within the company. I also attended some lightning talks presented by other employees, which were pretty useful. These occur once a fortnight and normally consist of around four speakers. In my spare time, we set up the Simple-Talk codebase on my computer and I started exploring it and worked on my first feature – redirecting requests for URLs that used incorrect casing! It was also during this time that I was given my first introduction to test-driven development (TDD) with Michael, via a code kata. Although I had heard of the general ideas behind TDD, I had definitely never tried it before. Indeed, I hadn’t really done any automated testing of code before, either. The session was therefore very useful and gave me insights into some of the coding practices used in my team. Although I now understand the importance of TDD, it still seems odd in my head, and I’ve yet to master how to sensibly step up the functionality of the code a bit at a time.

    The second week was both easier and more difficult than the first. I was given a new project to work on, meaning I was no longer using the codebase already in place. My job was to take some designs, a WordPress theme, and some initial content and build a page that allowed users of the site to read the provided resources and give feedback. This feedback could include their thoughts about the resource, the topics covered and the page design itself. Although it didn’t sound the most challenging of projects when compared to fixing bugs in our current codebase, it nevertheless provided a few sneaky problems that had me stumped. I really enjoyed working on this project, as it allowed me to play around with HTML, CSS and JavaScript: all things that I like working with but rarely have a chance to use. I completed the aims for the project on time and was happy with the final outcome – though it still needs a good designer to take a look at it! I am now into my third week at Red Gate, I have temporarily been pulled off the week-two website project, and I am again back to figuring out the Simple-Talk codebase.
Monday provided me with the chance to learn a bunch of new things: system level testing, Selenium and Python. I was set the challenge of testing a bug fix dealing with the search bars in Simple-Talk. The exercise was pretty fun, although Mike did have to point me in the right direction when I started making the tests a bit too complex. The rest of the week looks set to be focussed on pair programming with Mike as we work together on a new feature. I look forward to the challenges that still face me and hope that I will be able to get up to speed quickly. *fingers crossed*
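
    For readers curious what such a test looks like, here is a minimal sketch in Python of the kind of Selenium check described above. The URL and the element name "SearchBox" are assumptions made up for the example; the post doesn't show the real page structure, and the real bug-fix tests would assert much more specific behaviour.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Minimal system-level test of a site search bar, in the spirit of the
    # exercise described above. The URL and element name are assumptions.
    driver = webdriver.Firefox()
    try:
        driver.get("https://www.simple-talk.com/")       # assumed URL
        box = driver.find_element(By.NAME, "SearchBox")  # assumed element name
        box.send_keys("BizTalk")
        box.submit()
        # Keep it simple (as Mike advised): just check that results came back.
        assert "BizTalk" in driver.page_source
    finally:
        driver.quit()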

    Read the article

  • Impressions and Reactions from Alliance 2012

    - by user739873
    Alliance 2012 has come to a conclusion. What strikes me about every Alliance conference is the amazing amount of collaboration and cooperation I see across higher education in the sharing of best practices around the entire Oracle PeopleSoft software suite, not just the student information system (Oracle’s PeopleSoft Campus Solutions). In addition to the vibrant U.S. organization, it's gratifying to see the growth in international attendance again this year, with an EMEA HEUG organizing to complement the existing groups in the Netherlands, South Africa, and the U.K. Their first meeting is planned for London in October, and I suspect they'll be surprised at the amount of interest and attendance.

    In my discussions with higher education IT and functional leadership at Alliance, there were a number of instances where concern was expressed about Oracle's commitment to higher education as an industry, primarily because of a perceived lack of innovation in the applications that Oracle develops for this market. Here I think perception and reality are far apart, and I'd like to explain why I believe this to be true. First let me start with what I think drives this perception. Predominantly it's in two areas. The first area is the user interface, both for students and faculty that interact with the system as "customers", and for those employees of the institution (faculty, staff, and sometimes students as well) that use the system in some kind of administrative role. Because the UI hasn't changed all that much from the PeopleSoft days, individuals perceive this as a dead product with little innovation, and conclude that Oracle isn't investing. The second area is the integration of the higher education suite of applications (PeopleSoft Campus Solutions) with the rest of the Oracle software assets. Whether grown organically or acquired, there is an impressive array of middleware and other software products that could be leveraged much more significantly by the higher education applications than is currently the case today. This is also perceived as lack of investment.

    Let me address these two points. First, the UI. More is being done here than ever before, and the PAG and other groups where this was discussed at Alliance 2012 were more numerous than I've seen in any past meeting. Whether it's Oracle development leveraging web services or some extremely early but very promising work leveraging the recent Endeca acquisition (see some cool examples here), there are a lot of resources aimed at this issue. There are also some amazing prototypes being developed by our UX (user experience) team that will eventually make their way into the higher education applications realm - they had an impressive setup at Alliance. Hopefully many of you that attended found this group. If not, the senior leader for that team, Jeremy Ashley, will be a significant contributor of content to our summer Industry Strategy Council meeting in Washington in June.

    In the area of integration with other elements of the Oracle stack, this is also an area of focus for the company and my team. We're making this a priority especially in the areas of identity management and security, leveraging WebCenter more effectively for content, imaging, and mobility, and driving towards the ultimate objective of WebLogic Suite as our platform for SOA, links to learning management systems (SAIP), and content. There is also much work around business intelligence centering on OBI applications.
    But at the end of the day we get enormous value from the HEUG (Higher Education User Group) and the various subgroups formed as a part of this community, which help us align and prioritize our investments, whether it's around better integration with other Oracle products or integration with partner offerings. It's one of the healthiest, most mutually beneficial relationships between customers and an education IT vendor that exists on the globe. And I can't avoid mentioning that it's this kind of relationship between higher education and the corporate IT community that can truly address the problems of efficiency and effectiveness, institutional excellence (which starts with IT) and student success. It's not (in my opinion) going to be solved through community source - cost and complexity only increase in that model, and in the end higher education doesn't get to focus on its core competencies: educating, developing, and researching. While I agree with some of what Michael A. McRobbie wrote in his EDUCAUSE Review article (Information Technology: A View from Both Sides of the President’s Desk), I take strong issue with his assertion that "the IT marketplace is just the opposite of long-term stability...." Sure, there has been healthy, creative destruction in the past 2-3 decades, but this has had the effect of, in the aggregate, benefiting education with greater efficiency, more innovation and increased stability as larger, more financially secure firms acquire and develop integrated solutions. Cole

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    - In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up or even developed yet for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source/target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    - By the time the data is visible in a UI it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have “cleaned up” your data.
    - It is harder to write BizUnit tests that clean up the data/logs after each test run.
    - What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT/UAT teams.
    - There may be licensing costs to setting up an instance of the external system.

    The stubs I like to use are generic stubs that can accept/return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and select a response message (or an error response). Once built, they can be re-used for many integration tests and from project to project (a minimal sketch of the idea appears at the end of this post). I’m not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints/services. The interface developers may ask you to send them some data to see if everything still works, or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

    Tests

    Automated “Stubbed Integration Tests” are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub, ensuring that all of the BizTalk components are configured together correctly to meet the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Michael Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com – Video 1):
    - Requirements can be easily defined using Given/When/Then
    - Requirements are close to the code, so they are easier to manage as features and scenarios
    - Requirements are defined in domain language
    - The feature files can be used as part of the documentation
    - The documentation is accurate to the build of code and can be published with a release
    - The scenarios are effective at documenting the behaviour without being excessive
    - The scenarios are maintained with the code
    - There’s an abstraction between the intention and implementation of tests, making them easier to understand
    - The requirements drive the testing

    These same tests can also be used to drive load testing as described here.

    If you don't do this...

    If you don't follow the above “Stubbed Integration Tests” approach, the developer will need to trigger the tests manually. This has the following risks:

    - Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, then it carries the following risks:

    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on those environments.
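
    The "generic stub" idea above is protocol-level rather than BizUnit-specific, so here is a minimal sketch of the concept in Python (BizUnit itself is a .NET framework; treat this purely as an illustration, not the author's implementation). The port, payloads, and field names are invented for the example: the stub captures whatever arrives for later assertions and returns whichever canned response the current test selected, which is how error paths get driven.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED_RESPONSES = {
        "ok": (200, {"status": "ACCEPTED"}),
        "error": (500, {"status": "REJECTED", "reason": "simulated failure"}),
    }
    selected_response = "ok"  # a test step would flip this to exercise error handling
    received_messages = []    # a test step would assert against this afterwards

    class StubHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            received_messages.append(json.loads(body))  # capture for validation
            status, payload = CANNED_RESPONSES[selected_response]
            reply = json.dumps(payload).encode()
            self.send_response(status)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(reply)))
            self.end_headers()
            self.wfile.write(reply)

    if __name__ == "__main__":
        # One stub per protocol; this one speaks HTTP/JSON on an assumed port.
        HTTPServer(("localhost", 8080), StubHandler).serve_forever()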

    Read the article

  • Another Marketing Conference, part two – the afternoon

    - by Roger Hart
    In my previous post I covered the morning sessions at AMC2012. Here’s the rest of the write-up. I’ve skipped Charles Nixon’s session, which was a blend of funky futurism and professional development advice, but you can see his slides here. I’ve also skipped the Google presentation, as it was a little thin on insight.

    6 – Brand ambassadors: Getting universal buy-in across the organisation, Vanessa Northam

    Slides are here. This was the strongest reinforcement of the idea that brand and campaign values need to be delivered throughout the organization if they’re going to work. Vanessa runs internal communications at e-on, and shared her experience of using internal comms to align an organization and thereby get the most out of a campaign. She views the purpose of internal comms as: “…to help leaders, to communicate the purpose and future of an organization, and support change.” This (and culture) primes front line staff, which creates customer experience and spreads brand. You ensure a whole organization knows what’s going on with both internal and external comms. If everybody is aligned and informed, if everybody can clearly articulate your brand and campaign goals, then you can turn everybody into an advocate. Alignment is a powerful tool for delivering a consistent experience and message. The pathological counter-example is the one in which a marketing message goes out, creating inbound customer contacts that front-line staff haven’t been briefed to handle. The NatWest campaign was again mentioned in this context. The good example was e-on’s cheaper tariff campaign. Building a groundswell of internal excitement, and even running an internal launch, meant everyone could contribute to a good customer experience. They found that meter readers were excited – not a group they’d considered as obvious in providing customer experience, but a group that has a lot of face-to-face contact with customers and is often asked questions it may not have been briefed to answer. Being able to communicate a simple new message made their job easier, and also let them become a sales and marketing asset to the organization.

    7 – Goodbye Internet, Hello Outernet: the rise and rise of augmented reality, Matt Mills

    I wasn’t going to write this up, because it was essentially a sales demo for Aurasma. But the technology does merit some discussion. Basically, it replaces QR codes with visual recognition, and provides a simple-looking back end for attaching content. It’s quite sexy. But here’s my beef with it: QR codes had a clear visual language – when you saw one you knew what it was and what to do with it. They were clunky, but they had the “getting started” problem solved out of the box once you knew what you were looking at. However, they fail because QR code reading isn’t native to the platform. You needed an app, which meant you needed to know to download one. Consequently, you couldn’t use QR codes with any ubiquity, or depend on them. This meant marketers, content providers, etc., never pushed them, and they remained an awkward oddity, a minority sport. Aurasma half solves problem two, and re-introduces problem one, making it potentially half as useful as a QR code. It’s free, and you can apparently build it into your own apps. Add to that the likelihood of it becoming native to the platform if it takes off, and it may have legs. I guess we’ll see.

    8 – We all need to code, Helen Mayor

    Great title – good point. If there was anybody in the room who didn’t at least know basic HTML, and if Helen’s presentation inspired them to learn, that’s fantastic. However, this was a half-hour sales pitch for a basic coding training course. Beyond advocating coding skills it contained no useful content. Marketers looking to learn code may also like to consider some of these resources:

    - Code Academy – free interactive tutorials
    - Treehouse – learn web design, web dev, or app dev
    - WebPlatform.org – tutorials and documentation for web tech

    11 – Understanding our inner creativity, Margaret Boden

    This session was the most theoretical, and probably the least actionable, of the day. It also held my attention utterly. Margaret spoke fluently, fascinatingly, without slides, on the subject of types of creativity and how they work. It was splendid. Yes, it raised a wry smile whenever she spoke of “the content of advertisements” and gave an example from 1970s TV ads, but even without the attempt to meet the conference’s theme this would have been thoroughly engaging. There are, Margaret suggested, three types of creativity:

    - Combinatorial creativity: the most common form, consisting of synthesising ideas from existing and familiar concepts and tropes.
    - Exploratory creativity: less common, this involves exploring the limits and quirks of a particular constraint or style.
    - Transformational creativity: this is uncommon, and arises from finding a way to do something that the existing rules would hold to be impossible. In essence, it involves breaking one of the constraints that exploratory creativity works within.

    Combinatorial creativity, she suggested, is particularly important for attaching favourable ideas to existing things, and as such is probably worth developing for marketing. Exploratory creativity may then come into play in something like developing and optimising an idea or campaign that now has momentum. Transformational creativity exists at the edges of this exploration. She suggested that products may often be transformational, but that marketing seemed unlikely to be, in her experience. This made me wonder about Listerine. Crucially, transformational creativity is characterised by there being some element of continuity with the strictures of previous thinking. Once it has happened, there may be a move from a revolutionary instance into an explored style. Again, from a marketing perspective, this seems to chime well with the thinking in Youngme Moon’s book, Different. Talking about the birth of Modernism in visual art, Margaret pointed out that transformational creativity has historically risked a backlash, demanding what is essentially an education of the market. This is best accomplished by referring back to the continuities with the past in order to make the new familiar.

    Thoughts

    The afternoon is harder to sum up than the morning. It felt less concrete, and was troubled by a short run of poor presentations in the middle. Mainly, I found myself wrestling with the internal comms issue. It’s one of those things that seems astonishingly obvious in hindsight, but any campaign – particularly any large one – is doomed if the people involved can’t believe in it. We’ve run things here that haven’t gone so well, of course we have; who hasn’t? I’m not going to air any laundry, but people not being informed (much less aligned) feels like a common factor. It’s tough, though. Managing and anticipating information needs across an organization of any size can’t be easy. Even the simple things, like ensuring sales and support departments know what’s in a product release and what messages go with it, are easy to botch. The thing I like about framing this as a brand and campaign advocacy problem is that it makes it likely to get addressed. Better is always sexier than less-worse. Any technical communicator who’s ever felt crowded out by a content strategist or marketing copywriter knows this – increasing revenue gets a seat at the table far more readily than reducing support costs, even if the financial impact is identical.

    So that’s it from AMC. The big thought-provokers were social buying behaviour and eliciting behaviour change, and the value of internal communications in ensuring successful campaigns and continuity of customer experience. I’ll be chewing over that for a while, and I’d definitely return next year.

    Read the article

  • Oracle OpenWorld 2012 preview: mark your calendars

    - by Eric Bezille
    Now less than a month from Oracle's major event, held as every year in San Francisco at the end of September and beginning of October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes that will be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development), to get a foretaste.

    Oracle strategy and roadmaps

    Of course, beyond the general sessions, which will give you a precise view of the strategy, for those of you who will be on site I encourage you not to miss the deep-dive sessions taking place during the week. Here is a selection:

    - "Accelerate your Business with the Oracle Hardware Advantage", with John Fowler, Monday October 1, 3:15pm-4:15pm
    - "Why Oracle Software Runs Best on Oracle Hardware", with Bradley Carlile, head of benchmarks, Monday October 1, 12:15pm-1:15pm
    - "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday October 1, 1:45pm-2:45pm
    - "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, a manager in the hardware and software integration teams, Monday October 1, 4:45pm-5:45pm
    - "Oracle’s SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday October 2, 10:15am-11:15am
    - "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flierl, head of Solaris development, Wednesday October 3, 10:15am-11:15am
    - "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday October 1, 12:15pm-1:15pm
    - "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday October 1, 3:15pm-4:15pm
    - "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday October 1, 10:45am-11:45am

    Customer experiences and testimonials

    While Oracle OpenWorld is a chance to talk directly with Oracle's development teams, it is also a chance to meet customers and experts who have deployed our technologies and can share their experience, for example:

    - "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimony from Eric Wyttynck, IT Director Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM systems, on the business as well as the project and IT benefits, Wednesday October 3, 1:15pm-2:15pm
    - "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday October 2, 11:45am-12:45pm
    - "Creating a Maximum Availability Architecture with SPARC SuperCluster", with testimony from Carte Wright, Database Engineer at CKI, Wednesday October 3, 11:45am-12:45pm
    - "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with testimony from Stephen Schleiger, Manager Systems Engineering at Navis, Monday October 1, 1:45pm-2:45pm
    - "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who took part in an implementation for a major banking customer, Monday October 1, 4:45pm-5:45pm
    - "Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday October 2, 1:15pm-2:15pm
    - "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with testimony from Alan Powers, CTO of CSC, Thursday October 4, 12:45pm-1:45pm
    - "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimony from Syed Qadri, Lead DBA, and Michael Arnold, System Architect, US Cellular, Tuesday October 2, 10:15am-11:15am
    - "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimony from AT&T and Liberty Global, among others, Tuesday October 2, 11:45am-12:45pm
    - "Data Warehouse and Big Data Customers’ View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday October 1, 4:45pm-5:45pm
    - "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", the story from Oracle's own internal IT of the transformation and migration of our entire storage infrastructure, Tuesday October 2, 1:15pm-2:15pm

    Meet the user groups and the Oracle development teams

    If you plan to arrive early enough, you can also meet the user groups from Sunday onwards, or the Oracle development teams any evening, around topics such as:

    - "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday September 30, 2:30pm-3:30pm
    - "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman of Rittman Mead Consulting Ltd, Sunday September 30, 10:30am-11:30am
    - "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday September 30, 1:15pm-2:00pm
    - "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director of Viscosity, Sunday September 30, 9:00am-10:00am
    - "Oracle’s Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday October 1, 6:15pm-7:00pm

    Test and evaluate the solutions

    And finally, you can even try the technologies yourself at the Oracle DemoGrounds (1133 Moscone South for the Oracle systems, OS, and virtualization side) and in the hands-on labs, such as:

    - "Deploying an IaaS Environment with Oracle VM", Tuesday October 2, 10:15am-11:15am
    - "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday October 2, 11:45am-12:45pm (it is strongly recommended to take the previous hands-on lab before doing this one)
    - "x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday October 3, 5:00pm-6:00pm
    - "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday October 3, 1:15pm-2:15pm
    - "Oracle’s Pillar Axiom 600 Storage System: Power and Ease", Monday October 1, 12:15pm-1:15pm
    - "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday October 1, 1:45pm-2:45pm
    - "Managing Storage in the Cloud", Tuesday October 2, 5:00pm-6:00pm
    - "Learn How to Write MapReduce on Oracle’s Big Data Platform", Monday October 1, 12:15pm-1:15pm
    - "Oracle Big Data Analytics and R", Tuesday October 2, 1:15pm-2:15pm
    - "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday October 1, 10:45am-11:45am
    - "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday October 1, 4:45pm-5:45pm
    - "Virtualizing Your Oracle Solaris 11 Environment", Tuesday October 2, 1:15pm-2:15pm
    - "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday October 3, 3:30pm-4:30pm

    In conclusion: a very full week in prospect, one that will let you cover every topic at the heart of your concerns, from strategy to implementation. A week you should prepare for, tailoring your agenda from the more than 2,000 sessions, of which I have given you only an extract; you can find the complete catalog online.

    Read the article

  • Scripting Language Sessions at Oracle OpenWorld and MySQL Connect, 2012

    - by cj
    This post highlights some great scripting language sessions coming up at the Oracle OpenWorld and MySQL Connect conferences. These events are happening in San Francisco from the end of September. You can search for other interesting conference sessions in the Content Catalog. Also check out what is happening at JavaOne in that event's Content Catalog (I haven't included sessions from it in this post). To find the timeslots and locations of each session, click their respective link and check the "Session Schedule" box on the top right.

    GEN8431 - General Session: What’s New in Oracle Database Application Development
    This general session takes a look at what’s been new in the last year in Oracle Database application development tools using the latest generation of database technology. Topics range from Oracle SQL Developer and Oracle Application Express to Java and PHP. (Thomas Kyte - Architect, Oracle)

    BOF9858 - Meet the Developers of Database Access Services (OCI, ODBC, DRCP, PHP, Python)
    This session is your opportunity to meet in person the Oracle developers who have built Oracle Database access tools and products such as the Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), and Open Database Connectivity (ODBC) drivers; Transparent Application Failover (TAF); Oracle Database Instant Client; Database Resident Connection Pool (DRCP); Oracle Net Services; and so on. The team also works with those who develop the PHP, Ruby, Python, and Perl adapters for Oracle Database. Come discuss with them the features you like, your pains, and new product enhancements in the latest database technology.

    CON8506 - Syndication and Consolidation: Oracle Database Driver for MySQL Applications
    This technical session presents a new Oracle Database driver that enables you to run MySQL applications (written in PHP, Perl, C, C++, and so on) against Oracle Database with almost no code change. Use cases for such a driver include application syndication such as interoperability across relational database management systems, application migration, and database consolidation. In addition, the session covers enhancements in database technology that enable and simplify the migration of third-party databases and applications to, and consolidation with, Oracle Database. Attend this session to learn more and see a live demo. (Srinath Krishnaswamy - Director, Software Development, Oracle. Kuassi Mensah - Director Product Management, Oracle. Mohammad Lari - Principal Technical Staff, Oracle)

    CON9167 - Current State of PHP and MySQL
    Together, PHP and MySQL power large parts of the Web. The developers of both technologies continue to enhance their software to ensure that developers can be satisfied despite all their changing and growing needs. This session presents an overview of changes in PHP 5.4, which was released earlier this year, and shows you various new MySQL-related features available for PHP, from transparent client-side caching to direct support for scaling and high-availability needs. (Johannes Schlüter - Software Developer, Oracle)

    CON8983 - Sharding with PHP and MySQL
    In deploying MySQL, scale-out techniques can be used to scale out reads, but for scaling out writes, other techniques have to be used. To distribute writes over a cluster, it is necessary to shard the database and store the shards on separate servers. This session provides a brief introduction to traditional MySQL scale-out techniques in preparation for a discussion on the different sharding techniques that can be used with MySQL server and how they can be implemented with PHP. You will learn about static and dynamic sharding schemes, their advantages and drawbacks, techniques for locating and moving shards, and techniques for resharding. (Mats Kindahl - Senior Principal Software Developer, Oracle)

    CON9268 - Developing Python Applications with MySQL Utilities and MySQL Connector/Python
    This session discusses MySQL Connector/Python and the MySQL Utilities component of MySQL Workbench and explains how to write MySQL applications in Python. It includes in-depth explanations of the features of MySQL Connector/Python and the MySQL Utilities library, along with example code to illustrate the concepts. Those interested in learning how to expand or build their own utilities and connector features will benefit from the tips and tricks from the experts. This session also provides an opportunity to meet directly with the engineers and provide feedback on your issues and priorities. You can learn what exists today and influence future developments. (Geert Vanderkelen - Software Developer, Oracle)

    BOF9141 - MySQL Utilities and MySQL Connector/Python: Python Developers, Unite!
    Come to this lively discussion of the MySQL Utilities component of MySQL Workbench and MySQL Connector/Python. It includes in-depth explanations of the features and dives into the code for those interested in learning how to expand or build their own utilities and connector features. This is an audience-driven session, so put on your best Python shirt and let’s talk about MySQL Utilities and MySQL Connector/Python. (Geert Vanderkelen - Software Developer, Oracle. Charles Bell - Senior Software Developer, Oracle)

    CON3290 - Integrating Oracle Database with a Social Network
    Facebook, Flickr, YouTube, Google Maps. There are many social network sites, each with their own APIs for sharing data with them. Most developers do not realize that Oracle Database has base tools for communicating with these sites, enabling all manner of information, including multimedia, to be passed back and forth between the sites. This technical presentation goes through the methods in PL/SQL for connecting to, and then sending and retrieving, all types of data between these sites. (Marcelle Kratochvil - CTO, Piction)

    CON3291 - Storing and Tuning Unstructured Data and Multimedia in Oracle Database
    Database administrators need to learn new skills and techniques when the decision is made in their organization to let Oracle Database manage its unstructured data. They will face new scalability challenges. A single row in a table can become larger than a whole database. This presentation covers the techniques a DBA needs for managing the large volume of data in a standard Oracle Database instance. (Marcelle Kratochvil - CTO, Piction)

    CON3292 - Using PHP, Perl, Visual Basic, Ruby, and Python for Multimedia in Oracle Database
    These five programming languages are just some of the most popular ones in use at the moment in the marketplace. This presentation details how you can use them to access and retrieve multimedia from Oracle Database. It covers programming techniques and methods for achieving faster development against Oracle Database. (Marcelle Kratochvil - CTO, Piction)

    UGF5181 - Building Real-World Oracle DBA Tools in Perl
    Perl is not normally associated with building mission-critical application or DBA tools. Learn why Perl could be a good choice for building your next killer DBA app. This session draws on real-world experience of building DBA tools in Perl, showing the framework and architecture needed to deal with portability, efficiency, and maintainability. Topics include Perl frameworks; which Comprehensive Perl Archive Network (CPAN) modules are good to use; Perl and CPAN module licensing; Perl and Oracle connectivity; compiling and deploying your app; and an example of what is possible with Perl. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)

    CON3153 - Perl: A DBA’s and Developer’s Best (Forgotten) Friend
    This session reintroduces Perl as a language of choice for many solutions for DBAs and developers. Discover what makes Perl so successful and why it is so versatile in our day-to-day lives. Perl can automate all those manual tasks and is truly platform-independent. Perl may not be in the limelight the way other languages are, but it is a remarkable language, it is still very current with ongoing development, and it has amazing online resources. Learn what makes Perl so great (including CPAN), get an introduction to Perl language syntax, find out what you can use Perl for, hear how Oracle uses Perl, discover the best way to learn Perl, and take away a small Perl project challenge. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)

    CON10332 - Oracle RightNow CX Cloud Service’s Connect PHP API: Intro, What’s New, and Roadmap
    Connect PHP is a public API that enables developers to build solutions with the Oracle RightNow CX Cloud Service platform. This API is used primarily by developers working within the Oracle RightNow Customer Portal Cloud Service framework who are looking to gain access to data and services hosted by the Oracle RightNow CX Cloud Service platform through a backward-compatible API. Connect for PHP leverages the same data model and services as the Connect Web Services for SOAP API. Come to this session to get an introduction and learn what’s new and what’s coming up. (Mark Rhoads - Senior Principal Applications Engineer, Oracle. Mark Ericson - Sr. Principal Product Manager, Oracle)

    CON10330 - Oracle RightNow CX Cloud Service APIs and Frameworks Overview
    Oracle RightNow CX Cloud Service APIs are available in the following areas: desktop UI, Web services, customer portal, PHP, and knowledge. These frameworks provide access to Oracle RightNow CX Cloud Service’s Connect Common Object Model and custom objects. This session provides a broad overview of capabilities in all these areas. (Mark Ericson - Sr. Principal Product Manager, Oracle)
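
    The sharding techniques in CON8983 are presented with PHP, and CON9268/BOF9141 cover the Python connector; the routing idea common to both is small enough to sketch here. Below is a minimal static-sharding example using MySQL Connector/Python. The shard map, credentials, and schema are invented for illustration, and a real deployment would also need the shard-location, shard-move, and resharding techniques the abstract mentions.

    import mysql.connector

    SHARDS = [  # one entry per MySQL server holding a slice of the data (assumed hosts)
        {"host": "shard0.example.com", "database": "app_shard0"},
        {"host": "shard1.example.com", "database": "app_shard1"},
    ]

    def shard_for(user_id):
        # Static scheme: route by hash (modulo) of the sharding key.
        return SHARDS[user_id % len(SHARDS)]

    def fetch_orders(user_id):
        cfg = shard_for(user_id)
        conn = mysql.connector.connect(user="app", password="secret", **cfg)
        try:
            cur = conn.cursor()
            # All rows for one user live on one shard, so this stays a local query.
            cur.execute("SELECT id, total FROM orders WHERE user_id = %s", (user_id,))
            return cur.fetchall()
        finally:
            conn.close()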

    Read the article

  • «Oracle E-Business Suite: ERP DBA …» (Chinese-language book announcement; title mis-encoded)

    - by user10821858
    [The body of this post is in Chinese and its text was mis-encoded beyond recovery. What can still be made out from the surviving fragments: it announces the author's book «Oracle E-Business Suite: ERP DBA …», a seven-chapter volume in three parts covering Oracle E-Business Suite R12 and ERP fundamentals (chapters 1-2); installation, environment configuration (DNS, sendmail, IMAP, Oracle Alert, Workflow Mail, Forms sockets), maintenance with AutoPatch, AutoConfig, Rapid Clone and OAM, and customization (chapters 3-6); and day-to-day ERP operations (chapter 7). It thanks reviewers including Mike and Charles, gives a contact address (longchun.zhu-AT-gmail-DOT-com), and links to the book at:]

    - http://vdisk.weibo.com/s/6X-ze
    - http://product.dangdang.com/product.aspx?product_id=22788613
    - http://book.360buy.com/11021724.html
    - http://product.china-pub.com/3661378
    - http://www.amazon.cn/dp/B008BFNAX0
    - http://detail.tmall.com/item.htm?id=18024644999

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here - to elicit some suggestions and contributions from you, the users of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on ServerFault to get feedback from the administrator side of things too. So, without further ado:

    Vision

    The first thing in his "vision statement" is:

    "Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent."

    There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is:

    "Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations."

    Roadmap

    Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are:

    - Obliterate
    - Shelve/Checkpoint
    - Repository-dictated Configuration
    - Rename Tracking
    - Improved Merging
    - Improved Tree Conflict Handling
    - Enterprise Authentication Mechanisms
    - Forward History Searching
    - Log Message Templates

    If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them.

    Community

    And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

    Read the article

  • Cannot populate form with ajax and populate jquery plugin

    - by Azriel_
    I'm trying to populate a form with jQuery's populate plugin, but using $.ajax. The idea is to retrieve data from my database according to the id in the links (e.g. get_result_edit.php?id=34), reformulate it to JSON, return it to my page and fill the form up with the populate plugin. But somehow I cannot get it to work. Any ideas? Here's the code:

    $('a').click(function(){
        $('#updatediv').hide('slow');
        $.ajax({
            type: "GET",
            url: "get_result_edit.php",
            success: function(data) {
                var $response = $(data);
                $('#form1').populate($response);
            }
        });
        $('#updatediv').fadeIn('slow');
        return false;
    });

    whilst the PHP file reads as follows:

    <?php
    $conn = new mysqli('localhost', 'XXXX', 'XXXXX', 'XXXXX');
    @$query = 'SELECT * FROM news WHERE id = "'.$_GET['id'].'"';
    $stmt = $conn->query($query) or die($conn->error);
    if ($stmt) {
        $results = $stmt->fetch_object(); // get database data
        $json = json_encode($results);    // convert to JSON format
        echo $json;
    }
    ?>

    Now the first thing is that MySQL returns null this way: is there something wrong with the declaration of the SQL statement in the $_GET part? Second is that even if I put in a specific record to bring up, populate doesn't populate. Update: I changed the populate library to the one called "PHP jQuery helper functions" and the difference is that it finally says something: I get an error saying NO SUCH ELEMENT AS... I went into the library to have a look and up comes the following function:

    function populateFormElement(form, name, value) {
        // check that the named element exists in the form
        var name = name; // handle non-php naming
        var element = form[name];
        if (element == undefined) {
            debug('No such element as ' + name);
            return false;
        }
        // debug options
        if (options.debug) {
            _populate.elements.push(element);
        }
    }

    Now looking at it one can see that it should print out the name as well, but it's not printing it out. So I'm guessing that retrieving the name from the JSON is not working correctly. The link is at http://www.ocdmonline.org/michael/edit%5Fnews.php with username Testing and password test123. Any ideas?

    Read the article

  • ASP.NET MVC (loading data from database)

    - by rah.deex
    Hi experts, it's me again... I have some code like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        namespace MvcGridSample.Models
        {
            public class CustomerService
            {
                private List<SVC> Customers
                {
                    get
                    {
                        List<SVC> customers;
                        if (HttpContext.Current.Session["Customers"] != null)
                        {
                            customers = (List<SVC>) HttpContext.Current.Session["Customers"];
                        }
                        else
                        {
                            // Create customer data store and save in session
                            customers = new List<SVC>();
                            InitCustomerData(customers);
                            HttpContext.Current.Session["Customers"] = customers;
                        }
                        return customers;
                    }
                }

                public SVC GetByID(int customerID)
                {
                    return this.Customers.AsQueryable().First(customer => customer.seq_ == customerID);
                }

                public IQueryable<SVC> GetQueryable()
                {
                    return this.Customers.AsQueryable();
                }

                public void Add(SVC customer)
                {
                    this.Customers.Add(customer);
                }

                public void Update(SVC customer)
                {
                }

                public void Delete(int customerID)
                {
                    this.Customers.RemoveAll(customer => customer.seq_ == customerID);
                }

                private void InitCustomerData(List<SVC> customers)
                {
                    customers.Add(new SVC { ID = 1, FirstName = "John", LastName = "Doe", Phone = "1111111111", Email = "[email protected]", OrdersPlaced = 5, DateOfLastOrder = DateTime.Parse("5/3/2007") });
                    customers.Add(new SVC { ID = 2, FirstName = "Jane", LastName = "Doe", Phone = "2222222222", Email = "[email protected]", OrdersPlaced = 3, DateOfLastOrder = DateTime.Parse("4/5/2008") });
                    customers.Add(new SVC { ID = 3, FirstName = "John", LastName = "Smith", Phone = "3333333333", Email = "[email protected]", OrdersPlaced = 25, DateOfLastOrder = DateTime.Parse("4/5/2000") });
                    customers.Add(new SVC { ID = 4, FirstName = "Eddie", LastName = "Murphy", Phone = "4444444444", Email = "[email protected]", OrdersPlaced = 1, DateOfLastOrder = DateTime.Parse("4/5/2003") });
                    customers.Add(new SVC { ID = 5, FirstName = "Ziggie", LastName = "Ziggler", Phone = null, Email = "[email protected]", OrdersPlaced = 0, DateOfLastOrder = null });
                    customers.Add(new SVC { ID = 6, FirstName = "Michael", LastName = "J", Phone = "666666666", Email = "[email protected]", OrdersPlaced = 5, DateOfLastOrder = DateTime.Parse("12/3/2007") });
                }
            }
        }

    This code is an example I found on the internet. In that case, the data is created in memory and saved in session before it is shown. What I want to ask is: how do I load the data from a database table instead? I'm a newbie here, please help :) Thanks in advance.
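    If the goal is simply to read the customers from a table instead of seeding them in session, a minimal ADO.NET sketch might look like the following. This is only an illustration, not the one true answer: the connection string name "MyDb", the table name Customers, and the column names are assumptions - adjust them to your schema.

        using System.Collections.Generic;
        using System.Configuration;   // requires a reference to System.Configuration
        using System.Data.SqlClient;

        public class DbCustomerService
        {
            // Hypothetical connection string; define one named "MyDb" in web.config.
            private static readonly string ConnStr =
                ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;

            public List<SVC> GetAll()
            {
                var customers = new List<SVC>();
                using (var conn = new SqlConnection(ConnStr))
                using (var cmd = new SqlCommand(
                    "SELECT ID, FirstName, LastName FROM Customers", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        // Map each row onto the SVC model used above.
                        while (reader.Read())
                        {
                            customers.Add(new SVC
                            {
                                ID = reader.GetInt32(0),
                                FirstName = reader.GetString(1),
                                LastName = reader.GetString(2)
                            });
                        }
                    }
                }
                return customers;
            }
        }

    The session-caching pattern from the question can then wrap GetAll() in the same way InitCustomerData is wrapped today.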

    Read the article

  • How are you using IronPython?

    - by Will Dean
    I'm keen to drink some modern dynamic language koolaid, so I've believed all the stuff on Michael Foord's blog and podcasts, I've bought his book (and read some of it), and I added an embedded IPy runtime to a large existing app a year or so ago (though that was for someone else and I didn't really use it myself). Now I need to do some fairly simple code generation, where I'm going to call a few methods on a few .NET objects (custom, C#-authored objects), create a few strings, write some files, etc. The experience of trying this leaves me feeling like the little boy who thinks he's the only one who can see that The Emperor has no clothes on. If you're using IronPython, I'd really appreciate knowing how you deal with the following aspects of it:

      - Code editing - do you use the .NET framework without Intellisense?
      - Refactoring - I know a load of 'refactoring' is about working around language-related busywork, so if Python is sufficiently lightweight then we won't need that. But things like renames seem to me to be essential to iteratively developing quality code regardless of language.
      - Crippling startup time - one of the things which is supposed to be good about interpreted languages is the lack of compile time, leading to fast interactive development. Unfortunately I can compile a C# application and launch it quicker than IPy can start up.
      - Interactive hacking - the IPy console/REPL is supposed to be good for this, but I haven't found a good way to take the code you've interactively arrived at and persist it into a file - cut and paste from the console is fairly miserable. And the console seems to hold references to .NET assemblies you've imported, so you have to quit it and restart it if you're working on the C# stuff as well. Hacking on C# in something like LINQPad seems a much faster and easier way to try things out (and has proper Intellisense). Do you use the console?
      - Debugging - what's the story here? I know someone on the IPy team is working on a command-line hobby project, but let's just say I'm not immediately attracted to a command-line debugger. I don't really need a debugger for little Python scripts, but I would if I were to use IPy for scripting unit tests, for example.
      - Unit testing - I can see that dynamic languages could be great for this, but is there any IDE test-runner integration (like for ReSharper, etc)? The Foord book has a chapter about this, which I'll admit I have not yet read properly, but it does seem to involve driving a console-mode test runner from the command prompt, which feels like an enormous step back from using an integrated test runner like TestDriven.net or ReSharper.

    I really want to believe in this stuff, so I am still working on the assumption that I've missed something. I would really like to know how other people are dealing with IPy, particularly if they're doing it in a way which doesn't feel like we've just lost 15 years' worth of tool development.
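    For context on the "embedded IPy runtime" scenario mentioned above, the C# hosting side is only a few lines with the Dynamic Language Runtime hosting API. A minimal sketch, assuming IronPython 2.x with references to IronPython.dll and Microsoft.Scripting.dll; the variable name and script body here are placeholders, not taken from the question:

        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        class HostingSketch
        {
            static void Main()
            {
                // Create the IronPython engine and a scope used to share
                // variables between C# and the script.
                ScriptEngine engine = Python.CreateEngine();
                ScriptScope scope = engine.CreateScope();
                scope.SetVariable("greeting", "Hello from C#");

                // Run a trivial script that reads the injected variable.
                engine.Execute("print greeting", scope);
            }
        }

    Any .NET object set via SetVariable is callable from the script, which is the mechanism the code-generation use case above would lean on.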

    Read the article

  • How can I tell groovy/grails not to try to "re-encode" binary data? (Revised title)

    - by ?????
    I have a groovy/grails application that needs to serve images. It works fine on my dev box; the image is returned properly. Here's the start of the returned JPEG, as seen by od -cx:

        0000000 377 330 377 340  \0 020   J   F   I   F  \0 001 001 001 001   ,
                  d8ff  e0ff  1000  464a  4649  0100  0101  2c01

    but on the production box there's some garbage in front, and the d8ff e0ff before the 1000 is missing:

        0000000   ?  **  **   ?  **  **   ?  **  **   ?  **  **  \0 020   J   F
                  bfef  efbd  bdbf  bfef  efbd  bdbf  1000  464a
        0000020   I   F  \0 001 001 001  \0   H  \0   H  \0  \0   ?  **  **   ?
                  4649  0100  0101  4800  4800  0000  bfef  efbd

    It's the exact same code; I just moved the .war over and ran it on a different machine. (Isn't Java supposed to be write once, run everywhere?) Any ideas? An "encoding" problem? The code is sent to the response like this:

        response.contentType = "image/jpeg";
        response.outputStream << out;

    Here's the code that locates the image on an internal application server and re-serves it. I've pared the code down a bit to remove the error handling, etc., to make it easier to read:

        def show = {
            def address = "http://internal.application.server:9899/img?photoid=${params.id}"
            def out = new ByteArrayOutputStream()
            out << new URL(address).openStream()
            response.contentLength = out.size();

            // XXX If you don't do this hack, "head" requests won't work!
            if (request.method == 'HEAD') {
                render( text : "", contentType : "image/jpeg" );
            } else {
                response.contentType = "image/jpeg";
                response.outputStream << out;
            }
        }

    Update: I tried setting the character encoding:

        response.setCharacterEncoding("ISO-8859-1");
        if (request.method == 'HEAD') {
            render( text : "", contentType : "image/jpeg" );
        } else {
            response.contentType = "image/jpeg;charset=ISO-8859-1";
            response.outputStream << out;
        }

    but it made no difference in the output. On my production machine, the binary bytes in the image are re-encoded/escaped as if they were UTF-8 (see Michael's explanation below). It works fine on my development machine.

    Read the article

  • Expression Too Complex In Access 2007

    - by Jazzepi
    When I try to run this query in Access through the ODBC interface into a MySQL database, I get an "Expression too complex in query expression" error. The essential thing I'm trying to do is translate abbreviated language names into their full English counterparts. I was curious whether there is some way to "trick" Access into thinking the expression is smaller with subqueries, or if someone else has a better idea of how to solve this problem. I thought about making a temporary table and doing a join on it, but that's not supported in Access SQL.

    Just as an FYI, the query worked fine until I added the big long IIf chain. I tested the query on a smaller IIf chain for three languages, and that wasn't an issue, so the problem definitely stems from the huge IIf chain (it's 26 deep). Also, I might be able to drop some of the options (like combining the different forms of Chinese or Portuguese). As a test, I was able to get the SQL query to work after paring it down to 14 IIf() statements, but that's a far cry from the 26 languages I'd like to represent.

        SELECT TOP 5 Count( * ) AS [Number of visits by language],
          IIf(login.lang="ar","Arabic",
          IIf(login.lang="bg","Bulgarian",
          IIf(login.lang="zh_CN","Chinese (Simplified Han)",
          IIf(login.lang="zh_TW","Chinese (Traditional Han)",
          IIf(login.lang="cs","Czech",
          IIf(login.lang="da","Danish",
          IIf(login.lang="de","German",
          IIf(login.lang="en_US","United States English",
          IIf(login.lang="en_GB","British English",
          IIf(login.lang="es","Spanish",
          IIf(login.lang="fr","French",
          IIf(login.lang="el","Greek",
          IIf(login.lang="it","Italian",
          IIf(login.lang="ko","Korean",
          IIf(login.lang="hu","Hungarian",
          IIf(login.lang="nl","Dutch",
          IIf(login.lang="pl","Polish",
          IIf(login.lang="pt_PT","European Portuguese",
          IIf(login.lang="pt_BR","Brazilian Portuguese",
          IIf(login.lang="ru","Russian",
          IIf(login.lang="sk","Slovak",
          IIf(login.lang="sl","Slovenian",
          IIf(login.lang="fi","Finnish",
          IIf(login.lang="sv","Swedish",
          IIf(login.lang="tr","Turkish",
          "Unknown"))))))))))))))))))))))))) AS [Language]
        FROM login, reservations, reservation_users, schedules
        WHERE (reservations.start_date Between
               DATEDIFF('s','1970-01-01 00:00:00',[Starting Date in the Following Format YYYY/MM/DD]) And
               DATEDIFF('s','1970-01-01 00:00:00',[Ending Date in the Following Format YYYY/MM/DD]))
          And reservations.is_blackout=0
          And reservation_users.memberid=login.memberid
          And reservation_users.resid=reservations.resid
          And reservation_users.invited=0
          And reservations.scheduleid=schedules.scheduleid
          And scheduletitle=[Schedule Title]
        GROUP BY login.lang
        ORDER BY Count( * ) DESC;

    @Michael Todd: I completely agree. The list of languages should have been a table in the database, and login.lang should have been a FK into that table. Unfortunately that isn't how the database was written, and it's not really mine to modify. The languages are placed into the login.lang field by the PHP running on top of the database.

    Read the article

  • Question SpeechSynthesizer.SetOutputToAudioStream audio format problem

    - by Chris Kugler
    Hi, I'm currently working on an application which requires transmission of speech encoded in a specific audio format:

        System.Speech.AudioFormat.SpeechAudioFormatInfo synthFormat =
            new System.Speech.AudioFormat.SpeechAudioFormatInfo(
                System.Speech.AudioFormat.EncodingFormat.Pcm,
                8000, 16, 1, 16000, 2, null);

    This states that the audio is in PCM format: 8000 samples per second, 16 bits per sample, mono, 16000 average bytes per second, block alignment of 2. When I attempt to execute the following code, nothing is written to my MemoryStream instance; however, when I change from 8000 samples per second up to 11025, the audio data is written successfully.

        SpeechSynthesizer synthesizer = new SpeechSynthesizer();
        waveStream = new MemoryStream();

        PromptBuilder pbuilder = new PromptBuilder();
        PromptStyle pStyle = new PromptStyle();
        pStyle.Emphasis = PromptEmphasis.None;
        pStyle.Rate = PromptRate.Fast;
        pStyle.Volume = PromptVolume.ExtraLoud;
        pbuilder.StartStyle(pStyle);
        pbuilder.StartParagraph();
        pbuilder.StartVoice(VoiceGender.Male, VoiceAge.Teen, 2);
        pbuilder.StartSentence();
        pbuilder.AppendText("This is some text.");
        pbuilder.EndSentence();
        pbuilder.EndVoice();
        pbuilder.EndParagraph();
        pbuilder.EndStyle();

        synthesizer.SetOutputToAudioStream(waveStream, synthFormat);
        synthesizer.Speak(pbuilder);
        synthesizer.SetOutputToNull();

    No exceptions or errors are recorded when using a sample rate of 8000, and I couldn't find anything useful in the documentation regarding SetOutputToAudioStream and why it succeeds at 11025 samples per second and not 8000. I have a workaround involving a wav file that I generated and converted to the correct sample rate using some sound editing tools, but I would like to generate the audio from within the application if I can. One particular point of interest is that the SpeechRecognitionEngine accepts that audio format and successfully recognized the speech in my synthesized wave file...

    Update: Recently discovered that this audio format succeeds for certain installed voices but fails for others. It fails specifically for LH Michael and LH Michelle, and failure varies for certain voice settings defined in the PromptBuilder.
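    Since the update suggests the behavior is per-voice, one way to narrow it down from code is to probe a few candidate formats and see which ones the current voice actually renders. A diagnostic sketch, not a fix - the candidate rates and the spoken text are arbitrary choices:

        using System;
        using System.IO;
        using System.Speech.AudioFormat;
        using System.Speech.Synthesis;

        class FormatProbe
        {
            static void Main()
            {
                // Try several PCM sample rates and report how many bytes the
                // engine writes for each; a zero-length stream reproduces the
                // silent failure described above.
                int[] candidateRates = { 8000, 11025, 16000, 22050 };
                foreach (int rate in candidateRates)
                {
                    var format = new SpeechAudioFormatInfo(
                        EncodingFormat.Pcm, rate, 16, 1, rate * 2, 2, null);

                    using (var synth = new SpeechSynthesizer())
                    using (var stream = new MemoryStream())
                    {
                        synth.SetOutputToAudioStream(stream, format);
                        synth.Speak("This is some text.");
                        synth.SetOutputToNull();
                        Console.WriteLine("{0} Hz -> {1} bytes written", rate, stream.Length);
                    }
                }
            }
        }

    Running this per installed voice (via SelectVoice) would show which voice/format combinations are usable before committing to 8000 Hz.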

    Read the article

  • Time required for a process to complete

    - by yelkawar
    I am new to the C# world. I am attempting to measure the time taken by an algorithm for the purpose of comparison. The following code measures the elapsed time from when a subroutine is called until it returns to the main program. This example is taken from "Data Structures Through C#" by Michael McMillan. After running this program the output is Time=0, which is incorrect. The program appears to be logically correct. Can anybody help me? Here is the code:

        using System;
        using System.Collections.Generic;
        using System.Collections;
        using System.Diagnostics;   // needed for Process
        using System.Linq;
        using System.Text;

        namespace Chap1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int num1 = 100;
                    int num2 = 200;
                    Console.WriteLine("num1: " + num1);
                    Console.WriteLine("num2: " + num2);
                    Swap<int>(ref num1, ref num2);
                    Console.WriteLine("num1: " + num1);
                    Console.WriteLine("num2: " + num2);

                    string str1 = "Sam";
                    string str2 = "Tom";
                    Console.WriteLine("String 1: " + str1);
                    Console.WriteLine("String 2: " + str2);
                    Swap<string>(ref str1, ref str2);
                    Console.WriteLine("String 1: " + str1);
                    Console.WriteLine("String 2: " + str2);
                    Console.ReadKey();
                }

                static void Swap<T>(ref T val1, ref T val2)
                {
                    T temp;
                    temp = val1;
                    val1 = val2;
                    val2 = temp;
                }
            }

            class Timing
            {
                TimeSpan StartTiming;
                TimeSpan duration;

                public Timing()
                {
                    StartTiming = new TimeSpan(0);
                    duration = new TimeSpan(0);
                }

                public TimeSpan startTime()
                {
                    GC.Collect();
                    GC.WaitForPendingFinalizers();
                    StartTiming = Process.GetCurrentProcess().Threads[0].UserProcessorTime;
                    return StartTiming;
                }

                public void stopTime()
                {
                    duration = Process.GetCurrentProcess().Threads[0].UserProcessorTime.Subtract(StartTiming);
                }

                public TimeSpan result()
                {
                    return duration;
                }
            }
        }
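    A likely explanation, offered as a hedge rather than a verdict: UserProcessorTime only advances at the scheduler's clock-tick granularity (typically 10-16 ms on Windows), and a single swap completes in nanoseconds, so the measured difference rounds down to zero. The more usual approach is Stopwatch plus enough repetitions to be measurable - a minimal sketch, with an arbitrary iteration count:

        using System;
        using System.Diagnostics;

        class StopwatchDemo
        {
            static void Main()
            {
                const int iterations = 10000000;  // repeat so the work rises above timer resolution
                int a = 100, b = 200;

                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                {
                    // the swap under test
                    int temp = a;
                    a = b;
                    b = temp;
                }
                sw.Stop();

                Console.WriteLine("Total: {0} ms for {1} swaps",
                    sw.Elapsed.TotalMilliseconds, iterations);
            }
        }

    Dividing the total by the iteration count gives a usable per-call figure, which is how most algorithm comparisons of this kind are done.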

    Read the article

  • Parsing secure entries XML file with jquery

    - by user573131
    Apologies if this is elementary; I'm primarily a front-end designer/dev. I have a web form through a form service called Wufoo. Wufoo generates a lovely XML (or JSON) file that can be grabbed and parsed. I'm trying to grab the entries XML feed that is associated with the form and parse it via jQuery to show who has entered. I'm using the following code (which works with a local XML file): http://bostonwebsitemakeover.com/2/test

        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.js"></script>
        <script>
        $(document).ready(function () {
            $.ajax({
                type: "GET",
                url: "people.xml",
                dataType: "xml",
                success: xmlParser
            });
        });

        function xmlParser(xml) {
            $('#load').fadeOut();
            $(xml).find("Entry").each(function () {
                $(".main").append('<div class="entry">' + $(this).find("Field1").text() + ' ' +
                    $(this).find("Field2").text() + ' http://twitter.com/' +
                    $(this).find("Field17").text() + '</div>');
                $(".entry").fadeIn(1000);
            });
        }
        </script>

    My XML file contains the following:

        <?xml version="1.0"?>
        <Entries>
          <Entry>
            <EntryId>1</EntryId>
            <Field1>Meaghan</Field1>
            <Field2>Severson</Field2>
            <Field17/>
          </Entry>
          <Entry>
            <EntryId>2</EntryId>
            <Field1>Michael</Field1>
            <Field2>Flint</Field2>
            <Field17>michaelflint</Field17>
          </Entry>
          <Entry>
            <EntryId>3</EntryId>
            <Field1>Niki</Field1>
            <Field2>Brown</Field2>
            <Field17>nikibrown</Field17>
          </Entry>
          <Entry>
            <EntryId>4</EntryId>
            <Field1>Niki</Field1>
            <Field2>Brown</Field2>
            <Field17>nikibrown</Field17>
          </Entry>
        </Entries>

    I'm wondering how I would do this with the XML file hosted on Wufoo (which is HTTPS). So I guess I'm asking: how do I authenticate the feed via jQuery? Or do I need to do this via JSON? Could someone explain how?
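    One hedged suggestion for anyone in the same spot: a browser generally can't attach credentials to a cross-domain HTTPS feed from jQuery, so the common pattern is a small same-origin proxy that adds the credentials server-side and relays the XML. A rough sketch of such a proxy, shown here as a C# .ashx handler purely for illustration (any server-side stack works the same way); the feed URL and API key are placeholders, not real Wufoo endpoints:

        using System.Net;
        using System.Web;

        public class EntriesProxy : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                using (var client = new WebClient())
                {
                    // Placeholder credentials; Wufoo-style feeds authenticate
                    // with an API key over HTTP Basic auth.
                    client.Credentials = new NetworkCredential("YOUR_API_KEY", "password");
                    string xml = client.DownloadString(
                        "https://example.wufoo.com/path/to/entries.xml");

                    context.Response.ContentType = "text/xml";
                    context.Response.Write(xml);
                }
            }

            public bool IsReusable { get { return false; } }
        }

    The $.ajax call above would then point its url at the proxy (e.g. "EntriesProxy.ashx") instead of the remote feed, and the existing xmlParser works unchanged.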

    Read the article

  • passing data from a client form via jquery ajax dynamically

    - by quantum62
    I want to insert the details that members enter into the form's textboxes into the database. I do this operation with jQuery AJAX. When I call the web method with static values, the operation succeeds; for example, this code is OK:

        $.ajax({
            type: "POST",
            url: "MethodInvokeWithJQuery.aspx/executeinsert",
            data: '{ "username": "user1", "name": "john", "family": "michael", "password": "123456", "email": "[email protected]", "tel": "123456", "codemeli": "123" }',
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            async: true,
            cache: false,
            success: function (msg) {
                $('#myDiv2').text(msg.d);
            },
            error: function (x, e) {
                alert("The call to the server side failed. " + x.responseText);
            }
        });

    But when I try to use the values entered in the textboxes dynamically, an error occurs. What's the problem? I tried these two versions:

        <script type="text/javascript">
        $(document).ready(function () {
            $("#Button1").click(function () {
                var username, family, name, email, tel, codemeli, password;
                username = $('#<%=TextBox1.ClientID%>').val();
                name     = $('#<%=TextBox2.ClientID%>').val();
                family   = $('#<%=TextBox3.ClientID%>').val();
                password = $('#<%=TextBox4.ClientID%>').val();
                email    = $('#<%=TextBox5.ClientID%>').val();
                tel      = $('#<%=TextBox6.ClientID%>').val();
                codemeli = $('#<%=TextBox7.ClientID%>').val();

                $.ajax({
                    type: "POST",
                    url: "WebApplication20.aspx/executeinsert",
                    data: "{'username':'username','name':name, 'family':family,'password':password, 'email':email,'tel':tel, 'codemeli':codemeli}",
                    contentType: "application/json;charset=utf-8",
                    dataType: "json",
                    async: true,
                    cache: false,
                    success: function (msg) {
                        alert(msg);
                    },
                    error: function (x, e) {
                        alert("The call to the server side failed. " + x.responseText);
                    }
                });
            });
        });
        </script>

    or

        $(document).ready(function () {
            $("#Button1").click(function () {
                var username, family, name, email, tel, codemeli, password;
                username = $('#<%=TextBox1.ClientID%>').val();
                name     = $('#<%=TextBox2.ClientID%>').val();
                family   = $('#<%=TextBox3.ClientID%>').val();
                password = $('#<%=TextBox4.ClientID%>').val();
                email    = $('#<%=TextBox5.ClientID%>').val();
                tel      = $('#<%=TextBox6.ClientID%>').val();
                codemeli = $('#<%=TextBox7.ClientID%>').val();

                $.ajax({
                    type: "POST",
                    url: "WebApplication20.aspx/executeinsert",
                    data: '{"username": ' + username + ', "name": ' + name + ', "family": ' + family + ', "password": ' + password + ', "email": ' + email + ', "tel": ' + tel + ', "codemeli": ' + codemeli + '}',
                    contentType: "application/json;charset=utf-8",
                    dataType: "json",
                    async: true,
                    cache: false,
                    success: function (msg) {
                        alert(msg);
                    },
                    error: function (x, e) {
                        alert("The call to the server side failed. " + x.responseText);
                    }
                });
            });
        });
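    Two hedged observations for anyone hitting the same wall. On the client, both versions build the JSON body by hand: the first never interpolates the variables at all (they sit inside the string literal), and the second drops the quotes around string values, so serializing with JSON.stringify is the usual fix. On the server, the JSON keys must match the page method's parameter names exactly. A minimal sketch of what that signature would look like - the class name and parameters are inferred from the question, not confirmed:

        using System.Web.Services;

        public partial class WebApplication20 : System.Web.UI.Page
        {
            // Parameter names must match the keys in the JSON body of the $.ajax call.
            [WebMethod]
            public static string executeinsert(string username, string name, string family,
                string password, string email, string tel, string codemeli)
            {
                // Placeholder body: insert into the database here (e.g. a
                // parameterized SqlCommand) and return a status message,
                // which the client reads as msg.d.
                return "inserted " + username;
            }
        }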

    Read the article

  • Rails Tutorial Chapter 4 2nd Ed. Title Helper not being called without argument.

    - by SuddenlyAwakened
    I am running through the Rails Tutorial by Michael Hartl (screencast). I ran into an issue in chapter 4 with the title helper. I have been putting my own twist on the code as I go to make sure I understand it all. However, this one is very similar to the original and I am not quite sure why it is acting the way it is. Here is the code:

    application.html.erb

        <!DOCTYPE html>
        <html>
          <head>
            <%- Rails.logger.info("Testing: #{yield(:title)}") %>
            <title><%= full_title(yield(:title)) %></title>
            <%= stylesheet_link_tag "application", :media => "all" %>
            <%= javascript_include_tag "application" %>
            <%= csrf_meta_tags %>
          </head>
          <body>
            <%= yield %>
          </body>
        </html>

    application_helper.rb

        module ApplicationHelper
          def full_title(page_title)
            full_title = "Ruby on Rails Tutorial App"
            full_title += " | #{page_title}" unless page_title.blank?
          end
        end

    home.html.erb

        <h1><%= t(:sample_app) %></h1>
        <p>
          This is the home page for the
          <a href="http://railstutorial.org/">Ruby on Rails Tutorial</a>
          sample application
        </p>

    about.html.erb

        <% provide(:title, t(:about_us)) %>
        <h1><%= t(:about_us) %></h1>
        <p>
          The <a href="http://railstutorial.org/">Ruby on Rails Tutorial</a>
          is a project to make a book and screencast to teach web development
          with <a href="http://railstutorial.org/">Ruby on Rails</a>.
          This is the sample application for the tutorial.
        </p>

    What happens: the code works fine when I set the title with the provide method, as on the about page. However, when I do not, the helper does not even seem to be called. I am assuming that is because no title is passed back. Any ideas on what I am doing wrong? Thank you all for your help.

    Read the article

  • HTML Language question

    - by Mike
    Note my code below. I am trying to figure out why my page is not being treated as Spanish. I understand it to be one line of code, all within the HTML attribute lang="es". Any help would be greatly appreciated.

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xlmns="http://www.w3.org/1999/xhtml" lang=”es” xml:lang="en">
        <head>
          <title>JavaJam Coffee House</title>
          <link href="javajam.css" rel="stylesheet" type="text/css" />
        </head>
        <body bgcolor="brown">
          <h1>JavaJam Coffee House</h1>
          <ul>
            <li>Specialty Coffee and Tea</li>
            <li>Bagels, Muffins, and Organic Snacks</li>
            <li>Music and Poetry Readings</li>
            <li>Usability Studies</li>
            <li>Open Mic Night</li>
          </ul>
          <br></br>
          <p>12312 Main Street<br>
             Mountain Home, CA 93923<br>
             1-888-555-5555</br>
          </p>
          <p><em><small>Copyright &copy; 2008 JavaJam Coffee House</em></p>
          E-Mail <a href="mailto;[email protected]">Michael J. Crawley</a>
        </body>
        </html>

    Read the article

  • Connecting debian and windows via IPsec VPN with Racoon and ipsec-tools

    - by Michi Qne
    I have some trouble with the IPsec configuration on my Debian server (6, squeeze). This server should connect via an IPsec VPN to a Windows server which is protected by a firewall. I've used racoon and ipsec-tools and this tutorial: http://wiki.debian.org/IPsec. However, I am not quite sure the tutorial fits my purpose, because of some differences:

      - My host and my gateway are the same server, so I don't have two different IP addresses. I guess that's not a problem.
      - The other server is a Windows system behind a firewall. Hopefully not a problem either.
      - The subnet of the Windows system is /32, not /24, so I changed it to /32.

    I worked through the tutorial step by step, but I wasn't able to route the IP. The following command didn't work for me:

        ip route add to 172.16.128.100/32 via XXX.XXX.XXX.XXX src XXX.XXX.XXX.XXX

    So I tried the following instead:

        ip route add to 172.16.128.100 ..

    which obviously did not solve the problem. The next problem is the compression. The Windows side doesn't use compression, but 'compression_algorithm none;' doesn't work with my racoon, so the current value is 'compression_algorithm deflate;'.

    My current result looks like this: when I try to ping the Windows host (ping 172.16.128.100), I receive the following error message from ping:

        ping: sendmsg: Operation not permitted

    And racoon logs:

        racoon: ERROR: failed to get sainfo.

    After googling for a while I came to no conclusion on what the solution is. Does this error message mean that the first phase of IPsec works? I am thankful for any advice. I guess my configs might be helpful.

    My racoon.conf looks like this:

        path pre_shared_key "/etc/racoon/psk.txt";

        remote YYY.YYY.YYY.YYY {
            exchange_mode main;
            proposal {
                lifetime time 8 hour;
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group 2;
            }
        }

        sainfo address XXX.XXX.XXX.XXX/32 any address 172.16.128.100/32 any {
            pfs_group 2;
            lifetime time 8 hour;
            encryption_algorithm aes 256;
            authentication_algorithm hmac_sha1;
            compression_algorithm deflate;
        }

    And my ipsec-tools.conf looks like this:

        flush;
        spdflush;

        spdadd XXX.XXX.XXX.XXX/32 172.16.128.100/32 any -P out ipsec
            esp/tunnel/XXX.XXX.XXX.XXX-YYY.YYY.YYY.YYY/require;

        spdadd 172.16.128.100/32 XXX.XXX.XXX.XXX/32 any -P in ipsec
            esp/tunnel/YYY.YYY.YYY.YYY-XXX.XXX.XXX.XXX/require;

    If anyone has advice, that would be awesome. Thanks in advance. Greets, Michael

    Update: It was a simple copy-and-paste error in an IP address.

    Read the article

  • High load MySQL on Debian server

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory load is always at maximum, with only 500 MB free, and MySQL is by far the biggest consumer. Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses all the memory it stops, and nothing works until MySQL is restarted. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into those tables - that's why InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        default-time-zone='Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit= 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory usage:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory at all, but MySQL still stops every day. As you can see, about 24 GB of that is page cache, not application memory. Thanks to Michael Hampton for the correction. Load average on the server is 3.5. Maybe it's the hard disk, or another problem? Maybe my config is not optimal for 30 GB of InnoDB?

    Read the article

  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on an upgrade to Exchange 2007 and I wanted to get some advice on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD sites, housed on a single server. We need to move to a redundant, fault-tolerant system to support our users. I'm planning on installing 2 Dell 1950 servers with W2k8-std to act as CAS and Hub servers, with NLB to allow abstraction of the actual server name to the users. There won't be an Edge system, since we already have a Barracuda box that will handle inbound/outbound spam and virus filtering. On the back end I'm planning 2 mailbox servers, which will be Dell 2950s with 16GB RAM, 2 dual-core or quad-core CPUs, and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Ent clustering and running CCR in Exchange. My questions are as follows:

    1. Is 16GB enough RAM for serving that many mailboxes along with the Windows clustering and CCR?

    2. I'm trying to figure out disk layouts and I'm unsure whether to use all local disk, or some local and some SAN via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM.

       Option 1:
         - 2 drives, RAID 1 - OS
         - 2 drives, RAID 1 - logs
         - 2 drives, RAID 1 - mail stores

       Option 2:
         - 2 drives, RAID 1 - OS and logs
         - 4 drives, RAID 5 - mail stores and scratch space for eseutil

       Option 3:
         - 2 drives, RAID 1 - OS
         - 2 drives, RAID 1 - logs
         - 2 drives, RAID 0 - scratch space
         - ~300GB iSCSI volume for mail stores

       Option 4:
         - 2 drives, RAID 1 - OS
         - 4 drives, RAID 5 - scratch space
         - ~300GB iSCSI volume for mail stores
         - ~300GB iSCSI volume for logs

    3. I have 2 sockets for CPUs and need to choose between dual and quad cores. The dual cores have faster clocks but less cache and, I'm thinking, an older architecture. Am I better off with more cores and cache while sacrificing clock speed?

    4. I am planning on adding the new E2K7 cluster alongside the E2K3 server, moving each mailbox over, and then removing the old server. This seems more complicated than simply getting rid of the 2003 server, then adding the 2007 cluster and restoring the mailboxes using PowerControls or exmerge. The migration option lets me do this on my own schedule, whereas a cutover means it all needs to work at once. If I go with the cutover method, how can I prebuild the servers and add them to the domain right after removing the 2003 server - or can't I? I think the answer is no, and the migration is my only real option if I want to prebuild.

    5. I also need to migrate about 30GB of public folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PFs set up? I guess I could even keep the E2K3 server to host just the PFs?

    6. Lastly, if I have a mix of Outlook 2000, 2003 and 2007, what do I need to do to make sure they all have access to the GAL and OAB? At time of cutover we'll be at about 90% 2007, but we will have some older stuff around. My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using it for all Outlook clients - does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so not a fully meshed, stable WAN.

    Thank you all very much for the assistance in advance, and I look forward to discussion of these points! Regards... Michael

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources:

      - Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services
      - ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format
      - SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types

    Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started:

      - US Census Bureau Website, http://www.census.gov/geo/www/tiger/
      - Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/
      - Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/

    In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations, to embed a map in a report.

    Preparing the spatial data

    First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region).

    Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post.

    Step 1: Using the Map Wizard to Create a Map of France

    You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France:

    1. On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next.
    2. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data.
    3. On the Choose a connection to a SQL Server spatial data source page, select New.
    4. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary):

           Data Source=(local);Initial Catalog=Spatial

    5. Click OK and then click Next.
    6. On the Design a query page, add a query for the country shape, like this:

           select * from fra_adm1

    7. Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page. You have the option to add a Bing Maps layer, which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names, roads, and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature.
    8. Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later.
    9. Select Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next.
    10. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish.
    11. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.

    Step 2: Add a Map Layer to an Existing Map

    The map data region allows you to add multiple layers. Each layer is associated with a different dataset. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add another layer for the store locations by following these steps:

    1. If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it.
    2. Click the New Layer Wizard button in the Map Layers window, and then we start over again with the process by choosing a spatial data source. Select SQL Server spatial query, and click Next.
    3. Select Add a new dataset with SQL Server spatial data, and click Next.
    4. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next.
    5. Add a query with spatial data (like the one I included in the downloadable project), and click Next. The location data now appears as another layer on top of the regional map created earlier.
    6. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly.
    7. Select Embed map data in this report, and click Next.
    8. On the Choose map visualization page, select Basic Marker Map, and click Next.
    9. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day.
    10. Click Finish and then click Preview to see the rendered report.

    Et voilà... c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance it. I'll create a series of posts to explore the possibilities.

    Read the article

  • 2 Days of Share & Point

    - by Mark Rackley
    Groovy man… SharePoint Saturday Ozarks is back for 2010, bigger and better than before. Join us for a far out time and learn more about SharePoint in one day than you could in a year from the man…

    Yes! SharePoint Saturday Ozarks is back! SharePoint Saturday Ozarks is the largest SharePoint conference in Arkansas, Southern Missouri, and the very northeast tip of Oklahoma. Last year we had a great turnout with 20 speakers, 5 MVPs, and attendees coming from Arkansas, Texas, Oklahoma, Missouri, Kansas, Nebraska, Indiana, Ohio, Alabama, Michigan, and Washington.

    Hey man… what's SharePoint Saturday anyway? Sounds like a conspiracy man…

    Not to worry, SharePoint Saturday is not an arm of the government bent on mind control or any attempt whatsoever to bring you down man. SharePoint Saturday is a grass-roots effort started by Michael Lotter (http://www.sharepointsaturday.org/pages/about.aspx). It is a FREE one-day event where the best SharePoint speakers gather to present their love, hatred, and frustrations of SharePoint to those lucky individuals who attend. Lessons are learned, contacts are made, prizes are won, and food is eaten, with assorted beverages consumed until the wee hours of the morning.

    SharePoint Saturday started with just a few sporadic one-day events here and there. However, over the past year SharePoint Saturday has exploded, and it's hard to find a weekend where there is NOT a SharePoint Saturday event happening in some corner of the globe. There are even occasions where there are two SharePoint Saturdays on the same day! Many people are pleasantly surprised at the caliber of speakers at these SharePoint Saturday events. For the most part, these speakers are more eloquent, practiced, and practical than the speakers you find at the major multi-day conferences. These guys aren't even paid to speak.. they do it out of love man…

    SharePoint Saturday Ozarks 2009 Alumni

    We had a star-studded cast last year with many returning this year! Just check out the fun that they had… (photo captions: John Ferringer - admin rockstar, I can still sense the awesomeness; SharePoint poster children Mike Watson & Laura Rogers; Lori Gowin spreading the SharePoint love; Eric Shupps is a little bit country and a little bit rock and roll; Cathy Dew, Sean McDonough, and JD Wade relaxing between gigs.) Actually, you can see real photos from last year's SharePoint Saturday Ozarks here: picasaweb.google.com/mrackley/SharePointSaturdayOzarks#

    What's new for SharePoint Saturday Ozarks 2010

    SharePoint Saturday Ozarks 2010 will totally blow your mind man. We're getting the band back together with many returning speakers and a few new faces. Joel Oleson will be speaking this year; maybe he'll grace us with his song stylings. Sadly, once again, Andrew Connell will not be able to attend SharePoint Saturday Ozarks, however he did feel the need to show his support in his own way. Prizes this year currently include books, software, a Zune HD, and much more!

    Wait man… you said 2 days? I thought it was a one-day event?

    Correct you are, my herbal-smelling friend… SharePoint Saturday Ozarks 2010 will spread the love an additional day this year. The first day will be all about the SharePoint love; on day 2 we will be taking a leisurely float down the Buffalo National River for those interested in a truly unique experience (no banjos allowed please). Here are the details:

    WHAT
    4-5 hour float down the Buffalo National River

    WHEN & WHERE
    Sunday June 13th. We will be leaving at 10am from the parking lot of:

        Gordon's Motel & Canoe Rental
        Old Highway 7
        Jasper, AR 72641
        (870) 446-5252

    Jasper is about 30 minutes south of Harrison, AR on Highway 7 South. You are responsible for bumming a ride to/from Gordon's Motel, but they will be shuttling us to/from the river and providing canoes and a boxed lunch.

    WHAT ELSE?
    The float trip is dependent on the weather, of course; we won't be floating down the river in a thunderstorm. However, I planned SPS Ozarks around a time of year ideal for floating. We aren't talking class 5 rapids here - you don't need any real skill, but you need to be okay with possibly tipping your canoe over once or twice. You can bring your own assorted beverages with you, but glass containers are not allowed on the river. I suggest a small cooler with extra snacks and drinks. Also bring clothing you can get wet in (these SharePoint people can get ornery).

    HOW DO I SIGN UP?
    When you register for SharePoint Saturday Ozarks, you will have the option to also sign up for the float trip. Seats are limited though! If you do not intend to go, please do not take someone else's place. The cost for the float trip will be about $35 per person (which you are responsible for unless we find a sponsor). The price includes shuttle to/from the river, canoe, life jackets, paddles, and boxed lunch.

    Far out man… how do I register???

    You can register for SharePoint Saturday Ozarks by going to http://spsozarks.eventbrite.com/. We are limited to 200 people for the conference and 50 people for the float trip, so register today before we are sold out. Lodging for SharePoint Saturday Ozarks will once again take place at the Hotel Seville; Annex Suites are available for $103.20.

    This is so groovy.. how can I help?

    I'm glad you asked! We are still looking for a few sponsors and one or two more speakers. If you are interested please let me know! You can find out more information at http://www.sharepointsaturday.org/ozarks

    Hey… wait a minute…. what exactly IS SharePoint man???

    Come to SharePoint Saturday Ozarks and find out!! See you guys there!

    Read the article

< Previous Page | 120 121 122 123 124 125 126  | Next Page >