Search Results


  • Unusually high dentry cache usage

    - by Wolfgang Stengel
    Problem

    A CentOS machine with kernel 2.6.32 and 128 GB physical RAM ran into trouble a few days ago. The responsible system administrator tells me that the PHP-FPM application was no longer responding to requests in a timely manner due to swapping, and having seen in free that almost no memory was left, he chose to reboot the machine. I know that free memory can be a confusing concept on Linux and a reboot perhaps was the wrong thing to do. However, the mentioned administrator blames the PHP application (which I am responsible for) and refuses to investigate further.

    What I could find out on my own is this:

    - Before the restart, the free memory (incl. buffers and cache) was only a couple of hundred MB.
    - Before the restart, /proc/meminfo reported a Slab memory usage of around 90 GB (yes, GB).
    - After the restart, the free memory was 119 GB, going down to around 100 GB within an hour as the PHP-FPM workers (about 600 of them) were coming back to life, each of them showing between 30 and 40 MB in the RES column in top (which has been this way for months and is perfectly reasonable given the nature of the PHP application). There is nothing else in the process list that consumes an unusual or noteworthy amount of RAM.
    - After the restart, Slab memory was around 300 MB.

    I have been monitoring the system ever since, and most notably the Slab memory is increasing in a straight line at a rate of about 5 GB per day. Free memory as reported by free and /proc/meminfo decreases at the same rate. Slab is currently at 46 GB. According to slabtop, most of it is used for dentry entries.

    Free memory:

        free -m
                     total       used       free     shared    buffers     cached
        Mem:        129048      76435      52612          0        144       7675
        -/+ buffers/cache:      68615      60432
        Swap:         8191          0       8191

    Meminfo:

        cat /proc/meminfo
        MemTotal:       132145324 kB
        MemFree:         53620068 kB
        Buffers:           147760 kB
        Cached:           8239072 kB
        SwapCached:             0 kB
        Active:          20300940 kB
        Inactive:         6512716 kB
        Active(anon):    18408460 kB
        Inactive(anon):     24736 kB
        Active(file):     1892480 kB
        Inactive(file):   6487980 kB
        Unevictable:         8608 kB
        Mlocked:             8608 kB
        SwapTotal:        8388600 kB
        SwapFree:         8388600 kB
        Dirty:              11416 kB
        Writeback:              0 kB
        AnonPages:       18436224 kB
        Mapped:             94536 kB
        Shmem:               6364 kB
        Slab:            46240380 kB
        SReclaimable:    44561644 kB
        SUnreclaim:       1678736 kB
        KernelStack:         9336 kB
        PageTables:        457516 kB
        NFS_Unstable:           0 kB
        Bounce:                 0 kB
        WritebackTmp:           0 kB
        CommitLimit:     72364108 kB
        Committed_AS:    22305444 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:       480164 kB
        VmallocChunk:   34290830848 kB
        HardwareCorrupted:      0 kB
        AnonHugePages:   12216320 kB
        HugePages_Total:     2048
        HugePages_Free:      2048
        HugePages_Rsvd:         0
        HugePages_Surp:         0
        Hugepagesize:        2048 kB
        DirectMap4k:         5604 kB
        DirectMap2M:      2078720 kB
        DirectMap1G:    132120576 kB

    Slabtop:

        slabtop --once
        Active / Total Objects (% used)    : 225920064 / 226193412 (99.9%)
        Active / Total Slabs (% used)      : 11556364 / 11556415 (100.0%)
        Active / Total Caches (% used)     : 110 / 194 (56.7%)
        Active / Total Size (% used)       : 43278793.73K / 43315465.42K (99.9%)
        Minimum / Average / Maximum Object : 0.02K / 0.19K / 4096.00K

             OBJS    ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
        221416340 221416039   3%    0.19K 11070817      20  44283268K dentry
          1123443   1122739  99%    0.41K   124827       9    499308K fuse_request
          1122320   1122180  99%    0.75K   224464       5    897856K fuse_inode
           761539    754272  99%    0.20K    40081      19    160324K vm_area_struct
           437858    223259  50%    0.10K    11834      37     47336K buffer_head
           353353    347519  98%    0.05K     4589      77     18356K anon_vma_chain
           325090    324190  99%    0.06K     5510      59     22040K size-64
           146272    145422  99%    0.03K     1306     112      5224K size-32
           137625    137614  99%    1.02K    45875       3    183500K nfs_inode_cache
           128800    118407  91%    0.04K     1400      92      5600K anon_vma
            59101     46853  79%    0.55K     8443       7     33772K radix_tree_node
            52620     52009  98%    0.12K     1754      30      7016K size-128
            19359     19253  99%    0.14K      717      27      2868K sysfs_dir_cache
            10240      7746  75%    0.19K      512      20      2048K filp

    VFS cache pressure:

        cat /proc/sys/vm/vfs_cache_pressure
        125

    Swappiness:

        cat /proc/sys/vm/swappiness
        0

    I know that unused memory is wasted memory, so this should not necessarily be a bad thing (especially given that 44 GB are shown as SReclaimable). However, apparently the machine experienced problems nonetheless, and I'm afraid the same will happen again in a few days when Slab surpasses 90 GB.

    Questions

    I have these questions:

    - Am I correct in thinking that the Slab memory is always physical RAM, and the number is already subtracted from the MemFree value?
    - Is such a high number of dentry entries normal? The PHP application has access to around 1.5 M files; however, most of them are archives and not being accessed at all by regular web traffic.
    - What could explain the fact that the number of cached inodes is much lower than the number of cached dentries? Should they not be related somehow?
    - If the system runs into memory trouble, should the kernel not free some of the dentries automatically? What could be a reason that this does not happen?
    - Is there any way to "look into" the dentry cache to see what all this memory is (i.e. what are the paths that are being cached)? Perhaps this points to some kind of memory leak, symlink loop, or indeed to something the PHP application is doing wrong.
    - The PHP application code as well as all asset files are mounted via the GlusterFS network file system; could that have something to do with it?

    Please keep in mind that I can not investigate as root, only as a regular user, and that the administrator refuses to help. He won't even run the typical echo 2 > /proc/sys/vm/drop_caches test to see if the Slab memory is indeed reclaimable.

    Any insights into what could be going on and how I can investigate any further would be greatly appreciated.
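    One thing that can be done entirely without root is to log the Slab counters over time, since /proc/meminfo is world-readable. Below is a minimal monitoring sketch in Python (not part of the original post's diagnostics; it assumes only the meminfo fields shown above) that would confirm the ~5 GB/day trend and show how much of the growth stays reclaimable:

        import time

        FIELDS = ("MemFree", "Slab", "SReclaimable", "SUnreclaim")

        def read_meminfo_kb():
            """Parse /proc/meminfo (world-readable, no root needed) into {field: kB}."""
            values = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, _, rest = line.partition(":")
                    if key in FIELDS:
                        values[key] = int(rest.split()[0])  # lines look like "Slab:  46240380 kB"
            return values

        while True:
            sample = read_meminfo_kb()
            print(time.strftime("%Y-%m-%d %H:%M"),
                  " ".join("%s=%d" % (k, sample[k]) for k in FIELDS))
            time.sleep(3600)  # 5 GB/day is roughly 210 MB/hour, easily visible hourly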
    Updates

    Some further diagnostic information:

    Mounts:

        cat /proc/self/mounts
        rootfs / rootfs rw 0 0
        proc /proc proc rw,relatime 0 0
        sysfs /sys sysfs rw,relatime 0 0
        devtmpfs /dev devtmpfs rw,relatime,size=66063000k,nr_inodes=16515750,mode=755 0 0
        devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
        tmpfs /dev/shm tmpfs rw,relatime 0 0
        /dev/mapper/sysvg-lv_root / ext4 rw,relatime,barrier=1,data=ordered 0 0
        /proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
        /dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0
        tmpfs /phptmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0
        tmpfs /wsdltmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0
        none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
        cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
        cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
        cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
        cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
        cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
        cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
        cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
        cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
        /etc/glusterfs/glusterfs-www.vol /var/www fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
        /etc/glusterfs/glusterfs-upload.vol /var/upload fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
        sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
        172.17.39.78:/www /data/www nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 0 0

    Mount info:

        cat /proc/self/mountinfo
        16 21 0:3 / /proc rw,relatime - proc proc rw
        17 21 0:0 / /sys rw,relatime - sysfs sysfs rw
        18 21 0:5 / /dev rw,relatime - devtmpfs devtmpfs rw,size=66063000k,nr_inodes=16515750,mode=755
        19 18 0:11 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
        20 18 0:16 / /dev/shm rw,relatime - tmpfs tmpfs rw
        21 1 253:1 / / rw,relatime - ext4 /dev/mapper/sysvg-lv_root rw,barrier=1,data=ordered
        22 16 0:15 / /proc/bus/usb rw,relatime - usbfs /proc/bus/usb rw
        23 21 8:1 / /boot rw,relatime - ext4 /dev/sda1 rw,barrier=1,data=ordered
        24 21 0:17 / /phptmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777
        25 21 0:18 / /wsdltmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777
        26 16 0:19 / /proc/sys/fs/binfmt_misc rw,relatime - binfmt_misc none rw
        27 21 0:20 / /cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset
        28 21 0:21 / /cgroup/cpu rw,relatime - cgroup cgroup rw,cpu
        29 21 0:22 / /cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct
        30 21 0:23 / /cgroup/memory rw,relatime - cgroup cgroup rw,memory
        31 21 0:24 / /cgroup/devices rw,relatime - cgroup cgroup rw,devices
        32 21 0:25 / /cgroup/freezer rw,relatime - cgroup cgroup rw,freezer
        33 21 0:26 / /cgroup/net_cls rw,relatime - cgroup cgroup rw,net_cls
        34 21 0:27 / /cgroup/blkio rw,relatime - cgroup cgroup rw,blkio
        35 21 0:28 / /var/www rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-www.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
        36 21 0:29 / /var/upload rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-upload.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
        37 21 0:30 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs sunrpc rw
        39 21 0:31 / /data/www rw,relatime - nfs 172.17.39.78:/www rw,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78

    GlusterFS config:

        cat /etc/glusterfs/glusterfs-www.vol
        volume remote1
          type protocol/client
          option transport-type tcp
          option remote-host 172.17.39.71
          option ping-timeout 10
          option transport.socket.nodelay on  # undocumented option for speed
            # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
          option remote-subvolume /data/www
        end-volume

        volume remote2
          type protocol/client
          option transport-type tcp
          option remote-host 172.17.39.72
          option ping-timeout 10
          option transport.socket.nodelay on  # undocumented option for speed
            # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
          option remote-subvolume /data/www
        end-volume

        volume remote3
          type protocol/client
          option transport-type tcp
          option remote-host 172.17.39.73
          option ping-timeout 10
          option transport.socket.nodelay on  # undocumented option for speed
            # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
          option remote-subvolume /data/www
        end-volume

        volume remote4
          type protocol/client
          option transport-type tcp
          option remote-host 172.17.39.74
          option ping-timeout 10
          option transport.socket.nodelay on  # undocumented option for speed
            # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
          option remote-subvolume /data/www
        end-volume

        volume replicate1
          type cluster/replicate
          option lookup-unhashed off  # off will reduce cpu usage, and network
          option local-volume-name 'hostname'
          subvolumes remote1 remote2
        end-volume

        volume replicate2
          type cluster/replicate
          option lookup-unhashed off  # off will reduce cpu usage, and network
          option local-volume-name 'hostname'
          subvolumes remote3 remote4
        end-volume

        volume distribute
          type cluster/distribute
          subvolumes replicate1 replicate2
        end-volume

        volume iocache
          type performance/io-cache
          option cache-size 8192MB  # default is 32MB
          subvolumes distribute
        end-volume

        volume writeback
          type performance/write-behind
          option cache-size 1024MB
          option window-size 1MB
          subvolumes iocache
        end-volume

        ### Add io-threads for parallel requisitions
        volume iothreads
          type performance/io-threads
          option thread-count 64  # default is 16
          subvolumes writeback
        end-volume

        volume ra
          type performance/read-ahead
          option page-size 2MB
          option page-count 16
          option force-atime-update no
          subvolumes iothreads
        end-volume
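    For planning purposes, the numbers in the post are enough for a back-of-the-envelope estimate of when the machine will be back at the level where it last fell over. A small sketch, assuming the reported growth really is linear:

        # All three figures come from the post above.
        slab_now_gb = 46.0       # current Slab size
        slab_trouble_gb = 90.0   # Slab level observed just before the previous incident
        growth_gb_per_day = 5.0  # observed linear growth rate

        days_left = (slab_trouble_gb - slab_now_gb) / growth_gb_per_day
        print("Days until Slab reaches ~90 GB again: %.1f" % days_left)  # -> 8.8

    In other words, "in a few days" is roughly nine days out at the current rate.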


  • 26 Days: Countdown to Oracle OpenWorld 2012

    - by Michael Snow
    Welcome to our countdown to Oracle OpenWorld! Oracle OpenWorld 2012 is just around the corner. In less than 26 days, San Francisco will be invaded by an expected 50,000 people from all over the world. Here on the Oracle WebCenter team, we’ve all been working to help make the experience a great one for all our WebCenter customers. For a sneak peek, we’ll be spending this week giving you a teaser of what to look forward to if you are joining us in San Francisco from September 30th through October 4th. We have Oracle WebCenter sessions covering all topics imaginable. Take a look and use the tools we provide to build out your schedule in advance and reserve your seats in your favorite sessions. That gives you plenty of time to plan for your week with us in San Francisco. If, unfortunately, your boss denied your request to attend, there are still some ways that you can join in the experience virtually On-Demand.

    This year we are expanding even more up north of Market Street and will be taking over Union Square as well. Check out this map of San Francisco to get a sense of how much of a footprint Oracle OpenWorld has grown to this year. With so much to see and so many sessions to learn from, it’s no wonder that people get excited. Add to that a good mix of fun and all of the possible WebCenter sessions you could attend, and you won’t want to sleep at all to take full advantage of such an opportunity. We’ll also have our annual WebCenter Customer Appreciation reception; stay tuned this week for some more info on registration to make sure you’ll be able to join us. If you’ve been following the America’s Cup at all and believe in EXTREME PERFORMANCE, you’ll definitely want to take a look at this video from last year’s OpenWorld Keynote.

    Important OpenWorld Links:

    - Attendee / Presenters Toolkit
    - Oracle Schedule Builder
    - WebCenter Sessions (listed in the catalog under Fusion Middleware as “Portals, Sites, Content, and Collaboration”)
    - Oracle Music Festival - AMAZING line up!!
    - Oracle Customer Appreciation Night - LOOK HERE!!
    - Oracle OpenWorld LIVE On-Demand

    Here are all the WebCenter sessions broken down by day for your viewing pleasure.

    Monday, October 1st

    CON8885 - Simplify CRM Engagement with Contextual Collaboration
    Are your sales teams disconnected and disengaged? Do you want a tool for easily connecting expertise across your organization and providing visibility into the complete sales process? Do you want a way to enhance and retain organization knowledge? Oracle Social Network is the answer. Attend this session to learn how to make CRM easy, effective, and efficient for use across virtual sales teams. Also learn how Oracle Social Network can drive sales force collaboration with natural conversations throughout the sales cycle, promote sales team productivity through purposeful social networking without the noise, and build cross-team knowledge by integrating conversations with CRM and other business applications.

    CON8268 - Oracle WebCenter Strategy: Engaging Your Customers. Empowering Your Business
    Oracle WebCenter is a user engagement platform for social business, connecting people and information. Attend this session to learn about the Oracle WebCenter strategy, and understand where Oracle is taking the platform to help companies engage customers, empower employees, and enable partners. Business success starts with ensuring that everyone is engaged with the right people and the right information and can access what they need through the channel of their choice—Web, mobile, or social. Are you giving customers, employees, and partners the best-possible experience? Come learn how you can!

    HOL10208 - Add Social Capabilities to Your Enterprise Applications
    Oracle Social Network enables you to add real-time collaboration capabilities into your enterprise applications, so that conversations can happen directly within your business systems. In this hands-on lab, you will try out the Oracle Social Network product to collaborate with other attendees, using real-time conversations with document sharing capabilities. Next you will embed social capabilities into a sample Web-based enterprise application, using embedded UI components. Experts will also write simple REST-based integrations, using the Oracle Social Network API to programmatically create social interactions.

    CON8893 - Improve Employee Productivity with Intuitive and Social Work Environments
    Social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. Forward-thinking organizations today need technologies and infrastructures to help them advance to the next level and integrate social activities with business applications to deliver a user experience that simplifies business processes and enterprise application engagement. Attend this session to hear from an innovative Oracle Social Network customer and learn how you can improve productivity with intuitive and social work environments and empower your employees with innovative social tools to enable contextual access to content and dynamic personalization of solutions.

    CON8270 - Oracle WebCenter Content Strategy and Vision
    Oracle WebCenter provides a strategic content infrastructure for managing documents, images, e-mails, and rich media files. With a single repository, organizations can address any content use case, such as accounts payable, HR onboarding, document management, compliance, records management, digital asset management, or Website management. In this session, learn about future plans for how Oracle WebCenter will address new use cases as well as new integrations with Oracle Fusion Middleware and Oracle Applications, leveraging your investments by making your users more productive and error-free.

    CON8269 - Oracle WebCenter Sites Strategy and Vision
    Oracle’s Web experience management solution, Oracle WebCenter Sites, enables organizations to use the online channel to drive customer acquisition and brand loyalty. It helps marketers and business users easily create and manage contextually relevant, social, interactive online experiences across multiple channels on a global scale. In this session, learn about future plans for how Oracle WebCenter Sites will provide you with the tools, capabilities, and integrations you need in order to continue to address your customers’ evolving requirements for engaging online experiences and keep moving your business forward.

    CON8896 - Living with SharePoint
    SharePoint is a popular platform, but it’s not always the best fit for Oracle customers. In this session, you’ll discover the technical and nontechnical limitations and pitfalls of SharePoint and learn about Oracle alternatives for collaboration, portals, enterprise and Web content management, social computing, and application integration. The presentation shows you how to integrate with SharePoint when business or IT requirements dictate and covers cloud-based (Office 365) and on-premises versions of SharePoint. Presented by a former Microsoft director of SharePoint product management and backed by independent customer research, this session will prepare you to answer the question “Why don’t we just use SharePoint for that?” the next time it comes up in your organization.

    CON7843 - Content-Enabling Enterprise Processes with Oracle WebCenter
    Organizations today continually strive to automate business processes, reduce costs, and improve efficiency. Many business processes are content-intensive and unstructured, requiring ad hoc collaboration, and distributed in nature, requiring many approvals and generating huge volumes of paper. In this session, learn how Oracle and SYSTIME have partnered to help a customer content-enable its enterprise with Oracle WebCenter Content and Oracle WebCenter Imaging 11g and integrate them with Oracle Applications.

    CON6114 - Tape Robotics’ Newest Superhero: Now Fueled by Oracle Software
    For small, midsize, and rapidly growing businesses that want the most energy-efficient, scalable storage infrastructure to meet their rapidly growing data demands, Oracle’s most recent addition to its award-winning tape portfolio leverages several pieces of Oracle software. With Oracle Linux, Oracle WebLogic, and Oracle Fusion Middleware tools, the library achieves a higher level of usability than previous products while offering customers a familiar interface for management, plus ease of use. This session examines the competitive advantages of the tape library and how Oracle software raises customer satisfaction. Learn how the combination of Oracle engineered systems, Oracle Secure Backup, and Oracle’s StorageTek tape libraries provide end-to-end coverage of your data.

    CON9437 - Mobile Access Management
    With more than five billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session focuses on how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access.

    CON7815 - Customer Experience Online in Cloud: Oracle WebCenter Sites, Oracle ATG Apps, Oracle Exalogic
    Oracle WebCenter Sites and Oracle’s ATG product line together can provide a compelling marketing and e-commerce experience. When you couple them with the extreme performance of Oracle Exalogic, you’ll see unmatched scalability that provides you with a true cloud-based solution. In this session, you’ll learn how running Oracle WebCenter Sites and ATG applications on Oracle Exalogic delivers both a private and a public cloud experience. Find out what it takes to get these systems working together and delivering engaging Web experiences. Even if you aren’t considering Oracle Exalogic today, the rich Web experience of Oracle WebCenter, paired with the depth of the ATG product line, can provide your business full support, from merchandising through sale completion.

    CON8271 - Oracle WebCenter Portal Strategy and Vision
    To innovate and keep a competitive edge, organizations need to leverage the power of agile and responsive Web applications. Oracle WebCenter Portal enables you to do just that, by delivering intuitive user experiences for enterprise applications to drive innovation with composite applications and mashups. Attend this session to learn firsthand from customers how Oracle WebCenter Portal extends the value of existing enterprise applications, business processes, and content; delivers a superior business user experience; and maximizes limited IT resources.

    CON8880 - The Connected Customer Experience Begins with the Online Channel
    There’s a lot of talk these days about how to connect the customer journey across various touchpoints—from Websites and e-commerce to call centers and in-store—to provide experiences that are more relevant and engaging and ultimately gain competitive edge. Doing it all at once isn’t a realistic objective, so where do you start? Come to this session, and hear about three steps you can take that can help you begin your journey toward delivering the connected customer experience. You’ll hear how Oracle now has an integrated digital marketing platform for your corporate Website, your e-commerce site, your self-service portal, and your marketing and loyalty campaigns, and you’ll learn what you can do today to begin executing on your customer experience initiatives.

    GEN11451 - General Session: Building Mobile Applications with Oracle Cloud
    With the prevalence of smart mobile devices, companies are facing an increased demand to provide access to data and applications from new channels. However, developing applications for mobile devices poses some unique challenges. Come to this session to learn how Oracle addresses these challenges, offering a simpler way to develop and deploy cross-device mobile applications. See how Oracle Cloud enables you to access applications, data, and services from mobile channels in an easier way.

    CON8272 - Oracle Social Network Strategy and Vision
    One key way of increasing employee productivity is by bringing people, processes, and information together—providing new social capabilities to enable business users to quickly correspond and collaborate on business activities. Oracle WebCenter provides a user engagement platform with social and collaborative technologies to empower business users to focus on their key business processes, applications, and content in the context of their role and process. Attend this session to hear how the latest social capabilities in Oracle Social Network are enabling organizations to transform themselves into social businesses.

    Tuesday, October 2nd

    HOL10194 - Enterprise Content Management Simplified: Oracle WebCenter Content’s Next-Generation UI
    Regardless of the nature of your business, unstructured content underpins many of its daily functions. Whether you are working with traditional presentations, spreadsheets, or text documents—or even with digital assets such as images and multimedia files—your content needs to be accessible and manageable in convenient and intuitive ways to make working with the content easier. Additionally, you need the ability to easily share documents with coworkers to facilitate a collaborative working environment. Come to this session to see how Oracle WebCenter Content’s next-generation user interface helps modern knowledge workers easily manage personal and enterprise documents in a collaborative environment.

    CON8877 - Develop a Mobile Strategy with Oracle WebCenter: Engage Customers, Employees, and Partners
    Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables users to have information available at their fingertips, enabling them to take action the moment they make a decision, interact in the moment of convenience, and take advantage of new service offerings in their preferred channels. All your employees have your mobile applications in their pocket; now what are you going to do? It is a critical step for companies to think through what their employees, customers, and partners really need on their devices. Attend this session to see how Oracle WebCenter enables you to better engage your customers, employees, and partners by providing a unified experience across multiple channels.

    CON9447 - Enabling Access for Hundreds of Millions of Users
    How do you grow your business by identifying, authenticating, authorizing, and federating users on the Web, leveraging social identity and the open source OAuth protocol? How do you scale your access management solution to support hundreds of millions of users? With social identity support out of the box, Oracle’s access management solution is also benchmarked for 250-million-user deployment according to real-world customer scenarios. In this session, you will learn about the social identity capability and the 250-million-user benchmark testing of Oracle Access Manager and Oracle Adaptive Access Manager running on Oracle Exalogic and Oracle Exadata.

    HOL10207 - Build an Intranet Portal with Oracle WebCenter
    In this hands-on lab, you’ll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you’ll manage site resources such as page styles, templates, and navigation. You’ll edit content stored in Oracle WebCenter Content directly from your portal. You’ll also experience the latest features that promote collaboration, social networking, and personal productivity.

    CON2906 - Get Proactive: Best Practices for Maintaining Oracle Fusion Middleware
    You chose Oracle Fusion Middleware products to help your organization deliver superior business results. Now learn how to take full advantage of your software with all the great tools, resources, and product updates you’re entitled to through Oracle Support. In this session, Oracle product experts provide proven best practices to help you work more efficiently, plan and prepare for upgrades and patching more effectively, and manage risk. Topics include configuration management tools, remote diagnostics, My Oracle Support Community, and My Oracle Support Lifecycle Advisors. New users and Oracle Fusion Middleware experts alike are guaranteed to leave with fresh ideas and practical, easy-to-implement next steps.

    CON8878 - Oracle WebCenter’s Cloud Strategy: From Social and Platform Services to Mashups
    Cloud computing represents a paradigm shift in how we build applications, automate processes, collaborate, and share and in how we secure our enterprise. Additionally, as you adopt cloud-based services in your organization, it’s likely that you will still have many critical on-premises applications running. With these mixed environments, multiple user interfaces, different security, and multiple datasources and content sources, how do you start evolving your strategy to account for these challenges? Oracle WebCenter offers a complete array of technologies enabling you to solve these challenges and prepare you for the cloud. Attend this session to learn how you can use Oracle WebCenter in the cloud as well as create on-premises and cloud application mash-ups.

    CON8901 - Optimize Enterprise Business Processes with Oracle WebCenter and Oracle BPM
    Do you have business processes that span multiple applications? Are you grappling with how to have visibility across these business processes; how to manage content that is associated with these processes; and, most importantly, how to model and optimize these business processes? Attend this session to hear how Oracle WebCenter and Oracle Business Process Management provide a unique set of integrated solutions to provide a composite application dashboard across these business processes and offer a solution for content-centric business processes.

    CON8883 - Deliver Engaging Interfaces to Oracle Applications with Oracle WebCenter
    Critical business processes live within enterprise applications, and application users need to manage and execute these processes as effectively as possible. Oracle provides a comprehensive user engagement platform to increase user productivity and optimize overall processes within Oracle Applications—Oracle E-Business Suite and Oracle’s Siebel, PeopleSoft, and JD Edwards product families—and third-party applications. Attend this session to learn how you can integrate these applications with Oracle WebCenter to deliver composite application dashboards to your end users—whether they are your customers, partners, or employees—for enhanced usability and Web 2.0–enabled enterprise portals.

    Wednesday, October 3rd

    CON8895 - Future-Ready Intranets: How Aramark Re-engineered the Application Landscape
    There are essential techniques and technologies you can use to deliver employee portals that garner higher productivity, improve business efficiency, and increase user engagement. Attend this session to learn how you can leverage Oracle WebCenter Portal as a user engagement platform for bringing together business process management, enterprise content management, and business intelligence into a highly relevant and integrated experience. Hear how Aramark has leveraged Oracle WebCenter Portal and Oracle WebCenter Content to deliver a unified workspace providing simpler navigation and processing, consolidation of tools, easy access to information, integrated search, and single sign-on.

    CON8886 - Content Consolidation: Save Money, Increase Efficiency, and Eliminate Silos
    Organizations are looking for ways to save money and be more efficient. With content in many different places, it’s difficult to know where to look for a document and whether the document is the most current version. With Oracle WebCenter, content can be consolidated into one best-of-breed repository that is secure, scalable, and integrated with your business processes and applications. Users can find the content they need, where they need it, and ensure that it is the right content. This session covers content challenges that affect your business; content consolidation that can lead to savings in storage and administration costs and can lower risks; and how companies are realizing savings.

    CON8911 - Improve Online Experiences for Customers and Partners with Self-Service Portals
    Are you able to provide your customers and partners an easy-to-use online self-service experience? Are you processing high-volume transactions and struggling with call center bottlenecks or back-end systems that won’t integrate, causing order delays and customer frustration? Are you looking to target content such as product and service offerings to your end users? This session shares approaches to providing targeted delivery as well as strategies and best practices for transforming your business by providing an intuitive user experience for your customers and partners.

    CON6156 - Top 10 Ways to Integrate Oracle WebCenter Content
    This session covers 10 common ways to integrate Oracle WebCenter Content with other enterprise applications and middleware. It discusses out-of-the-box modules that provide expanded features in Oracle WebCenter Content—such as enterprise search, SOA, and BPEL—as well as developer tools you can use to create custom integrations. The presentation also gives guidance on which integration option may work best in your environment.

    HOL10207 - Build an Intranet Portal with Oracle WebCenter
    In this hands-on lab, you’ll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you’ll manage site resources such as page styles, templates, and navigation. You’ll edit content stored in Oracle WebCenter Content directly from your portal. You’ll also experience the latest features that promote collaboration, social networking, and personal productivity.

    CON7817 - Migration to Oracle WebCenter Imaging 11g
    Customers today continually strive to automate business processes, reduce costs, and improve efficiency. The accounts payable process—which is often distributed in nature, requires many approvals, and generates huge volumes of paper invoices—is automated by many customers. In this session, learn how Oracle and SYSTIME have partnered to help a customer migrate its existing Oracle Imaging and Process Management Release 7.6 to the latest Oracle WebCenter Imaging 11g and integrate it with Oracle’s JD Edwards family of products.

    CON8910 - How to Engage Customers Across Web, Mobile, and Social Channels
    Whether on desktops at the office, on tablets at home, or on mobile phones when on the go, today’s customers are always connected. To engage today’s customers, you need to make the online customer experience connected and consistent across a host of devices and multiple channels, including Web, mobile, and social networks. Managing this multichannel environment can result in lots of headaches without the right tools. Attend this session to learn how Oracle WebCenter Sites solves the challenge of multichannel customer engagement.

    HOL10206 - Oracle WebCenter Sites 11g: Transforming the Content Contributor Experience
    Oracle WebCenter Sites 11g makes it easy for marketers and business users to contribute to and manage Websites with the new visual, contextual, and intuitive Web authoring interface. In this hands-on lab, you will create and manage content for a sports-themed Website, using many of the new and enhanced features of the 11g release.

    CON8900 - Building Next-Generation Portals: An Interactive Customer Panel Discussion
    Social and collaborative technologies have changed how people interact, learn, and collaborate, and providing a modern, social Web presence is imperative to remain competitive in today’s market. Can your business benefit from a more collaborative and interactive portal environment for employees, customers, and partners? Attend this session to hear from Oracle WebCenter Portal customers as they share their strategies and best practices for providing users with a modern experience that adapts to their needs and includes personalized access to content in context. The panel also addresses how customers have benefited from creating next-generation portals by migrating from older portal technologies to Oracle WebCenter Portal.

    CON9625 - Taking Control of Oracle WebCenter Security
    Organizations are increasingly looking to extend their Oracle WebCenter portal for social business, to serve external users and provide seamless access to the right information. In particular, many organizations are extending Oracle WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. This session focuses on how customers are leveraging, securing, and providing access control to Oracle WebCenter portal and mobile solutions. You will learn best practices and hear real-world examples of how to provide flexible and granular access control for Oracle WebCenter deployments, using Oracle Platform Security Services and Oracle Access Management Suite product offerings.

    CON8891 - Extending Social into Enterprise Applications and Business Processes
    Oracle Social Network is an extensible social platform that enables contextual collaboration within enterprise applications and business processes, providing relevant data from across various enterprise systems in one place. Attend this session to see how an Oracle Social Network customer is integrating multiple applications—such as CRM, HCM, and business processes—into Oracle Social Network and Oracle WebCenter to enable individuals and teams to solve complex cross-organizational business problems more effectively by utilizing the social enterprise.

    Thursday, October 4th

    CON8899 - Becoming a Social Business: Stories from the Front Lines of Change
    What does it really mean to be a social business? How can you change your organization to embrace social approaches? What pitfalls do you need to avoid? In this lively panel discussion, customer and industry thought leaders in social business explore these topics and more as they share their stories of the good, the bad, and the ugly that can happen when embracing social methods and technologies to improve business success. Using moderated questions and open Q&A from the audience, the panel discusses vital topics such as the critical factors for success, the major issues to avoid, how to gain senior executive support for social efforts, how to handle undesired behavior, and how to measure business impact. It takes a thought-provoking look at becoming a social business from the inside.

    CON6851 - Oracle WebCenter and Oracle Business Intelligence Enterprise Edition to Create Vendor Portals
    Large manufacturers of grocery items routinely find themselves depending on the inventory management expertise of their wholesalers and distributors. Inventory costs can be managed more efficiently by the manufacturers if they have better insight into the inventory levels of items carried by their distributors. This creates a unique opportunity for distributors and wholesalers to leverage this knowledge into a revenue-generating subscription service. Oracle Business Intelligence Enterprise Edition and Oracle WebCenter Portal play a key part in enabling creation of business-managed business intelligence portals for vendors. This session discusses one customer that implemented this by leveraging Oracle WebCenter and Oracle Business Intelligence Enterprise Edition.

    CON8879 - Provide a Personalized and Consistent Customer Experience in Your Websites and Portals
    Your customers engage with your company online in different ways throughout their journey—from prospecting by acquiring information on your corporate Website to transacting through self-service applications on your customer portal—and then the cycle begins again when they look for new products and services. Ensuring that the customer experience is consistent and personalized across online properties—from branding and content to interactions and transactions—can be a daunting task. Oracle WebCenter enables you to speak and interact with your customers with one voice across your Websites and portals by providing an integrated platform for delivery of self-service and engagement that unifies and personalizes the online experience. Learn more in this session.

    CON8898 - Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana
    Ten years ago, people were predicting that by this time in history, we’d be some kind of utopian paperless society. As we all know, we’re not there yet, but are we getting closer? What is keeping companies from driving down the road to enterprise content management bliss? Most people understand that using ECM as a central platform enables organizations to expedite document-centric processes, but most business processes in organizations are still heavily paper-based. Many of these processes could be automated and improved with an ECM platform infrastructure. In this panel discussion, you’ll hear from Oracle WebCenter customers that have already solved some of these challenges as they share their strategies for success and roads to avoid along your journey.

    CON8908 - Oracle WebCenter Portal: Creating and Using Content Presenter Templates
    Oracle WebCenter Portal applications use task flows to display and integrate content stored in the Oracle WebCenter Content server. Among the most flexible task flows is Content Presenter, which renders various types of content on an Oracle WebCenter Portal page. Although Oracle WebCenter Portal comes with a set of predefined Content Presenter templates, developers can create their own templates for specific rendering needs. This session shows the lifecycle of developing Content Presenter task flows, including how to create, package, import, modify at runtime, and use such templates. In addition to simple examples with Oracle Application Development Framework (Oracle ADF) UI elements to render the content, it shows how to use other UI technologies, CSS files, and JavaScript libraries.

    CON8897 - Using Web Experience Management to Drive Online Marketing Success
    Every year, the online channel becomes more imperative for driving organizational top-line revenue, but for many companies, mastering how to best market their products and services in a fast-evolving online world with high customer expectations for personalized experiences can be a complex proposition. Come to this panel discussion, and hear directly from online marketers how they are succeeding today by using Web experience management to drive marketing success, using capabilities such as targeting and optimization, user-generated content, mobile site publishing, and site visitor personalization to deliver engaging online experiences.

    CON8892 - Oracle’s Journey to Social Business
    Social business is a revolution, one that is causing rapidly accelerating change in how companies and customers engage with one another and how employees work together. Oracle’s goal in becoming a social business is to create a socially connected organization in which working collaboratively across geographical locations, lines of business, and management chains is second nature, enabling innovative solutions to business challenges. We can achieve this by connecting the right people, finding the right content, communicating with the right people, collaborating at the right time, and building the right communities in the right context—all ready in the CLOUD. Attend this session to see how Oracle is transforming itself into a social business.

    If you’ve read all the way to the end here, we are REALLY looking forward to seeing you in San Francisco.


  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come.

    So, if we start at the start, how did you get started in programming?

    When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that.

    How did you learn to program? Was there someone helping you out?

    Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!”

    When did you think this might be something that you actually wanted to do as a career?

    For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it.

    Do you think you learnt much about programming at university?

    Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs, which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are not about programming itself but about things that have been programmed, like TCP. Those are really important for examples of how to approach things.

    Did you do any internships while you were at university?

    Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then.

    Was there just not a culture of looking after your code?
    There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that, they didn’t have people who had learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn.

    I was going to ask how you found working with people who were more experienced than you…

    When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and said what they didn’t like. By that point, it was way too late and I’d disagree with them.

    Do you think the lack of code reviews was a bad thing?

    I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much; I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight, and then the right way to do it is continuously and in very, very small chunks.

    Have you had any significant projects you’ve worked on outside of a job?

    When I was a teenager I wrote all sorts of stuff. I used to write games; I derived how to do isometric projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week.

    Could you explain a little more about NAct?

    It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors; NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like.
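    NAct itself is a C# library, so the snippet below is not its API; it is a minimal, language-agnostic sketch of the pattern Alex describes, written in Python with hypothetical names. One thread owns the mutable state, and every other thread interacts with it only by posting messages to its inbox and receiving replies as messages.

        import queue
        import threading

        class CounterActor:
            """Toy actor: exactly one thread ever touches the count."""

            def __init__(self):
                self._inbox = queue.Queue()
                self._count = 0  # owned exclusively by the actor thread below
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                # Single consumer loop: messages serialize all access, so no locks.
                while True:
                    kind, payload = self._inbox.get()
                    if kind == "add":
                        self._count += payload
                    elif kind == "get":
                        payload.put(self._count)  # reply by message, not shared state

            def add(self, n):
                self._inbox.put(("add", n))

            def get(self):
                reply = queue.Queue(maxsize=1)
                self._inbox.put(("get", reply))
                return reply.get()

        counter = CounterActor()
        for _ in range(1000):
            counter.add(1)
        print(counter.get())  # 1000: the "get" message queues behind all the "add"s

    Note that get() blocks its caller, which is safe here only because the caller is not itself an actor; blocking one actor on another is exactly the situation the rules above forbid, and the situation NAct sidesteps with tasks and await, as Alex explains next.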
    Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases?

    It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother, because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about.

    When you’re using actors, you presumably still have to write your code in a different way from how you would otherwise, using single-threaded code.

    You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks, so that you can use “async” and “await” to wait asynchronously for it. But other than that, you can still stick things in classes, so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump.

    If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop?

    Usually pretty easy. If you can identify even one boundary where things look like messages, and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult.

    If we do get 1000-core desktop PCs, do you think actors will scale up?

    It’s hard. There are always in the order of twenty to fifty actors in my whole program, because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table, but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism.
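    The dictionary-of-actors idea can be made concrete with the same toy machinery as the sketch above: hash each key to pick a shard, and let one actor own each slice. Again a hypothetical Python sketch rather than anything from NAct; it shows why the structure behaves like a distributed hash table whose chunks all live on one machine.

        import queue
        import threading

        class ShardActor:
            """Owns one slice of the dictionary; accessed only via messages."""

            def __init__(self):
                self._inbox = queue.Queue()
                self._slice = {}  # never touched by any other thread
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                while True:
                    op, key, value, reply = self._inbox.get()
                    if op == "set":
                        self._slice[key] = value
                    else:  # "get"
                        reply.put(self._slice.get(key))

            def set(self, key, value):
                self._inbox.put(("set", key, value, None))

            def get(self, key):
                reply = queue.Queue(maxsize=1)
                self._inbox.put(("get", key, None, reply))
                return reply.get()

        class ShardedDict:
            """Queuing marshals each request to the shard actor that owns the key."""

            def __init__(self, n_shards=8):
                self._shards = [ShardActor() for _ in range(n_shards)]

            def set(self, key, value):
                self._shards[hash(key) % len(self._shards)].set(key, value)

            def get(self, key):
                return self._shards[hash(key) % len(self._shards)].get(key)

        d = ShardedDict()
        d.set("answer", 42)
        print(d.get("answer"))  # 42; operations on unrelated keys proceed on other shards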
    Do you think it helps if actors get baked into the language, similarly to Erlang?

    Erlang is excellent in that it has thread-local garbage collection. C# doesn’t, so there’s a limit to how well C# actors can possibly scale, because there’s a single garbage-collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language.

    Speaking of language design, do you have a favourite programming language?

    I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much.

    When you say it doesn’t have some of the niggles?

    C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#.

    And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far?

    I’m quite a pragmatist. I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it.

    What about things in C#? For instance, there are contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile?

    I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong.

    On a slightly different note, how do you like to debug code?

    I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition; I will only identify the experiment that’s going to binary chop most effectively and repeat, rather than trying to guess anything. I suppose it’s quite top-down.
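    The “binary chop” is literally a binary search whenever the candidate causes can be ordered, for instance the stages of a processing cycle. A small sketch (the stage list and the validity probe are hypothetical stand-ins for whatever experiment the debugger allows):

        def first_bad_stage(stages, valid_after):
            """Find the first stage whose output is invalid, in O(log n) experiments.

            stages:      ordered stage names of a processing cycle
            valid_after: valid_after(i) runs stages 0..i and reports whether the
                         program state still looks correct (one experiment per call)
            """
            lo, hi = 0, len(stages) - 1
            while lo < hi:
                mid = (lo + hi) // 2
                if valid_after(mid):
                    lo = mid + 1  # state still valid: the bug is introduced later
                else:
                    hi = mid      # state already broken at mid or earlier
            return stages[lo]

        # Example: a fault injected by the second stage is found in two probes.
        stages = ["parse", "transform", "render", "write"]
        print(first_bad_stage(stages, lambda i: i < 1))  # -> "transform"

    Each probe halves the suspect region, which is why the method needs no guesses or intuition, only repeatable experiments.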
Absolutely. If at all possible I will never debug using print statements or logs; I don't put much stock in outputting logs. If there's any bug which can be reproduced locally, I'd rather chase it in the debugger than by outputting logs. And with SmartAssembly error reporting, there's not a lot that can't either be observed in an error report and just fixed, or reproduced locally. In those other situations, maybe I'll use logs, but I hate using logs. You stare at the log trying to guess what's going on, and that's exactly what I don't like doing: you have to just look at it and judge whether it looks right or wrong.

We've covered how you get to grips with bugs. How do you get to grips with an entire codebase?

I watch it in the debugger. I find little bugs and try to fix them, mostly by watching them in the debugger and gradually building an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was with SmartAssembly. The old code was a complete mess, but at least it did things top to bottom; there wasn't too much of those big abstractions where flow of control goes all over the place, into a base class and back again. Code's really hard to understand when that happens. So I like to choose a little bug and try to fix it, then choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim, so that I get a little achievement after every few hours of debugging. Once I've learnt the codebase I might be able to fix all those bugs in an hour, but I'd rather use them as an aim while I'm learning the codebase.

If I were the maintainer of a codebase, what should I do to make it as easy as possible for you to understand?

Keep distinct concepts in different places, and name your stuff so that it's obvious which concepts live there. You shouldn't have some variable that gets set miles up at the top of somewhere and then read miles further down to choose some later behaviour. I'm talking very much from a SmartAssembly point of view, because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope, loads of things to think about. If you can keep concepts separate, it aids my process of fixing bugs one at a time, because each bug is more or less understandable in the one place where it lives.

And what about tests? Do you think they help at all?

I've never had the opportunity to learn a codebase which has had tests. I don't know what it's like!

What about when you're actually developing? How useful do you find tests in finding bugs or regressions?

Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn't happen very often that a test finds a bug in the first place, and I don't really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like: "This code works at the moment, great, ship it! Ah, there's a way that this code doesn't work.
Okay, write a test, demonstrate that it doesn't work, fix it, use the test to demonstrate that it's now fixed, and keep the test for future regressions." The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened, despite the fact that you think you've fixed them, are far more likely to appear again than new bugs are.

Does that mean that when you write your code the first time, there are no tests?

Often. The chance of there being a bug in a new feature is relatively unaffected by whether I've written a test for that new feature, because I'm not good enough at writing tests to think of the bugs that I would have written into the code.

So not writing regression tests for all of your code hasn't affected you too badly?

There are different kinds of features. Some of them just always work and are just not flaky; they continue working whatever you throw at them, maybe because the type-checker is particularly effective around them. Writing tests for those features, which just tend to always work, is a waste of time. And because it's a waste of time, I'll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it's going to be flaky code as you're writing it, and I try to write it so that it isn't, but there are some things that are just inherently flaky. Very occasionally I'll think "this is going to be flaky" as I'm writing, and then maybe write a test, but not most of the time.

How do you think your programming style has changed over time?

I've got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I've also got more disciplined about encapsulation. There are practical changes too: I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote at Red Gate was the memory profiler UI, and that was an actor; I just didn't know the name for it at the time. I don't really use object-orientation, if by object-orientation you mean having n objects of the same type which are mutable. I want a constant number of mutable objects, and they should be of different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That's definitely a pattern I've seen in my own code recently. I think maybe I'm doing functional programming. Possibly. It's plausible.

If you had to summarise the essence of programming in a pithy sentence, how would you do it?

Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and that you make money from.

So you think it's an art rather than a science?

It's a little bit of engineering and a smidgeon of maths, but it's not science. Like architecture, programming sits on that boundary between art and engineering. If you want to do it really nicely, it's mostly art.
You can get away with doing architecture and programming purely with a good engineering mind, but you're not going to produce anything nice, and you're not going to enjoy doing it. Architects who are just engineering minds are not going to enjoy their job.

I suppose engineering is the foundation on which you build the art.

Exactly.

How do you think programming is going to change over the next ten years?

There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage: it will cause more people to be exposed to dynamically-typed languages, which means more dynamically-typed languages will crop up and the best features will go into them. Then people conflate the good features with the fact that the languages are dynamically typed, and more investment goes into dynamically-typed languages. They end up better, so people use them.

What about the idea of compiling other languages, possibly statically-typed, to JavaScript?

It's a reasonable idea. I would like to do it, but I don't think enough people in the world are going to do it to make it take off. The hordes of beginners are the lifeblood of a language community: they are what makes for good tools and vibrant community websites. And any particular thing which is the same as JavaScript, only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners: compilers are pretty scary, beginners don't write big code, and having your errors come up in the same place, whether they're statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I'd teach them JavaScript.

If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in?

The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience, rather than in actually telling me my compile errors. And only once you're an experienced enough programmer that a really smooth IDE experience makes a blind bit of difference does static typing make a blind bit of difference. So it's not really about the size of the codebase. If I go and write a tiny program, I'm still going to get value out of writing it in C# using ReSharper, because I'm experienced enough with C# and ReSharper to write code five times faster with that help.

Any other visions of the future?

Nobody's going to use actors, because everyone's going to be running on single-core VMs connected over protocols like JSON over HTTP. So parallelism within one operating system is going to die. But until then, you should use actors.

More Red Gater Coder interviews


  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green; it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days. In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in. To make this clear (at least I hope it becomes clear…), I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'Doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about Test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about the latter, and I myself have also contributed to it (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious… So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

The three parts of a software component

Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts described below. First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
– At the C# micro-level, the best way that I have found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form; this is specific to the respective programming language. For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part, finally, is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. Thinking about their usage in every single situation would only slow down the production process unnecessarily. So this is what I begin with:

namespace Calculator
{
    /// <summary>
    /// Defines a very simple calculator component for demo purposes.
    /// </summary>
    public interface ICalculator
    {
        /// <summary>
        /// Gets the result of the last successful operation.
        /// </summary>
        /// <value>The last result.</value>
        /// <remarks>
        /// Will be <see langword="null" /> before the first successful operation.
        /// </remarks>
        double? LastResult { get; }

    } // interface ICalculator

} // namespace Calculator

So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else, so this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element. And I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

The 'Red' (pt. 1)

First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template) and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

namespace Calculator.Test
{
    [TestFixture]
    public class CalculatorTest
    {
        private readonly ServiceContainer container = new ServiceContainer();

        [Test]
        public void CalculatorLastResultIsInitiallyNull()
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.IsNull(calculator.LastResult);
        }

    } // class CalculatorTest

} // namespace Calculator.Test

This is basically the executable formulation of (part of) what the interface definition states.

Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be…) A compiler error tells me "your code is incorrect", but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

Back to our Calculator. When I execute the above test with R#, the Gallio plugin tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file. So far my 'work' was six mouse clicks long. The only thing left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult
        {
            get
            {
                throw new NotImplementedException();
            }
        }
    }
}

Back in the test fixture, we have to put our IoC container to work:

[TestFixture]
public class CalculatorTest
{
    #region Fields

    private readonly ServiceContainer container = new ServiceContainer();

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
    }

    ...

Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

The 'Red' (pt. 2)

Now the execution of the above test gives a different result: this time, the test outcome tells me that the method under test is called.
And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test is still worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part…

The 'Green'

Making the test green is quite trivial. Just make LastResult an automatic property:

[Implements(typeof(ICalculator))]
internal class Calculator : ICalculator
{
    public double? LastResult { get; private set; }
}

One more round…

Now on to something slightly more demanding (cough…). Let's state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

    } // interface ICalculator

A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window, which renders them in a readable form. Apart from that, I'm heavily using xml code comments (see e.g. here for a detailed guide) because they make it possible to automate help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder) and then publish the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

Back to our Calculator again: two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

    } // class Calculator

As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

[Test]
[Row(-0.5, 2)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

As you can see, I'm using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose; rather, I will only have to add another Row attribute to the existing one. And, from the test report below, you can see that the argument values are explicitly printed out.
This can be a valuable documentation feature even when everything is green: one can quickly review exactly what values were tested - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development: at the moment the test result tells us that we're red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here are the test and the method implementation at the end of the second cycle:

// in CalculatorTest:

[Test]
[Row(-0.5, 2)]
[Row(295, -123)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

// in Calculator:

public double Add(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    throw new NotImplementedException();
}

So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is covered here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

[Test]
[Row(1, 1, 2)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

Again, I regularly use row-based test methods for these kinds of unit tests. The pattern shown above has proved extremely helpful in my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, you're very likely to come up with additional test cases. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software) which results in the need to test against additional values. In all these scenarios we only have to add another Row attribute to the test. And remember that the argument values are written to the test report, so as a side effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)
So your test method might look something like this in the end:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 999999999, 999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, double.MaxValue)]
[Row(4, double.MaxValue - 2.5, double.MaxValue)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

And this will produce a clear, tabular HTML report (with Gallio). Not bad for the amount of work we invested in it, huh? There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here; it's trivial enough and brings nothing new (though a sketch of what such a test might look like follows the refactoring code below)…

And finally: Refactor (for the right reasons)

To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

// CalculatorTest.cs:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtract(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}

...

// ICalculator.cs:

/// <summary>
/// Subtracts the specified operands.
/// </summary>
/// <param name="operand1">The operand1.</param>
/// <param name="operand2">The operand2.</param>
/// <returns>The result of the subtraction.</returns>
/// <exception cref="ArgumentException">
/// Argument <paramref name="operand1"/> is &lt; 0.<br/>
/// -- or --<br/>
/// Argument <paramref name="operand2"/> is &lt; 0.
/// </exception>
double Subtract(double operand1, double operand2);

...

// Calculator.cs:

public double Subtract(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    return (this.LastResult = operand1 - operand2).Value;
}

Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of lines of production code, we do an Extract Method refactoring.
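Before carrying out that refactoring, a quick aside: the LastResult requirement that was skipped above could be verified with the very same Defined-Input/Expected-Output idiom. Here is a sketch, simply mirroring the TestSubtractGivesExpectedLastResult method shown above; the test name and Row values are my own illustration, not code from the original article:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 0, 0)]
public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    // The return value is deliberately ignored here; the point of this
    // test is only that the component remembered the result in LastResult.
    calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}

Now, back to the duplication in Add() and Subtract().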
Doing this Extract Method refactoring is, one more time, only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

"STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know… I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for your class at this point, what value does refactoring add?"

Derick immediately answers his own question:

"So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc, then you should refactor it."

I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
So code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between). First: Test-driven development is the normal, natural way of writing software, and code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that has stood the test of time and causes low maintenance costs - that was produced code-first…). Second: it's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…) The test code serves 'only' to make the production code work, and it's solely the number of delivered features that counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable; in other words, roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly because software development standards are only beginning to evolve. Historically seen, the entire software development profession is very young - only at its very beginning - and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…) Although the above might look like a lot of unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of it is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines, or the lack of a tool or something like that - to 'save' a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have quite often seen and heard exactly that…). Production of high-quality products requires the use of high-quality tools. This is a platitude that every craftsman knows… The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess that's about 30% more time compared to developing the 'traditional' (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which are by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences it might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the production process described here without slowing things down. This method has proved to be 'good enough' in my practical experience, but of course I'm open to suggestions… My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!


  • How to print PCL directly to the printer? VB.NET, VS 2010

    - by Justin
Good day, I've been trying to upgrade an old print program that prints raw PCL directly to the printer. The print program takes a PCL Mask and a PCL Batch Spool File (just PCL with page-turn commands, I think), merges them, and sends them off to the printer. I'm able to send a file stream of PCL to the printer, but I get mixed results, and I do not understand why printing is so difficult under .NET. I mean, yes, there is the PrintDocument class, but to print PCL... Let's just say I'm ready to detach my printer from the network and burn it alive. Here is my class PrintDirect (rather, a hybrid class):

Imports System
Imports System.Text
Imports System.Runtime.InteropServices

<StructLayout(LayoutKind.Sequential)> _
Public Structure DOCINFO
    <MarshalAs(UnmanagedType.LPWStr)> _
    Public pDocName As String
    <MarshalAs(UnmanagedType.LPWStr)> _
    Public pOutputFile As String
    <MarshalAs(UnmanagedType.LPWStr)> _
    Public pDataType As String
End Structure 'DOCINFO

Public Class PrintDirect

    <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=False, CallingConvention:=CallingConvention.StdCall)> _
    Public Shared Function OpenPrinter(ByVal pPrinterName As String, ByRef phPrinter As IntPtr, ByVal pDefault As Integer) As Long
    End Function

    <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=False, CallingConvention:=CallingConvention.StdCall)> _
    Public Shared Function StartDocPrinter(ByVal hPrinter As IntPtr, ByVal Level As Integer, ByRef pDocInfo As DOCINFO) As Long
    End Function

    <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _
    Public Shared Function StartPagePrinter(ByVal hPrinter As IntPtr) As Long
    End Function

    <DllImport("winspool.drv", CharSet:=CharSet.Ansi, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _
    Public Shared Function WritePrinter(ByVal hPrinter As IntPtr, ByVal data As String, ByVal buf As Integer, ByRef pcWritten As Integer) As Long
    End Function

    <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _
    Public Shared Function EndPagePrinter(ByVal hPrinter As IntPtr) As Long
    End Function

    <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _
    Public Shared Function EndDocPrinter(ByVal hPrinter As IntPtr) As Long
    End Function

    <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _
    Public Shared Function ClosePrinter(ByVal hPrinter As IntPtr) As Long
    End Function

    'Helps users to work with the printer
    Public Class PrintJob

        ''' <summary>
        ''' The address of the printer to print to.
        ''' </summary>
        ''' <remarks></remarks>
        Public PrinterName As String = ""

        Dim lhPrinter As New System.IntPtr()

        ''' <summary>
        ''' The object deriving from the Win32 API, Winspool.drv.
        ''' Use this object to override the settings defined in the PrintJob class.
        ''' </summary>
        ''' <remarks>Only use this when absolutely necessary.</remarks>
        Public OverideDocInfo As New DOCINFO()

        ''' <summary>
        ''' The PCL control character, i.e. the ASCII escape character, \x1b.
        ''' </summary>
        ''' <remarks>ChrW(27)</remarks>
        Public Const PCL_Control_Character As Char = ChrW(27)

        ''' <summary>
        ''' Opens a connection to a printer; if false, the connection could not be established.
        ''' </summary>
        ''' <returns></returns>
        ''' <remarks></remarks>
        Public Function OpenPrint() As Boolean
            Dim rtn_val As Boolean = False
            PrintDirect.OpenPrinter(PrinterName, lhPrinter, 0)
            If Not lhPrinter = 0 Then
                rtn_val = True
            End If
            Return rtn_val
        End Function

        ''' <summary>
        ''' The name of the print document.
        ''' </summary>
        ''' <value></value>
        ''' <returns></returns>
        ''' <remarks></remarks>
        Public Property DocumentName As String
            Get
                Return OverideDocInfo.pDocName
            End Get
            Set(ByVal value As String)
                OverideDocInfo.pDocName = value
            End Set
        End Property

        ''' <summary>
        ''' The type of data that will be sent to the printer.
        ''' </summary>
        ''' <value>The print data type, usually "RAW" or "LPR". To override, use the OverideDocInfo object.</value>
        ''' <returns>The DataType as it was provided; if it is not part of the enumeration it is reported as UNKOWN.</returns>
        ''' <remarks></remarks>
        Public Property DocumentDataType As PrintDataType
            Get
                Select Case OverideDocInfo.pDataType.ToUpper
                    Case "RAW"
                        Return PrintDataType.RAW
                    Case "LPR"
                        Return PrintDataType.LPR
                    Case Else
                        Return PrintDataType.UNKOWN
                End Select
            End Get
            Set(ByVal value As PrintDataType)
                OverideDocInfo.pDataType = [Enum].GetName(GetType(PrintDataType), value)
            End Set
        End Property

        Public Property DocumentOutputFile As String
            Get
                Return OverideDocInfo.pOutputFile
            End Get
            Set(ByVal value As String)
                OverideDocInfo.pOutputFile = value
            End Set
        End Property

        Enum PrintDataType
            UNKOWN = 0
            RAW = 1
            LPR = 2
        End Enum

        ''' <summary>
        ''' Initializes the printing matrix.
        ''' </summary>
        ''' <param name="PrintLevel">I have no idea what the heck this is...</param>
        ''' <remarks></remarks>
        Public Sub OpenDocument(Optional ByVal PrintLevel As Integer = 1)
            PrintDirect.StartDocPrinter(lhPrinter, PrintLevel, OverideDocInfo)
        End Sub

        ''' <summary>
        ''' Starts the next page.
        ''' </summary>
        ''' <remarks></remarks>
        Public Sub StartPage()
            PrintDirect.StartPagePrinter(lhPrinter)
        End Sub

        ''' <summary>
        ''' Writes a string to the printer; can be used to write out a section of the document at a time.
        ''' </summary>
        ''' <param name="data">The string to send out to the printer.</param>
        ''' <returns>The pcWritten as an integer; 0 may mean the writer did not write out anything.</returns>
        ''' <remarks>Warning: the buffer size is automatically derived from the length of the string.</remarks>
        Public Function WriteToPrinter(ByVal data As String) As Integer
            Dim pcWritten As Integer = 0
            PrintDirect.WritePrinter(lhPrinter, data, data.Length - 1, pcWritten)
            Return pcWritten
        End Function

        Public Sub EndPage()
            PrintDirect.EndPagePrinter(lhPrinter)
        End Sub

        Public Sub CloseDocument()
            PrintDirect.EndDocPrinter(lhPrinter)
        End Sub

        Public Sub ClosePrint()
            PrintDirect.ClosePrinter(lhPrinter)
        End Sub

        ''' <summary>
        ''' Opens a connection to a printer and starts a new document.
        ''' </summary>
        ''' <returns>If false, the connection could not be established.</returns>
        ''' <remarks></remarks>
        Public Function Open() As Boolean
            Dim rtn_val As Boolean = False
            rtn_val = OpenPrint()
            If rtn_val Then
                OpenDocument()
            End If
            Return rtn_val
        End Function

        ''' <summary>
        ''' Closes the document and closes the connection to the printer.
        ''' </summary>
        ''' <remarks></remarks>
        Public Sub Close()
            CloseDocument()
            ClosePrint()
        End Sub

    End Class

End Class 'PrintDirect

Here is how I print my file. I'm simply printing the PCL masks, to show proof of concept. But I can't even do that. I can effectively create PCL and send it to the printer without reading the file, and it works fine...
Plus, I get mixed results with different StreamReader text encodings as well.

Dim pJob As New PrintDirect.PrintJob
pJob.DocumentName = " test doc"
pJob.DocumentDataType = PrintDirect.PrintJob.PrintDataType.RAW
pJob.PrinterName = sPrinter.GetDevicePath '//This is where you'd stick your device name; sPrinter stands for Selected Printer from a dropdown (combobox).
'pJob.DocumentOutputFile = "C:\temp\test-spool.txt"

If Not pJob.OpenPrint() Then
    MsgBox("Unable to connect to " & pJob.PrinterName, MsgBoxStyle.OkOnly, "Print Error")
    Exit Sub
End If

pJob.OpenDocument()
pJob.StartPage()

Dim sr As New IO.StreamReader(Me.txtFile.Text, Text.Encoding.ASCII) '//I've got the best results with ASCII before, but only mixed.
Dim line As String = sr.ReadLine

'Fix for fly code on first run
'If line = 27 Then line = PrintDirect.PrintJob.PCL_Control_Character

Do While (Not line Is Nothing)
    pJob.WriteToPrinter(line)
    line = sr.ReadLine
Loop

'pJob.WriteToPrinter(ControlChars.FormFeed)
pJob.EndPage()
pJob.CloseDocument()
sr.Close()
sr.Dispose()
sr = Nothing
pJob.Close()

I was able to print the mask in the morning... now I'm getting strange printer characters scattered through five pages... I'm totally clueless. I must have changed something, or the printer is at fault. You can access my test check mask here: http://kscserver.com/so/chk_mask.zip .
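A few details in the code above stand out. WriteToPrinter passes data.Length - 1 as the byte count to WritePrinter, which would drop one character per call; reading the spool file through a StreamReader strips the original line endings and re-decodes bytes that were never really text; and the Long return types on the declarations are a VB6-style leftover (the native functions return 32-bit BOOL/DWORD values, so Boolean or Integer would match better). A common way to sidestep the whole class of encoding problems is to send the spool file to the spooler as raw bytes. Below is a C# sketch of that approach, following the widely used winspool.drv raw-printing pattern; the RawPrinter name and surrounding structure are illustrative, not code from the question:

using System;
using System.IO;
using System.Runtime.InteropServices;

// A minimal raw-bytes sender built on the same winspool.drv functions the
// question uses, but with byte buffers instead of marshaled strings, so no
// text-encoding conversion can mangle the PCL stream.
static class RawPrinter
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
    class DOCINFOA
    {
        [MarshalAs(UnmanagedType.LPStr)] public string pDocName;
        [MarshalAs(UnmanagedType.LPStr)] public string pOutputFile;
        [MarshalAs(UnmanagedType.LPStr)] public string pDataType;
    }

    [DllImport("winspool.drv", EntryPoint = "OpenPrinterA", SetLastError = true, CharSet = CharSet.Ansi)]
    static extern bool OpenPrinter(string printerName, out IntPtr hPrinter, IntPtr pDefault);

    [DllImport("winspool.drv", EntryPoint = "StartDocPrinterA", SetLastError = true, CharSet = CharSet.Ansi)]
    static extern bool StartDocPrinter(IntPtr hPrinter, int level, [In, MarshalAs(UnmanagedType.LPStruct)] DOCINFOA docInfo);

    [DllImport("winspool.drv", SetLastError = true)]
    static extern bool StartPagePrinter(IntPtr hPrinter);

    [DllImport("winspool.drv", SetLastError = true)]
    static extern bool WritePrinter(IntPtr hPrinter, byte[] bytes, int count, out int written);

    [DllImport("winspool.drv", SetLastError = true)]
    static extern bool EndPagePrinter(IntPtr hPrinter);

    [DllImport("winspool.drv", SetLastError = true)]
    static extern bool EndDocPrinter(IntPtr hPrinter);

    [DllImport("winspool.drv", SetLastError = true)]
    static extern bool ClosePrinter(IntPtr hPrinter);

    public static void SendFile(string printerName, string path)
    {
        // Read the PCL spool file exactly as it is on disk: no StreamReader,
        // no encoding, no lost line endings.
        byte[] bytes = File.ReadAllBytes(path);

        IntPtr hPrinter;
        if (!OpenPrinter(printerName, out hPrinter, IntPtr.Zero))
            throw new IOException("Cannot open printer: " + printerName);
        try
        {
            var di = new DOCINFOA { pDocName = "PCL spool", pDataType = "RAW" };
            StartDocPrinter(hPrinter, 1, di);
            StartPagePrinter(hPrinter);

            int written;
            // Note: the full length is passed, not length - 1.
            WritePrinter(hPrinter, bytes, bytes.Length, out written);

            EndPagePrinter(hPrinter);
            EndDocPrinter(hPrinter);
        }
        finally
        {
            ClosePrinter(hPrinter);
        }
    }
}

The same byte-oriented pattern translates directly to VB.NET by declaring WritePrinter with a Byte() parameter and using File.ReadAllBytes instead of the StreamReader loop.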


  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

Storage: Import/Export Hard Disk Drives to your Storage Accounts
HDInsight: General Availability of our Hadoop Service in the cloud
Virtual Machines: New VM Gallery, ACL support for VIPs
Web Sites: WebSocket and Remote Debugging Support
Notification Hubs: Segmented customer push notification support with tag expressions
TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
Developer Analytics: New Relic support for Web Sites + Mobile Services
Service Bus: Support for partitioned queues and topics
Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

Storage: Import/Export Hard Disk Drives to Windows Azure

I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives, we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost-effectively (and not be constrained by available network bandwidth).

Encrypted Transport

Our Import/Export service provides built-in support for BitLocker disk encryption, which enables you to securely encrypt data on the hard drives before you send them, and not have to worry about the data being compromised even if a disk is lost or stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

How to Import/Export your first Hard Drive of Data

You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Service Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab, make sure to sign up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" command at the bottom of it. This will launch a wizard that easily walks you through the steps required. For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data-migration projects. We hope you like it.

HDInsight: 100% Compatible Hadoop Service in the Cloud

Last week we announced the general availability release of Windows Azure HDInsight.
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big-data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or our PowerShell and cross-platform command-line tools. The import/export hard drive support that came out today is a perfect companion service to use with HDInsight: the combination allows you to easily ingest, process, and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

Virtual Machines: VM Gallery Enhancements

Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.

Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery, or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can quickly filter to see the official Oracle-supplied images.

MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to exclude types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing; you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.

Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.

Pricing information: We now provide additional pricing information about images, and options on how to cost-effectively run them, directly within the VM Gallery.

The above improvements make it even easier to use the VM Gallery and to quickly create, launch, and run Virtual Machines in the cloud.
Virtual Machines: ACL Support for VIPs

A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or, if you weren't on a virtual network, you could alternatively limit the traffic from public IPs that can access your workloads. Here are the default behaviors for ACLs in Windows Azure:

By default (i.e. no rules specified), all traffic is permitted.
When using only Permit rules, all other traffic is denied.
When using only Deny rules, all other traffic is permitted.
When there is a combination of Permit and Deny rules, all other traffic is denied.

Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or the REST API, be sure to also configure your guest VM firewall appropriately as well.

Web Sites: Web Sockets Support

With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and is available at no extra charge (it even works with the free tier). Higher-level programming libraries like SignalR and socket.io are also now supported with it (a small SignalR sketch follows below). You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site and toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can build with it.

Web Sites: Remote Debugging Support

The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

Remote Debugging of a Windows Azure Web Site using VS 2013

Enabling remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure, using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it. When you do this, Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
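Here is the small SignalR sketch promised in the Web Sockets section above. It is an illustration based on the standard SignalR 2 API rather than code from this announcement; the hub and callback names (ChatHub, Send, broadcastMessage) are assumptions for the example:

using Microsoft.AspNet.SignalR;

// A minimal hub: when any connected client calls Send, the message is pushed
// in real time to every connected client. On a Web Site with the Web Sockets
// toggle enabled, SignalR will use the WebSocket transport; otherwise it
// silently falls back to transports like long polling.
public class ChatHub : Hub
{
    public void Send(string name, string message)
    {
        // 'broadcastMessage' is the name of the JavaScript callback
        // registered on each client.
        Clients.All.broadcastMessage(name, message);
    }
}

Wiring it up would typically be a one-liner in the OWIN Startup class (again, assuming the standard SignalR 2 setup):

using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Maps SignalR hubs into the OWIN pipeline at "/signalr".
        app.MapSignalR();
    }
}

Now, back to the remote debugging walkthrough.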
The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio. For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio. Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users and groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans, each with a single API call.

New support for using tag expressions to enable advanced customer segmentation

With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

- Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
- Push content to multiple segments in a single message—e.g.
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
- Avoid presenting subsets of a segment with irrelevant content—e.g. a season-ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server back-end send logic using the code below:

var notification = new WindowsNotification(messagePayload);
hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

- Social: To “all my group except me” - group:id && !user:id
- Events: A touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
- Hours: Send notifications at specific times. E.g. tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
- Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud-based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, builds successfully on an automated build server, and passes all of its tests, the app is automatically deployed to Windows Azure with zero manual intervention or work required. The screenshots below demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with automated builds and testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the build + tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. This will bring up a creation dialog. I gave the web site a name and then made sure the “Publish from source control” checkbox was selected. When we click next we’ll be prompted for the location of the source repository. We’ll select “Team Foundation Services”. Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”). When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier. Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services. Now every time someone pushes source code to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and lets you build and deploy apps in a really nice way.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today’s Windows Azure release we are making it really easy to enable developer analytics and monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it. Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything). Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services – including New Relic. Select “New Relic” within the dialog, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever. Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s Configure tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu. This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and a 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We’ll simply re-publish the web site to Windows Azure and New Relic will automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal, then selecting the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. When we do this a new browser tab will launch with the New Relic admin tool loaded within it. We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region)
- Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
- Error information including call stack details for exceptions that have occurred at runtime
- SQL Server profiling information – including which queries executed against your database and what their performance was
- And a whole bunch more…

The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Partitioning enables you to achieve higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

Billing: New Billing Alert Service

Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have set up, and then navigate to the new Alerts tab that is available. The Alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button I can set up a rule to send myself email anytime my Windows Azure bill goes above $100 for the month. The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Custom rails route problem with 2.3.8 and Mongrel

    - by CHsurfer
I have a controller called 'exposures' which I created automatically with the script/generate scaffold call. The scaffold pages work fine. I created a custom action called 'test' in the exposures controller. When I try to call the page (http://127.0.0.1:3000/exposures/test/1) I get a blank, white screen with no text at all in the source. I am using Rails 2.3.8 and Mongrel in the development environment. There are no entries in development.log, and the console that was used to open Mongrel has the following error:

You might have expected an instance of Array. The error occurred while evaluating nil.split
D:/Rails/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/cgi_process.rb:52:in `dispatch_cgi'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:101:in `dispatch_cgi'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:27:in `dispatch'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/rails.rb:76:in `process'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/rails.rb:74:in `synchronize'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/rails.rb:74:in `process'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:159:in `process_client'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:158:in `each'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:158:in `process_client'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in `run'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in `initialize'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in `new'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:285:in `run'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:268:in `initialize'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:268:in `new'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel.rb:268:in `run'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/configurator.rb:282:in `run'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/configurator.rb:281:in `each'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/configurator.rb:281:in `run'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/mongrel_rails:128:in `run'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/../lib/mongrel/command.rb:212:in `run'
D:/Rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/bin/mongrel_rails:281
D:/Rails/ruby/bin/mongrel_rails:19:in `load'
D:/Rails/ruby/bin/mongrel_rails:19

Here is the exposures_controller code:

class ExposuresController < ApplicationController
  # GET /exposures
  # GET /exposures.xml
  def index
    @exposures = Exposure.all
    respond_to do |format|
      format.html # index.html.erb
      format.xml { render :xml => @exposures }
    end
  end

  # /exposure/graph/1
  def graph
    @exposure = Exposure.find(params[:id])
    project_name = @exposure.tender.project.name
    group_name = @exposure.tender.user.group.name
    tender_desc = @exposure.tender.description
    direction = "Cash Out"
    direction = "Cash In" if @exposure.supply
    currency_1_and_2 = "#{@exposure.currency_in} = #{@exposure.currency_out}"
    title = "#{project_name}:#{group_name}:#{tender_desc}/n"
    title += "#{direction}:#{currency_1_and_2}"
    factors = Array.new
    carrieds = Array.new
    days = Array.new
    @exposure.rates.each do |r|
      factors << r.factor
      carrieds << r.carried
      days << r.day.to_s
    end
    max = (factors+carrieds).max
    min = (factors+carrieds).min
    g = Graph.new
    g.title(title, '{font-size: 12px;}')
    g.set_data(factors)
    g.line_hollow(2, 4, '0x80a033', 'Bounces', 10)
    g.set_x_labels(days)
    g.set_x_label_style( 10, '#CC3399', 2 );
    g.set_y_min(min*0.9)
    g.set_y_max(max*1.1)
    g.set_y_label_steps(5)
    render :text => g.render
  end

  def test
    render :text => "this works"
  end

  # GET /exposures/1
  # GET /exposures/1.xml
  def show
    @exposure = Exposure.find(params[:id])
    @graph = open_flash_chart_object(700,250, "/exposures/graph/#{@exposure.id}")
    #@graph = "/exposures/graph/#{@exposure.id}"
    respond_to do |format|
      format.html # show.html.erb
      format.xml { render :xml => @exposure }
    end
  end

  # GET /exposures/new
  # GET /exposures/new.xml
  def new
    @exposure = Exposure.new
    respond_to do |format|
      format.html # new.html.erb
      format.xml { render :xml => @exposure }
    end
  end

  # GET /exposures/1/edit
  def edit
    @exposure = Exposure.find(params[:id])
  end

  # POST /exposures
  # POST /exposures.xml
  def create
    @exposure = Exposure.new(params[:exposure])
    respond_to do |format|
      if @exposure.save
        flash[:notice] = 'Exposure was successfully created.'
        format.html { redirect_to(@exposure) }
        format.xml { render :xml => @exposure, :status => :created, :location => @exposure }
      else
        format.html { render :action => "new" }
        format.xml { render :xml => @exposure.errors, :status => :unprocessable_entity }
      end
    end
  end

  # PUT /exposures/1
  # PUT /exposures/1.xml
  def update
    @exposure = Exposure.find(params[:id])
    respond_to do |format|
      if @exposure.update_attributes(params[:exposure])
        flash[:notice] = 'Exposure was successfully updated.'
        format.html { redirect_to(@exposure) }
        format.xml { head :ok }
      else
        format.html { render :action => "edit" }
        format.xml { render :xml => @exposure.errors, :status => :unprocessable_entity }
      end
    end
  end

  # DELETE /exposures/1
  # DELETE /exposures/1.xml
  def destroy
    @exposure = Exposure.find(params[:id])
    @exposure.destroy
    respond_to do |format|
      format.html { redirect_to(exposures_url) }
      format.xml { head :ok }
    end
  end
end

Clever readers will notice the 'graph' action. This is what I really want to work, but if I can't even get the test action working, then I'm sure I have no chance. Any ideas? I have restarted Mongrel a few times with no change. Here is the output of rake routes (but I don't believe this is the problem; the error would be in the form of an HTML error response).

D:\Rails\rails_apps\fx>rake routes
(in D:/Rails/rails_apps/fx)
DEPRECATION WARNING: Rake tasks in vendor/plugins/open_flash_chart/tasks are deprecated. Use lib/tasks instead.
(called from D:/Rails/ruby/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/tasks/rails.rb:10)
rates GET /rates(.:format) {:controller=>"rates", :action=>"index"}
POST /rates(.:format) {:controller=>"rates", :action=>"create"}
new_rate GET /rates/new(.:format) {:controller=>"rates", :action=>"new"}
edit_rate GET /rates/:id/edit(.:format) {:controller=>"rates", :action=>"edit"}
rate GET /rates/:id(.:format) {:controller=>"rates", :action=>"show"}
PUT /rates/:id(.:format) {:controller=>"rates", :action=>"update"}
DELETE /rates/:id(.:format) {:controller=>"rates", :action=>"destroy"}
tenders GET /tenders(.:format) {:controller=>"tenders", :action=>"index"}
POST /tenders(.:format) {:controller=>"tenders", :action=>"create"}
new_tender GET /tenders/new(.:format) {:controller=>"tenders", :action=>"new"}
edit_tender GET /tenders/:id/edit(.:format) {:controller=>"tenders", :action=>"edit"}
tender GET /tenders/:id(.:format) {:controller=>"tenders", :action=>"show"}
PUT /tenders/:id(.:format) {:controller=>"tenders", :action=>"update"}
DELETE /tenders/:id(.:format) {:controller=>"tenders", :action=>"destroy"}
exposures GET /exposures(.:format) {:controller=>"exposures", :action=>"index"}
POST /exposures(.:format) {:controller=>"exposures", :action=>"create"}
new_exposure GET /exposures/new(.:format) {:controller=>"exposures", :action=>"new"}
edit_exposure GET /exposures/:id/edit(.:format) {:controller=>"exposures", :action=>"edit"}
exposure GET /exposures/:id(.:format) {:controller=>"exposures", :action=>"show"}
PUT /exposures/:id(.:format) {:controller=>"exposures", :action=>"update"}
DELETE /exposures/:id(.:format) {:controller=>"exposures", :action=>"destroy"}
currencies GET /currencies(.:format) {:controller=>"currencies", :action=>"index"}
POST /currencies(.:format) {:controller=>"currencies", :action=>"create"}
new_currency GET /currencies/new(.:format) {:controller=>"currencies", :action=>"new"}
edit_currency GET /currencies/:id/edit(.:format) {:controller=>"currencies", :action=>"edit"}
currency GET /currencies/:id(.:format) {:controller=>"currencies", :action=>"show"}
PUT /currencies/:id(.:format) {:controller=>"currencies", :action=>"update"}
DELETE /currencies/:id(.:format) {:controller=>"currencies", :action=>"destroy"}
projects GET /projects(.:format) {:controller=>"projects", :action=>"index"}
POST /projects(.:format) {:controller=>"projects", :action=>"create"}
new_project GET /projects/new(.:format) {:controller=>"projects", :action=>"new"}
edit_project GET /projects/:id/edit(.:format) {:controller=>"projects", :action=>"edit"}
project GET /projects/:id(.:format) {:controller=>"projects", :action=>"show"}
PUT /projects/:id(.:format) {:controller=>"projects", :action=>"update"}
DELETE /projects/:id(.:format) {:controller=>"projects", :action=>"destroy"}
groups GET /groups(.:format) {:controller=>"groups", :action=>"index"}
POST /groups(.:format) {:controller=>"groups", :action=>"create"}
new_group GET /groups/new(.:format) {:controller=>"groups", :action=>"new"}
edit_group GET /groups/:id/edit(.:format) {:controller=>"groups", :action=>"edit"}
group GET /groups/:id(.:format) {:controller=>"groups", :action=>"show"}
PUT /groups/:id(.:format) {:controller=>"groups", :action=>"update"}
DELETE /groups/:id(.:format) {:controller=>"groups", :action=>"destroy"}
users GET /users(.:format) {:controller=>"users", :action=>"index"}
POST /users(.:format) {:controller=>"users", :action=>"create"}
new_user GET /users/new(.:format) {:controller=>"users", :action=>"new"}
edit_user GET /users/:id/edit(.:format) {:controller=>"users", :action=>"edit"}
user GET /users/:id(.:format) {:controller=>"users", :action=>"show"}
PUT /users/:id(.:format) {:controller=>"users", :action=>"update"}
DELETE /users/:id(.:format) {:controller=>"users", :action=>"destroy"}
/:controller/:action/:id
/:controller/:action/:id(.:format)

D:\Rails\rails_apps\fx>rails -v
Rails 2.3.8

Thanks in advance for the help -Jon


  • Hosted bug tracking system with mercurial repositories (Summary of options & request for opinions)

    - by Mark Booth
The Question

What hosted mercurial repository/bug tracking system or systems have you used? Would you recommend it to others? Are there serious flaws, either in the repository hosting or the bug tracking features, that would make it difficult to recommend it? Do you have any other experiences with it or opinions of it that you would like to share? If you have used other non-mercurial hosted repository/bug tracking systems, how does it compare? (If I understand correctly, the best format for this type of community-wiki style question is one answer per option, if you have experienced several.)

Background

I have been looking into options for setting up a bug/issue tracking database and found some valuable advice in this thread and this. But then I got to thinking that a hosted solution might not only solve the problem of tracking bugs, but might also solve the problem we have accessing our mercurial source code repositories while at customer sites around the world. Since we currently have no way to serve mercurial repositories over ssl, when I am at a customer site I have to connect my laptop via VPN to my work network and access the mercurial repositories over a samba share (even if it is just to sync twice a day). This is excruciatingly slow on high-latency networks and can be impossible with some customers' firewalls. Even if we could run a TRAC or Redmine server here (thanks turnkey), I'm not sure it would be much quicker as our internet connection is over-stretched as it is. What I would like is for developers to be able to push/pull to/from a remote repository, servicing engineers to be able to pull from a remote repository, and for customers (both internal and external) to be able to submit bug/issue reports.

Initial options

The two options I found were Assembla and Jira. Looking at Assembla I thought the 'group' price looked reasonable, but after enquiring, found that each workspace could only contain a single repository. Since each of our products might have up to a dozen repositories (mostly for libraries) which need to be managed separately for each product, I could see it getting expensive really quickly. On the plus side, it appears that 'users' are just workspace members, so you can have as many client users (people who can only submit support tickets and track their own tickets) as you like without using up your user allocation. Jira only charges based on the number of users; unfortunately client users also count towards this, if you want them to be able to track their tickets. If you only want clients to be able to submit untracked issues, you can let them submit anonymously, but that doesn't feel very professional to me.

More options

Looking through the MercurialHosting page that @Paidhi suggested, I've added the options which appear to offer private repositories, along with another that I found with a web search. Prices are as per their website today (29th March 2010). Corrections welcome in the future. Anyway, here is my summary, according to the information given on their websites:

Assembla, http://www.assembla.com/, looks to be a reasonable price, but suffers from allowing only one repository per workspace, so three projects with 6 repos each would use up most of the spaces associated with a $99/month professional account (20 spaces). Bug tracking is based on Trac. Mercurial+Trac support was announced in a blog entry in 2007, but they only list SVN and Git on their Features web page. Cost: $24, $49, $99 & $249/month for 40, 40, unlimited, unlimited users and 1, 10, 20, 100 workspaces.
SSL based push/pull? Website https login.

BitBucket, http://bitbucket.org/plans/, is primarily a mercurial hosting site for open source projects, with SSL support, but they have an integrated bug tracker and they are cheap for private repositories. It has its own issue tracker, but also integrates with Lighthouse & FogBugz. Cost: $0, $5, $12, $50 & $100/month for 1, 5, 15, 25 & 150 private repositories. SSL based push/pull. No https on website login, but supports OpenID, so you can choose an OpenID provider with https login.

Codebase HQ, http://www.codebasehq.com/, supports Hg and is almost as cheap as BitBucket. Cost: £5, £13, £21 & £40/month for 3, 15, 30 & 60 active projects, unlimited repositories, unlimited users (except 10 users at £5/month) and 0.5, 2, 4 & 10GB. SSL based push/pull? Website https login?

Firefly, http://www.activestate.com/firefly/, by ActiveState looks interesting, but the website is a little light on details, such as whether you can only have one repository per project or not. Cost: $9, $19 & $39/month for 1, 5 & 30 private projects, with a 0.5, 1.5 & 3 GB storage limit. SSL based push/pull? Website https login.

Jira, http://www.atlassian.com/software/jira/, isn’t limited by the number of repositories you can have, but by ‘user’. It could work out quite expensive if we want client users to be able to track their issues, since they would need a full user account to be created for them. Also, while there is a Mercurial extension to support Jira, there is no ‘Advanced integration’ for Mercurial from Atlassian Fisheye. Cost: $150, $300, $400, $500, $700/month for 10, 25, 50, 100, 100+ users. SSL based push/pull? Website https login.

Kiln & FogBugz On Demand, http://fogcreek.com/Kiln/IntrotoOnDemand.html, integrates Kiln’s mercurial DVCS features with FogBugz, where the combined package is much cheaper than the component parts. Also, the FogBugz integration is supposedly excellent. *8’) Cost: £30/developer/month ($5/d/m more than either on their own). SSL based push/pull?

SourceRepo, http://sourcerepo.com/, also supports Hg and is even cheaper than BitBucket & Codebase. Cost: $4, $7 & $13/month for 1, unlimited & unlimited repositories/trac/redmine instances and 500MB, 1GB & 3GB storage. SSL based push/pull. Website https login.

Edit: 29th March 2010 & Bounty

I split this question into sections, made the questions themselves more explicit, added other options from the research I have done since my first posting and made this community wiki, since I now understand what CW is for. *8') Also, I've added a bounty to encourage people to offer their opinions. At the end of the bounty period, I will award the bounty to whoever writes the best review (good or bad), irrespective of the number of up/down votes it gets. Given that it's probably more important to avoid bad providers than find the absolute best one, 'bad reviews' could be considered more important than good ones.


what does calling 'this' outside of a jQuery plugin refer to

    - by Richard
Hi, I am using the liveTwitter plugin. The problem is that I need to stop the plugin from hitting the Twitter API. According to the documentation I need to do this: $("#tab1 .container_twitter_status").each(function(){ this.twitter.stop(); }); Already, the each does not make sense on an id, and what does this refer to? Anyway, I get an undefined error. I will paste the plugin code and hope it makes sense to somebody. My only problem thus far with this plugin is that I need to be able to stop it. Thanks in advance, Richard /* * jQuery LiveTwitter 1.5.0 * - Live updating Twitter plugin for jQuery * * Copyright (c) 2009-2010 Inge Jørgensen (elektronaut.no) * Licensed under the MIT license (MIT-LICENSE.txt) * * $Date: 2010/05/30$ */ /* * Usage example: * $("#twitterSearch").liveTwitter('bacon', {limit: 10, rate: 15000}); */ (function($){ if(!$.fn.reverse){ $.fn.reverse = function() { return this.pushStack(this.get().reverse(), arguments); }; } $.fn.liveTwitter = function(query, options, callback){ var domNode = this; $(this).each(function(){ var settings = {}; // Handle changing of options if(this.twitter) { settings = jQuery.extend(this.twitter.settings, options); this.twitter.settings = settings; if(query) { this.twitter.query = query; } this.twitter.limit = settings.limit; this.twitter.mode = settings.mode; if(this.twitter.interval){ this.twitter.refresh(); } if(callback){ this.twitter.callback = callback; } // ..or create a new twitter object } else { // Extend settings with the defaults settings = jQuery.extend({ mode: 'search', // Mode, valid options are: 'search', 'user_timeline' rate: 15000, // Refresh rate in ms limit: 10, // Limit number of results refresh: true }, options); // Default setting for showAuthor if not provided if(typeof settings.showAuthor == "undefined"){ settings.showAuthor = (settings.mode == 'user_timeline') ? 
false : true; } // Set up a dummy function for the Twitter API callback if(!window.twitter_callback){ window.twitter_callback = function(){return true;}; } this.twitter = { settings: settings, query: query, limit: settings.limit, mode: settings.mode, interval: false, container: this, lastTimeStamp: 0, callback: callback, // Convert the time stamp to a more human readable format relativeTime: function(timeString){ var parsedDate = Date.parse(timeString); var delta = (Date.parse(Date()) - parsedDate) / 1000; var r = ''; if (delta < 60) { r = delta + ' seconds ago'; } else if(delta < 120) { r = 'a minute ago'; } else if(delta < (45*60)) { r = (parseInt(delta / 60, 10)).toString() + ' minutes ago'; } else if(delta < (90*60)) { r = 'an hour ago'; } else if(delta < (24*60*60)) { r = '' + (parseInt(delta / 3600, 10)).toString() + ' hours ago'; } else if(delta < (48*60*60)) { r = 'a day ago'; } else { r = (parseInt(delta / 86400, 10)).toString() + ' days ago'; } return r; }, // Update the timestamps in realtime refreshTime: function() { var twitter = this; $(twitter.container).find('span.time').each(function(){ $(this).html(twitter.relativeTime(this.timeStamp)); }); }, // Handle reloading refresh: function(initialize){ var twitter = this; if(this.settings.refresh || initialize) { var url = ''; var params = {}; if(twitter.mode == 'search'){ params.q = this.query; if(this.settings.geocode){ params.geocode = this.settings.geocode; } if(this.settings.lang){ params.lang = this.settings.lang; } if(this.settings.rpp){ params.rpp = this.settings.rpp; } else { params.rpp = this.settings.limit; } // Convert params to string var paramsString = []; for(var param in params){ if(params.hasOwnProperty(param)){ paramsString[paramsString.length] = param + '=' + encodeURIComponent(params[param]); } } paramsString = paramsString.join("&"); url = "http://search.twitter.com/search.json?"+paramsString+"&callback=?"; } else if(twitter.mode == 'user_timeline') { url = "http://api.twitter.com/1/statuses/user_timeline/"+encodeURIComponent(this.query)+".json?count="+twitter.limit+"&callback=?"; } else if(twitter.mode == 'list') { var username = encodeURIComponent(this.query.user); var listname = encodeURIComponent(this.query.list); url = "http://api.twitter.com/1/"+username+"/lists/"+listname+"/statuses.json?per_page="+twitter.limit+"&callback=?"; } $.getJSON(url, function(json) { var results = null; if(twitter.mode == 'search'){ results = json.results; } else { results = json; } var newTweets = 0; $(results).reverse().each(function(){ var screen_name = ''; var profile_image_url = ''; if(twitter.mode == 'search') { screen_name = this.from_user; profile_image_url = this.profile_image_url; created_at_date = this.created_at; } else { screen_name = this.user.screen_name; profile_image_url = this.user.profile_image_url; // Fix for IE created_at_date = this.created_at.replace(/^(\w+)\s(\w+)\s(\d+)(.*)(\s\d+)$/, "$1, $3 $2$5$4"); } var userInfo = this.user; var linkified_text = this.text.replace(/[A-Za-z]+:\/\/[A-Za-z0-9-_]+\.[A-Za-z0-9-_:%&\?\/.=]+/, function(m) { return m.link(m); }); linkified_text = linkified_text.replace(/@[A-Za-z0-9_]+/g, function(u){return u.link('http://twitter.com/'+u.replace(/^@/,''));}); linkified_text = linkified_text.replace(/#[A-Za-z0-9_\-]+/g, function(u){return u.link('http://search.twitter.com/search?q='+u.replace(/^#/,'%23'));}); if(!twitter.settings.filter || twitter.settings.filter(this)) { if(Date.parse(created_at_date) > twitter.lastTimeStamp) { newTweets += 1; var tweetHTML = '<div 
class="tweet tweet-'+this.id+'">'; if(twitter.settings.showAuthor) { tweetHTML += '<img width="24" height="24" src="'+profile_image_url+'" />' + '<p class="text"><span class="username"><a href="http://twitter.com/'+screen_name+'">'+screen_name+'</a>:</span> '; } else { tweetHTML += '<p class="text"> '; } tweetHTML += linkified_text + ' <span class="time">'+twitter.relativeTime(created_at_date)+'</span>' + '</p>' + '</div>'; $(twitter.container).prepend(tweetHTML); var timeStamp = created_at_date; $(twitter.container).find('span.time:first').each(function(){ this.timeStamp = timeStamp; }); if(!initialize) { $(twitter.container).find('.tweet-'+this.id).hide().fadeIn(); } twitter.lastTimeStamp = Date.parse(created_at_date); } } }); if(newTweets > 0) { // Limit number of entries $(twitter.container).find('div.tweet:gt('+(twitter.limit-1)+')').remove(); // Run callback if(twitter.callback){ twitter.callback(domNode, newTweets); } // Trigger event $(domNode).trigger('tweets'); } }); } }, start: function(){ var twitter = this; if(!this.interval){ this.interval = setInterval(function(){twitter.refresh();}, twitter.settings.rate); this.refresh(true); } }, stop: function(){ if(this.interval){ clearInterval(this.interval); this.interval = false; } } }; var twitter = this.twitter; this.timeInterval = setInterval(function(){twitter.refreshTime();}, 5000); this.twitter.start(); } }); return this; }; })(jQuery);


  • selectOneMenu - java.lang.NullPointerException when adding record to the database (JSF2 and JPA2-OpenJPA)

    - by rogie
Good day to all; I'm developing a program using JSF2 and JPA2 (OpenJPA). I'm also using IBM Rapid App Dev't v8 with a WebSphere App Server v8 test server. I have two simple entities, Employee and Department. Each Department has many Employees and each Employee belongs to a Department (using deptno and workdept). My problem occurs when I try to add a new employee and select a department from a combo box (a selectOneMenu populated from the Department table): when I run the program, the following error messages appear:

An Error Occurred: java.lang.NullPointerException Caused by: java.lang.NullPointerException - java.lang.NullPointerException

I also tried making another program using Deptno and Workdept as String instead of integer; it still doesn't work. Please help; I'm also a newbie. Thanks and God bless. Below are my code, configuration and setup. Just tell me if there is some code that I forgot to include. I'm also using Derby v10.5 as my database: CREATE SCHEMA RTS; CREATE TABLE RTS.DEPARTMENT (DEPTNO INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1), DEPTNAME VARCHAR(30)); ALTER TABLE RTS.DEPARTMENT ADD CONSTRAINT PK_DEPARTMNET PRIMARY KEY (DEPTNO); CREATE TABLE RTS.EMPLOYEE (EMPNO INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1), NAME VARCHAR(50), WORKDEPT INTEGER); ALTER TABLE RTS.EMPLOYEE ADD CONSTRAINT PK_EMPLOYEE PRIMARY KEY (EMPNO); Employee and Department Entities package rts.entities; import java.io.Serializable; import javax.persistence.*; @Entity public class Employee implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy=GenerationType.IDENTITY) private int empno; private String name; //bi-directional many-to-one association to Department @ManyToOne @JoinColumn(name="WORKDEPT") private Department department; ....... getter and setter methods package rts.entities; import java.io.Serializable; import javax.persistence.*; import java.util.List; @Entity public class Department implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy=GenerationType.IDENTITY) private int deptno; private String deptname; //bi-directional many-to-one association to Employee @OneToMany(mappedBy="department") private List<Employee> employees; ....... getter and setter methods JSF 2 snippet using combo box (populated from Department table) <tr> <td align="left">Department</td> <td style="width: 5px">&#160;</td> <td><h:selectOneMenu styleClass="selectOneMenu" id="department1" value="#{pc_EmployeeAdd.employee.department}"> <f:selectItems value="#{DepartmentManager.departmentSelectList}" id="selectItems1"></f:selectItems> </h:selectOneMenu></td> </tr> package rts.entities.controller; import com.ibm.jpa.web.JPAManager; import javax.persistence.EntityManager; import javax.persistence.EntityManagerFactory; import com.ibm.jpa.web.NamedQueryTarget; import com.ibm.jpa.web.Action; import javax.persistence.PersistenceUnit; import javax.annotation.Resource; import javax.transaction.UserTransaction; import rts.entities.Department; import java.util.List; import javax.persistence.Query; import java.util.ArrayList; import java.text.MessageFormat; import javax.faces.model.SelectItem; @SuppressWarnings("unchecked") @JPAManager(targetEntity = rts.entities.Department.class) public class DepartmentManager { ....... 
public List<SelectItem> getDepartmentSelectList() { List<Department> departmentList = getDepartment(); List<SelectItem> selectList = new ArrayList<SelectItem>(); MessageFormat mf = new MessageFormat("{0}"); for (Department department : departmentList) { selectList.add(new SelectItem(department, mf.format( new Object[] { department.getDeptname() }, new StringBuffer(), null).toString())); } return selectList; } Converter: package rts.entities.converter; import javax.faces.component.UIComponent; import javax.faces.context.FacesContext; import javax.faces.convert.Converter; import rts.entities.Department; import rts.entities.controller.DepartmentManager; import com.ibm.jpa.web.TypeCoercionUtility; public class DepartmentConverter implements Converter { public Object getAsObject(FacesContext facesContext, UIComponent arg1, String entityId) { DepartmentManager departmentManager = (DepartmentManager) facesContext .getApplication().createValueBinding("#{DepartmentManager}") .getValue(facesContext); int deptno = (Integer) TypeCoercionUtility.coerceType("int", entityId); Department result = departmentManager.findDepartmentByDeptno(deptno); return result; } public String getAsString(FacesContext arg0, UIComponent arg1, Object object) { if (object instanceof Department) { return "" + ((Department) object).getDeptno(); } else { throw new IllegalArgumentException("Invalid object type:" + object.getClass().getName()); } } } Method for Add button: public String createEmployeeAction() { EmployeeManager employeeManager = (EmployeeManager) getManagedBean("EmployeeManager"); try { employeeManager.createEmployee(employee); } catch (Exception e) { logException(e); } return ""; } faces-conf.xml <converter> <converter-for-class>rts.entities.Department</converter-for-class> <converter-class>rts.entities.converter.DepartmentConverter</converter-class> </converter> Stack trace javax.faces.FacesException: java.lang.NullPointerException at org.apache.myfaces.shared_impl.context.ExceptionHandlerImpl.wrap(ExceptionHandlerImpl.java:241) at org.apache.myfaces.shared_impl.context.ExceptionHandlerImpl.handle(ExceptionHandlerImpl.java:156) at org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:258) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:191) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1147) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:722) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:449) at com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:178) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1020) at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:87) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:886) at com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1655) at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:195) at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:452) at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewRequest(HttpInboundLink.java:511) at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.processRequest(HttpInboundLink.java:305) at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:276) at 
com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:214) at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:113) at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165) at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217) at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161) at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138) at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204) at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775) at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905) at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1650) Caused by: java.lang.NullPointerException at rts.entities.converter.DepartmentConverter.getAsString(DepartmentConverter.java:29) at org.apache.myfaces.shared_impl.renderkit.RendererUtils.getConvertedStringValue(RendererUtils.java:656) at org.apache.myfaces.shared_impl.renderkit.html.HtmlRendererUtils.getSubmittedOrSelectedValuesAsSet(HtmlRendererUtils.java:444) at org.apache.myfaces.shared_impl.renderkit.html.HtmlRendererUtils.internalRenderSelect(HtmlRendererUtils.java:421) at org.apache.myfaces.shared_impl.renderkit.html.HtmlRendererUtils.renderMenu(HtmlRendererUtils.java:359) at org.apache.myfaces.shared_impl.renderkit.html.HtmlMenuRendererBase.encodeEnd(HtmlMenuRendererBase.java:76) at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:519) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:626) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:622) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:622) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:622) at org.apache.myfaces.view.facelets.FaceletViewDeclarationLanguage.renderView(FaceletViewDeclarationLanguage.java:1320) at org.apache.myfaces.application.ViewHandlerImpl.renderView(ViewHandlerImpl.java:263) at org.apache.myfaces.lifecycle.RenderResponseExecutor.execute(RenderResponseExecutor.java:85) at org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:239) ... 24 more
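For what it's worth, the trace shows the NullPointerException being thrown inside DepartmentConverter.getAsString while MyFaces renders the menu, which suggests the converter is being handed a value it does not expect (most likely a null department, since #{pc_EmployeeAdd.employee.department} starts out null for a new employee). Below is a minimal null-safe sketch of the converter; the null checks are an assumed fix, not code from the original post, and the lookup just mirrors the DepartmentManager call already shown above (registration stays the same, in the standard faces-config.xml). Note too that Department should override equals and hashCode (for example by comparing deptno) so the converted submitted value matches one of the SelectItem values; otherwise JSF typically reports "Validation Error: Value is not valid".

package rts.entities.converter;

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.convert.Converter;

import rts.entities.Department;
import rts.entities.controller.DepartmentManager;

public class DepartmentConverter implements Converter {

    public String getAsString(FacesContext facesContext, UIComponent component, Object object) {
        // The renderer can pass null here when the bound value has not been set yet;
        // returning "" renders an empty option instead of dereferencing null.
        if (object == null) {
            return "";
        }
        if (object instanceof Department) {
            return String.valueOf(((Department) object).getDeptno());
        }
        throw new IllegalArgumentException("Invalid object type: " + object.getClass().getName());
    }

    public Object getAsObject(FacesContext facesContext, UIComponent component, String entityId) {
        if (entityId == null || entityId.trim().length() == 0) {
            return null; // nothing selected, nothing to look up
        }
        // Same DepartmentManager lookup as in the original converter.
        DepartmentManager manager = (DepartmentManager) facesContext.getApplication()
                .createValueBinding("#{DepartmentManager}").getValue(facesContext);
        return manager.findDepartmentByDeptno(Integer.parseInt(entityId.trim()));
    }
}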


  • Questions related to writing your own file downloader using multiple threads in Java

    - by Shekhar
Hello. In my current company, I am doing a PoC on how we can write a file downloader utility. We have to use socket programming (TCP/IP) for downloading the files. One of the requirements of the client is that a file (which will be large in size) should be transferred in chunks; for example, if we have a file of 5 MB, then we can have 5 threads which transfer 1 MB each. I have written a small application which downloads a file. You can download the Eclipse project from http://www.fileflyer.com/view/QM1JSC0 A brief explanation of my classes: FileSender.java: this class provides the bytes of the file. It has a method called sendBytesOfFile(long start, long end, long sequenceNo) which returns the requested bytes. import java.io.File; import java.io.IOException; import java.util.zip.CRC32; import org.apache.commons.io.FileUtils; public class FileSender { private static final String FILE_NAME = "C:\\shared\\test.pdf"; public ByteArrayWrapper sendBytesOfFile(long start,long end, long sequenceNo){ try { File file = new File(FILE_NAME); byte[] fileBytes = FileUtils.readFileToByteArray(file); System.out.println("Size of file is " +fileBytes.length); System.out.println(); System.out.println("Start "+start +" end "+end); byte[] bytes = getByteArray(fileBytes, start, end); ByteArrayWrapper wrapper = new ByteArrayWrapper(bytes, sequenceNo); return wrapper; } catch (IOException e) { throw new RuntimeException(e); } } private byte[] getByteArray(byte[] bytes, long start, long end){ long arrayLength = end-start; System.out.println("Start : "+start +" end : "+end + " Arraylength : "+arrayLength +" length of source array : "+bytes.length); byte[] arr = new byte[(int)arrayLength]; for(int i = (int)start, j =0; i < end;i++,j++){ arr[j] = bytes[i]; } return arr; } public static long fileSize(){ File file = new File(FILE_NAME); return file.length(); } } The second class is FileReceiver.java; it receives the file. A brief explanation of what this class does:
- It finds the size of the file to be fetched from the Sender.
- Depending upon the size of the file, it finds the start and end positions from which the bytes need to be read.
- It starts n threads, giving each thread a start, an end, a sequence number and a list which all the threads share.
- Each thread reads its range of bytes and creates a ByteArrayWrapper.
- ByteArrayWrapper objects are added to the list.
- Then a while loop makes sure that all threads have completed their work before processing continues.
- Finally it sorts the list based on the sequence number, the bytes are joined, and a complete byte array is formed which is converted to a file.
Code of File Receiver package com.filedownloader; import java.io.File; import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.List; import java.util.zip.CRC32; import org.apache.commons.io.FileUtils; public class FileReceiver { public static void main(String[] args) { FileReceiver receiver = new FileReceiver(); receiver.receiveFile(); } public void receiveFile(){ long startTime = System.currentTimeMillis(); long numberOfThreads = 10; long filesize = FileSender.fileSize(); System.out.println("File size received "+filesize); long start = filesize/numberOfThreads; List<ByteArrayWrapper> list = new ArrayList<ByteArrayWrapper>(); for(long threadCount =0; threadCount<numberOfThreads ;threadCount++){ FileDownloaderTask task = new FileDownloaderTask(threadCount*start,(threadCount+1)*start,threadCount,list); new Thread(task).start(); } while(list.size() != numberOfThreads){ // this is done so that all the threads should complete their work before processing further. //System.out.println("Waiting for threads to complete. List size "+list.size()); } if(list.size() == numberOfThreads){ System.out.println("All bytes received "+list); Collections.sort(list, new Comparator<ByteArrayWrapper>() { @Override public int compare(ByteArrayWrapper o1, ByteArrayWrapper o2) { long sequence1 = o1.getSequence(); long sequence2 = o2.getSequence(); if(sequence1 < sequence2){ return -1; }else if(sequence1 > sequence2){ return 1; } else{ return 0; } } }); byte[] totalBytes = list.get(0).getBytes(); byte[] firstArr = null; byte[] secondArr = null; for(int i = 1;i<list.size();i++){ firstArr = totalBytes; secondArr = list.get(i).getBytes(); totalBytes = concat(firstArr, secondArr); } System.out.println(totalBytes.length); convertToFile(totalBytes,"c:\\tmp\\test.pdf"); long endTime = System.currentTimeMillis(); System.out.println("Total time taken with "+numberOfThreads +" threads is "+(endTime-startTime)+" ms" ); } } private byte[] concat(byte[] A, byte[] B) { byte[] C= new byte[A.length+B.length]; System.arraycopy(A, 0, C, 0, A.length); System.arraycopy(B, 0, C, A.length, B.length); return C; } private void convertToFile(byte[] totalBytes,String name) { try { FileUtils.writeByteArrayToFile(new File(name), totalBytes); } catch (IOException e) { throw new RuntimeException(e); } } } Code of ByteArrayWrapper package com.filedownloader; import java.io.Serializable; public class ByteArrayWrapper implements Serializable{ private static final long serialVersionUID = 3499562855188457886L; private byte[] bytes; private long sequence; public ByteArrayWrapper(byte[] bytes, long sequenceNo) { this.bytes = bytes; this.sequence = sequenceNo; } public byte[] getBytes() { return bytes; } public long getSequence() { return sequence; } } Code of FileDownloaderTask import java.util.List; public class FileDownloaderTask implements Runnable { private List<ByteArrayWrapper> list; private long start; private long end; private long sequenceNo; public FileDownloaderTask(long start,long end,long sequenceNo,List<ByteArrayWrapper> list) { this.list = list; this.start = start; this.end = end; this.sequenceNo = sequenceNo; } @Override public void run() { ByteArrayWrapper wrapper = new FileSender().sendBytesOfFile(start, end, sequenceNo); list.add(wrapper); } } Questions related to this code 1) Does file downloading becomes fast when multiple threads is used? In this code i am not able to see the benefit. 2) How should i decide how many threads should i create ? 
3) Are there any open-source libraries which do this?
4) The file which the receiver receives is valid and not corrupted, but the checksum (I used FileUtils from commons-io) does not match. What is the problem?
5) This code gives an out-of-memory error when used with a large file (above 100 MB), because of the byte array which is created. How can I avoid that?
I know this is very bad code, but I had to write it in one day :-). Please suggest any other good way to do this.
Thanks, Shekhar
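[Editor's note] For questions 2 and 5, a minimal sketch of one common approach, not the original author's code: each worker writes its chunk straight to its own offset in the target file (so whole-file byte arrays never sit in memory), and a CountDownLatch replaces the busy-wait loop. fetchRange is a hypothetical placeholder for the real socket transfer.

    // Hypothetical sketch: stream each chunk to its file offset instead of
    // concatenating byte arrays, and block on a latch rather than spinning.
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.concurrent.CountDownLatch;

    public class ChunkedDownloader {

        public static void download(final String target, final long fileSize, int threads)
                throws InterruptedException {
            final long chunk = fileSize / threads;
            final CountDownLatch done = new CountDownLatch(threads);
            for (int t = 0; t < threads; t++) {
                final long start = t * chunk;
                // the last thread also takes the remainder when fileSize % threads != 0
                final long end = (t == threads - 1) ? fileSize : start + chunk;
                new Thread(new Runnable() {
                    public void run() {
                        try (RandomAccessFile out = new RandomAccessFile(target, "rw")) {
                            out.seek(start);                       // position at this chunk's offset
                            out.write(fetchRange(start, end));     // your socket read goes here
                        } catch (IOException e) {
                            throw new RuntimeException(e);
                        } finally {
                            done.countDown();                      // signal this chunk is finished
                        }
                    }
                }).start();
            }
            done.await(); // no busy-waiting: blocks until every chunk is written
        }

        // placeholder for the real TCP transfer of bytes [start, end)
        private static byte[] fetchRange(long start, long end) {
            return new byte[(int) (end - start)];
        }
    }

On question 2: a reasonable thread count is usually close to the number of parallel connections the server and network path can actually sustain; beyond that, extra threads only add contention.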

    Read the article

  • What's wrong with the following code

    - by giri
Hi, I am trying to send an SMS to my mobile using Java. When I run the application I get the following error (shown after the code).

    package HelloWorld;

    import java.io.*;
    import java.util.BitSet;
    import javax.comm.*;
    import java.lang.*;

    public class SerialToGsm {

        InputStream in;
        OutputStream out;
        String lastIndexRead;
        String senderNum;
        String smsMsg;

        SerialToGsm(String porta) {
            try {
                // CommPortIdentifier portId = CommPortIdentifier.getPortIdentifier("serial0");
                CommPortIdentifier portId = CommPortIdentifier.getPortIdentifier(porta);
                SerialPort sp = (SerialPort) portId.open("Sms_GSM", 0);
                sp.setSerialPortParams(9600, SerialPort.DATABITS_8, SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
                sp.setFlowControlMode(sp.FLOWCONTROL_NONE);
                in = sp.getInputStream();
                out = sp.getOutputStream();
                // modem reset
                sendAndRecv("+++AT", 30);       // delay for 20 sec/10
                sendAndRecv("AT&F", 30);
                sendAndRecv("ATE0", 30);        // echo off
                sendAndRecv("AT +CMEE=1", 30);  // verbose error messages
                sendAndRecv("AT+CMGF=0", 70);   // set pdu mode
                // sendAndRecv("AT V1E0S0=0&D2&C1", 1000000);
            } catch (Exception e) {
                System.out.println("Exception " + e);
                System.exit(1);
            }
        }

        private String sendAndRecv(String s, int timeout) {
            try {
                // clean serial port input buffer
                in.skip(in.available());
                System.out.println("=> " + s);
                s = s + "\r"; // add CR
                out.write(s.getBytes());
                out.flush();
                String strIn = new String();
                for (int i = 0; i < timeout; i++) {
                    int numChars = in.available();
                    if (numChars > 0) {
                        byte[] bb = new byte[numChars];
                        in.read(bb, 0, numChars);
                        strIn += new String(bb);
                    }
                    // start exit conditions
                    // ---------------------
                    if (strIn.indexOf(">\r\n") != -1) {
                        break;
                    }
                    if (strIn.indexOf("OK\r\n") != -1) {
                        break;
                    }
                    if (strIn.indexOf("ERROR") != -1) {
                        // if we find 'error', wait for CR+LF
                        if (strIn.indexOf("\r\n", strIn.indexOf("ERROR") + 1) != -1) {
                            break;
                        }
                    }
                    Thread.sleep(100); // delay 1/10 sec
                }
                System.out.println("<= " + strIn);
                if (strIn.length() == 0) {
                    return "ERROR: len 0";
                }
                return strIn;
            } catch (Exception e) {
                System.out.println("send e recv Exception " + e);
                return "ERROR: send e recv Exception";
            }
        }

        public String sendSms(String numToSend, String whatToSend) {
            ComputSmsData sms = new ComputSmsData();
            sms.setAsciiTxt(whatToSend);
            sms.setTelNum(numToSend);
            // sms.setSMSCTelNum("+393359609600"); // SC fixed
            String s = new String();
            s = sendAndRecv("AT+CMGS=" + (sms.getCompletePduData().length() / 2) + "\r", 30);
            // System.out.println("==> AT+CMGS=" + (sms.getCompletePduData().length() / 2));
            // System.out.println("<== " + s);
            if (s.indexOf(">") != -1) {
                // s = sendAndRecv(sms.getSMSCPduData() + sms.getCompletePduData() + "\u001A"); // usefull one day?
                // System.out.println("Inviero questo >>>> " + sms.getCompletePduData());
                // if this syntax won't work, try removing the 00 prefix
                s = sendAndRecv("00" + sms.getCompletePduData() + "\u001A", 150);
                // System.out.println("<== " + s);
                return s;
            } else {
                return "ERROR";
            }
        }

        // used to reset message data
        private void resetGsmObj() {
            lastIndexRead = null;
            senderNum = null;
            smsMsg = null;
        }

        public String checkSms() {
            String str = new String();
            String strGsm = new String();
            strGsm = sendAndRecv("AT+CMGL=0", 30); // list unread msg and sign them as read
            // if answer contains ERROR then ERROR
            if (strGsm.indexOf("ERROR") != -1) {
                resetGsmObj();
                return strGsm; // error
            }
            strGsm = sendAndRecv("AT+CMGL=1", 30); // list read msg
            // if answer contains ERROR then ERROR
            if (strGsm.indexOf("ERROR") != -1) {
                resetGsmObj();
                return strGsm; // error
            }
            // evaluate message index
            if (strGsm.indexOf(':') <= 0) {
                resetGsmObj();
                return ("ERROR unexpected answer");
            }
            str = strGsm.substring(strGsm.indexOf(':') + 1, strGsm.indexOf(','));
            str = str.trim(); // remove white spaces
            // System.out.println("Index: " + str);
            lastIndexRead = str;
            // find message string
            // -------------------
            // look for start point (search \r, then skip \n, add one more for the right char)
            int startPoint = strGsm.indexOf("\r", (strGsm.indexOf(":") + 1)) + 2;
            int endPoint = strGsm.indexOf("\r", startPoint + 1);
            if (endPoint == -1) {
                // only one message
                endPoint = strGsm.length();
            }
            // extract string
            str = strGsm.substring(startPoint, endPoint);
            System.out.println("String to be decoded :" + str);
            ComputSmsData sms = new ComputSmsData();
            sms.setRcvdPdu(str);
            // SMSCNum = new String(sms.getRcvdPduSMSC());
            senderNum = new String(sms.getRcvdSenderNumber());
            smsMsg = new String(sms.getRcvdPduTxt());
            System.out.println("SMSC number: " + sms.getRcvdPduSMSC());
            System.out.println("Sender number: " + sms.getRcvdSenderNumber());
            System.out.println("Message: " + sms.getRcvdPduTxt());
            return "OK";
        }

        public String readSmsSender() {
            return senderNum;
        }

        public String readSms() {
            return smsMsg;
        }

        public String delSms() {
            if (lastIndexRead != "") {
                return sendAndRecv("AT+CMGD=" + lastIndexRead, 30);
            }
            return ("ERROR");
        }
    }

ERROR:

    Error loading SolarisSerial: java.lang.UnsatisfiedLinkError: no SolarisSerialParallel in java.library.path
    Caught java.lang.UnsatisfiedLinkError: com.sun.comm.SolarisDriver.readRegistrySerial(Ljava/util/Vector;Ljava/lang/String;)I while loading driver com.sun.comm.SolarisDriver
    Exception javax.comm.NoSuchPortException

    Read the article

  • Form Loop Error

    - by JM4
I have a form which loops while the counter is less than or equal to the number of enrollees needed. The while loop works perfectly, with one exception: I use DOB fields which ALSO use for loops to display their values. If I remove the DOB fields, the form loop works fine; when they are left in, it errors out. Any ideas?

    <form id="Enroll_Form" action="<?php $_SERVER['PHP_SELF']; ?>" method="post" name="Enroll_Form" >
    <?php $i=1; while ($i <= ($_SESSION['Num_Members'])): {?>
    <table class="demoTable">
      <tr><td>First Name: </td><td><input type="text" name="F1FirstName" value="<?php echo $fields['F1FirstName']; ?>" /></td></tr>
      <tr><td>Middle Initial: </td><td><input type="text" name="F1MI" size="2" maxlength="1" value="<?php echo $fields['F1MI']; ?>" /></td></tr>
      <tr><td>Last Name: </td><td><input type="text" name="F1LastName" value="<?php echo $fields['F1LastName']; ?>" /></td></tr>
      <tr><td>Federation No: </td><td><input type="text" name="F1FedNum" maxlength="10" value="<?php echo $fields['F1FedNum']; ?>" /></td></tr>
      <tr><td>SSN: </td>
        <td><input type="text" name="F1SSN1" size="3" maxlength="3" value="<?php echo $fields['F1SSN1']; ?>" /> -
            <input type="text" name="F1SSN2" size="2" maxlength="2" value="<?php echo $fields['F1SSN2']; ?>" /> -
            <input type="text" name="F1SSN3" size="4" maxlength="4" value="<?php echo $fields['F1SSN3']; ?>" /></td></tr>
      <tr><td>Date of Birth</td>
        <td>
          <select name="F1DOB1">
            <option value="">Month</option>
            <?php for ($i=1; $i<=12; $i++) { echo "<option value='$i'"; if ($fields["F1DOB1"] == $i) echo " selected"; echo ">$i</option>"; } ?>
          </select> /
          <select name="F1DOB2">
            <option value="">Day</option>
            <?php for ($i=1; $i<=31; $i++) { echo "<option value='$i'"; if ($fields["F1DOB2"] == $i) echo " selected"; echo ">$i</option>"; } ?>
          </select> /
          <select name="F1DOB3">
            <option value="">Year</option>
            <?php for ($i=date('Y'); $i>=1900; $i--) { echo "<option value='$i'"; if ($fields["F1DOB3"] == $i) echo " selected"; echo ">$i</option>"; } ?>
          </select>
        </td></tr>
      <tr><td>Address: </td><td><input type="text" name="F1Address" value="<?php echo $fields['F1Address']; ?>" /></td></tr>
      <tr><td>City: </td><td><input type="text" name="F1City" value="<?php echo $fields['F1City']; ?>" /></td></tr>
      <tr><td>State: </td><td><select name="F1State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td></tr>
      <tr><td>Zip Code: </td><td><input type="text" name="F1Zip" size="6" maxlength="5" value="<?php echo $fields['F1Zip']; ?>" /></td></tr>
      <tr><td>Contact Telephone No: </td>
        <td>( <input type="text" name="F1Phone1" size="3" maxlength="3" value="<?php echo $fields['F1Phone1']; ?>" /> )
            <input type="text" name="F1Phone2" size="3" maxlength="3" value="<?php echo $fields['F1Phone2']; ?>" /> -
            <input type="text" name="F1Phone3" size="4" maxlength="4" value="<?php echo $fields['F1Phone3']; ?>" /></td></tr>
      <tr><td>Email:</td><td><input type="text" name="F1Email" value="<?php echo $fields['F1Email']; ?>" /></td></tr>
    </table>
    <br />
    <?php } $i++; endwhile; ?>
    <div align="right"><input class="enrbutton" type="submit" name="submit" value="Continue" /></div>
    </form>
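[Editor's note] A hedged guess at the culprit: the inner month/day/year for loops reuse $i, the same variable that drives the outer while loop, so the counter is clobbered on every iteration (the year loop alone leaves $i at 1899). A minimal sketch of the fix, assuming nothing else depends on the outer $i; $row is a hypothetical name:

    <?php
    // Sketch: give the outer enrollee loop its own counter ($row) so the inner
    // month/day/year loops can keep using $i without resetting it.
    $row = 1;
    while ($row <= $_SESSION['Num_Members']) {
        // ... emit one enrollee's fields here ...
        echo '<select name="F' . $row . 'DOB1"><option value="">Month</option>';
        for ($i = 1; $i <= 12; $i++) {   // $i is now safe to reuse
            echo "<option value='$i'" . ($fields["F{$row}DOB1"] == $i ? " selected" : "") . ">$i</option>";
        }
        echo '</select>';
        $row++;  // advance the enrollee counter exactly once per iteration
    }
    ?>

Reusing $row in the field names (F1FirstName, F2FirstName, ...) would also stop every iteration from emitting the same F1 field names.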

    Read the article

  • scrolling lags in emacs 23.2 with GTK

    - by mefiX
Hey there, I am using emacs 23.2 with the GTK toolkit. I built emacs from source using the following configure params:

    ./configure --prefix=/usr --without-makeinfo --without-sound

which builds emacs with the following configuration:

    Where should the build process find the source code?    /home/****/incoming/emacs-23.2
    What operating system and machine description files should Emacs use?
                                                             `s/gnu-linux.h' and `m/intel386.h'
    What compiler should emacs be built with?                gcc -g -O2 -Wdeclaration-after-statement -Wno-pointer-sign
    Should Emacs use the GNU version of malloc?              yes (Using Doug Lea's new malloc from the GNU C Library.)
    Should Emacs use a relocating allocator for buffers?     yes
    Should Emacs use mmap(2) for buffer allocation?          no
    What window system should Emacs use?                     x11
    What toolkit should Emacs use?                           GTK
    Where do we find X Windows header files?                 Standard dirs
    Where do we find X Windows libraries?                    Standard dirs
    Does Emacs use -lXaw3d?                                  no
    Does Emacs use -lXpm?                                    yes
    Does Emacs use -ljpeg?                                   yes
    Does Emacs use -ltiff?                                   yes
    Does Emacs use a gif library?                            yes -lgif
    Does Emacs use -lpng?                                    yes
    Does Emacs use -lrsvg-2?                                 no
    Does Emacs use -lgpm?                                    yes
    Does Emacs use -ldbus?                                   yes
    Does Emacs use -lgconf?                                  no
    Does Emacs use -lfreetype?                               yes
    Does Emacs use -lm17n-flt?                               no
    Does Emacs use -lotf?                                    yes
    Does Emacs use -lxft?                                    yes
    Does Emacs use toolkit scroll bars?                      yes

When I'm scrolling within files of a common size (about 1000 lines) while holding the up/down keys, emacs almost hangs and produces about 50% CPU load. I use the following plugins: ido, linum, tabbar, auto-complete-config. Starting emacs with -q fixes the problem, but then I don't have any plugins. I can't figure out which part of my .emacs is responsible for this behaviour. Here's an excerpt of my .emacs file:

    (require 'ido)
    (ido-mode 1)

    (require 'linum)
    (global-linum-mode 1)

    (require 'tabbar)
    (tabbar-mode 1)
    (tabbar-local-mode 0)
    (tabbar-mwheel-mode 0)
    (setq tabbar-buffer-groups-function (lambda () (list "All")))
    (global-set-key [M-left] 'tabbar-backward)
    (global-set-key [M-right] 'tabbar-forward)

    ;; hide the toolbar (gtk etc.)
    (tool-bar-mode -1)

    ;; Mouse scrolling enhancements
    (setq mouse-wheel-progressive-speed nil)
    (setq mouse-wheel-scroll-amount '(5 ((shift) . 5) ((control) . nil)))

    ;; Smart-HOME
    (defun smart-beginning-of-line ()
      "Forces the cursor to jump to the first none whitespace char of the current line when pressing HOME"
      (interactive)
      (let ((oldpos (point)))
        (back-to-indentation)
        (and (= oldpos (point))
             (beginning-of-line))))
    (put 'smart-beginning-of-line 'CUA 'move)
    (global-set-key [home] 'smart-beginning-of-line)

    (custom-set-variables
     ;; custom-set-variables was added by Custom.
     ;; If you edit it by hand, you could mess it up, so be careful.
     ;; Your init file should contain only one such instance.
     ;; If there is more than one, they won't work right.
     '(column-number-mode t)
     '(cua-mode t nil (cua-base))
     '(custom-buffer-indent 4)
     '(delete-selection-mode nil)
     '(display-time-24hr-format t)
     '(display-time-day-and-date 1)
     '(display-time-mode t)
     '(global-font-lock-mode t nil (font-lock))
     '(inhibit-startup-buffer-menu t)
     '(inhibit-startup-screen t)
     '(pc-select-meta-moves-sexps t)
     '(pc-select-selection-keys-only t)
     '(pc-selection-mode t nil (pc-select))
     '(scroll-bar-mode (quote right))
     '(show-paren-mode t)
     '(standard-indent 4)
     '(uniquify-buffer-name-style (quote forward) nil (uniquify)))

    (setq-default tab-width 4)
    (setq-default indent-tabs-mode t)
    (setq c-basic-offset 4)

    ;; Highlighting of the current line
    (global-hl-line-mode 1)
    (set-face-background 'hl-line "#E8F2FE")

    (defalias 'yes-or-no-p 'y-or-n-p)
    (display-time)
    (set-language-environment "Latin-1")

    ;; Change cursor color according to mode
    (setq djcb-read-only-color "gray")
    ;; valid values are t, nil, box, hollow, bar, (bar . WIDTH), hbar,
    ;; (hbar. HEIGHT); see the docs for set-cursor-type
    (setq djcb-read-only-cursor-type 'hbar)
    (setq djcb-overwrite-color "red")
    (setq djcb-overwrite-cursor-type 'box)
    (setq djcb-normal-color "black")
    (setq djcb-normal-cursor-type 'bar)

    (defun djcb-set-cursor-according-to-mode ()
      "change cursor color and type according to some minor modes."
      (cond
       (buffer-read-only
        (set-cursor-color djcb-read-only-color)
        (setq cursor-type djcb-read-only-cursor-type))
       (overwrite-mode
        (set-cursor-color djcb-overwrite-color)
        (setq cursor-type djcb-overwrite-cursor-type))
       (t
        (set-cursor-color djcb-normal-color)
        (setq cursor-type djcb-normal-cursor-type))))
    (add-hook 'post-command-hook 'djcb-set-cursor-according-to-mode)

    (define-key global-map '[C-right] 'forward-sexp)
    (define-key global-map '[C-left] 'backward-sexp)
    (define-key global-map '[s-left] 'windmove-left)
    (define-key global-map '[s-right] 'windmove-right)
    (define-key global-map '[s-up] 'windmove-up)
    (define-key global-map '[s-down] 'windmove-down)
    (define-key global-map '[S-down-mouse-1] 'mouse-stay-and-copy)
    (define-key global-map '[C-M-S-down-mouse-1] 'mouse-stay-and-swap)
    (define-key global-map '[S-mouse-2] 'mouse-yank-and-kill)
    (define-key global-map '[C-S-down-mouse-1] 'mouse-stay-and-kill)
    (define-key global-map "\C-a" 'mark-whole-buffer)

    (custom-set-faces
     ;; custom-set-faces was added by Custom.
     ;; If you edit it by hand, you could mess it up, so be careful.
     ;; Your init file should contain only one such instance.
     ;; If there is more than one, they won't work right.
     '(default ((t (:inherit nil :stipple nil :background "#f7f9fa" :foreground "#191919" :inverse-video nil :box nil :strike-through nil :overline nil :underline nil :slant normal :weight normal :height 98 :width normal :foundry "unknown" :family "DejaVu Sans Mono"))))
     '(font-lock-builtin-face ((((class color) (min-colors 88) (background light)) (:foreground "#642880" :weight bold))))
     '(font-lock-comment-face ((((class color) (min-colors 88) (background light)) (:foreground "#3f7f5f"))))
     '(font-lock-constant-face ((((class color) (min-colors 88) (background light)) (:weight bold))))
     '(font-lock-doc-face ((t (:inherit font-lock-string-face :foreground "#3f7f5f"))))
     '(font-lock-function-name-face ((((class color) (min-colors 88) (background light)) (:foreground "Black" :weight bold))))
     '(font-lock-keyword-face ((((class color) (min-colors 88) (background light)) (:foreground "#7f0055" :weight bold))))
     '(font-lock-preprocessor-face ((t (:inherit font-lock-builtin-face :foreground "#7f0055" :weight bold))))
     '(font-lock-string-face ((((class color) (min-colors 88) (background light)) (:foreground "#0000c0"))))
     '(font-lock-type-face ((((class color) (min-colors 88) (background light)) (:foreground "#7f0055" :weight bold))))
     '(font-lock-variable-name-face ((((class color) (min-colors 88) (background light)) (:foreground "Black"))))
     '(minibuffer-prompt ((t (:foreground "medium blue"))))
     '(mode-line ((t (:background "#222222" :foreground "White"))))
     '(tabbar-button ((t (:inherit tabbar-default :foreground "dark red"))))
     '(tabbar-button-highlight ((t (:inherit tabbar-default :background "white" :box (:line-width 2 :color "white")))))
     '(tabbar-default ((t (:background "gray90" :foreground "gray50" :box (:line-width 3 :color "gray90") :height 100))))
     '(tabbar-highlight ((t (:underline t))))
     '(tabbar-selected ((t (:inherit tabbar-default :foreground "blue" :weight bold))))
     '(tabbar-separator ((t nil)))
     '(tabbar-unselected ((t (:inherit tabbar-default)))))

Any suggestions? Kind regards, mefiX
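[Editor's note] One way to bisect this without going all the way to emacs -q: toggle the likely offenders one at a time and retry scrolling. linum-mode in Emacs 23 recomputes line numbers on every redisplay and is a frequent cause of exactly this kind of scroll lag, and the post-command-hook cursor function above runs after every single command, so both are worth testing. A hedged sketch:

    ;; Bisection sketch: evaluate one line at a time (M-x eval-expression or
    ;; *scratch*), then hold the arrow keys and watch CPU usage.
    (global-linum-mode -1)                                              ;; suspect 1: per-redisplay line numbering
    (global-hl-line-mode -1)                                            ;; suspect 2: current-line highlighting
    (remove-hook 'post-command-hook 'djcb-set-cursor-according-to-mode) ;; suspect 3: per-command cursor update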

    Read the article

  • My app crashes when launched on my iPhone

    - by Miky Mike
Hi guys, I have a problem here: my app crashes on my iPhone (JB) though Xcode doesn't complain about anything. The app works fine on the simulator, though. However, there is this in the device logs:

    Thread 0 Crashed:
    0   libSystem.B.dylib      0x00078ac8 kill + 8
    1   libSystem.B.dylib      0x00078ab8 kill + 4
    2   libSystem.B.dylib      0x00078aaa raise + 10
    3   libSystem.B.dylib      0x0008d03a abort + 50
    4   libstdc++.6.dylib      0x00044a20 __gnu_cxx::__verbose_terminate_handler() + 376
    5   libobjc.A.dylib        0x00005958 _objc_terminate + 104
    6   libstdc++.6.dylib      0x00042df2 _cxxabiv1::_terminate(void (*)()) + 46
    7   libstdc++.6.dylib      0x00042e46 std::terminate() + 10
    8   libstdc++.6.dylib      0x00042f16 __cxa_throw + 78
    9   libobjc.A.dylib        0x00004838 objc_exception_throw + 64
    10  CoreFoundation         0x0009fd0e +[NSException raise:format:arguments:] + 62
    11  CoreFoundation         0x0009fd48 +[NSException raise:format:] + 28
    12  Foundation             0x000125d8 -[NSURL(NSURL) initFileURLWithPath:] + 64
    13  Foundation             0x000371e0 +[NSURL(NSURL) fileURLWithPath:] + 24
    14  TheLearningMachine     0x00002d08 0x1000 + 7432
    15  TheLearningMachine     0x00002e8c 0x1000 + 7820
    16  TheLearningMachine     0x00002be4 0x1000 + 7140
    17  TheLearningMachine     0x000029b6 0x1000 + 6582
    18  UIKit                  0x0000e47a -[UIApplication _callInitializationDelegatesForURL:payload:suspended:] + 766
    19  UIKit                  0x000049e0 -[UIApplication _runWithURL:payload:launchOrientation:statusBarStyle:statusBarHidden:] + 200
    20  UIKit                  0x0005dfd6 -[UIApplication handleEvent:withNewEvent:] + 1390
    21  UIKit                  0x0005d8fa -[UIApplication sendEvent:] + 38
    22  UIKit                  0x0005d330 _UIApplicationHandleEvent + 5104
    23  GraphicsServices       0x00005044 PurpleEventCallback + 660
    24  CoreFoundation         0x00034cdc __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION + 20
    25  CoreFoundation         0x00034ca0 __CFRunLoopDoSource1 + 160
    26  CoreFoundation         0x00027566 __CFRunLoopRun + 514
    27  CoreFoundation         0x00027270 CFRunLoopRunSpecific + 224
    28  CoreFoundation         0x00027178 CFRunLoopRunInMode + 52
    29  UIKit                  0x000040fc -[UIApplication _run] + 364
    30  UIKit                  0x00002128 UIApplicationMain + 664
    31  TheLearningMachine     0x00002948 0x1000 + 6472
    32  TheLearningMachine     0x000028fc 0x1000 + 6396

    Thread 1:
    0   libSystem.B.dylib      0x0002d330 kevent + 24
    1   libSystem.B.dylib      0x000d6b6c _dispatch_mgr_invoke + 88
    2   libSystem.B.dylib      0x000d65bc _dispatch_queue_invoke + 96
    3   libSystem.B.dylib      0x000d675c _dispatch_worker_thread2 + 120
    4   libSystem.B.dylib      0x0007a67a _pthread_wqthread + 258
    5   libSystem.B.dylib      0x00073190 start_wqthread + 0

    Thread 2:
    0   libSystem.B.dylib      0x0007b19c __workq_kernreturn + 8
    1   libSystem.B.dylib      0x0007a790 _pthread_wqthread + 536
    2   libSystem.B.dylib      0x00073190 start_wqthread + 0

    Thread 3:
    0   libSystem.B.dylib      0x00000c98 mach_msg_trap + 20
    1   libSystem.B.dylib      0x00002d64 mach_msg + 44
    2   CoreFoundation         0x00027c38 __CFRunLoopServiceMachPort + 88
    3   CoreFoundation         0x000274c2 __CFRunLoopRun + 350
    4   CoreFoundation         0x00027270 CFRunLoopRunSpecific + 224
    5   CoreFoundation         0x00027178 CFRunLoopRunInMode + 52
    6   WebCore                0x000024e2 RunWebThread(void*) + 362
    7   libSystem.B.dylib      0x0007a27e _pthread_start + 242
    8   libSystem.B.dylib      0x0006f2a8 thread_start + 0

    Thread 0 crashed with ARM Thread State:
    r0: 0x00000000    r1: 0x00000000    r2: 0x00000001    r3: 0x3e0862b4
    r4: 0x00000006    r5: 0x0015a2ec    r6: 0x2fffe090    r7: 0x2fffe0a0
    r8: 0x3e1a378c    r9: 0x00000065   r10: 0x33028e5a   r11: 0x3e1ab89c
    ip: 0x00000025    sp: 0x2fffe0a0    lr: 0x30277abf    pc: 0x30277ac8
    cpsr: 0x000f0010

Any idea what the problem can be? I've already spent my whole day on that, but... I'm stuck. Thanks in advance...
Miky Mike

OK, here is more. From the console I get this:

    This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".tty /dev/ttys002
    Loading program into debugger…
    Program loaded.
    target remote-mobile /tmp/.XcodeGDBRemote-17280-65
    Switching to remote-macosx protocol
    mem 0x1000 0x3fffffff cache
    mem 0x40000000 0xffffffff none
    mem 0x00000000 0x0fff none
    run
    Running…
    Error launching remote program: failed to get the task for process 456.
    Error launching remote program: failed to get the task for process 456.
    The program being debugged is not being run.
    The program being debugged is not being run.

    [Session started at 2010-12-23 20:33:33 +0100.]
    GNU gdb 6.3.50-20050815 (Apple version gdb-1472) (Thu Aug 5 05:54:10 UTC 2010)
    Copyright 2004 Free Software Foundation, Inc.
    GDB is free software, covered by the GNU General Public License, and you are
    welcome to change it and/or distribute copies of it under certain conditions.
    Type "show copying" to see the conditions.
    There is absolutely no warranty for GDB. Type "show warranty" for details.
    This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".tty /dev/ttys004
    Loading program into debugger…
    Program loaded.
    target remote-mobile /tmp/.XcodeGDBRemote-17280-72
    Switching to remote-macosx protocol
    mem 0x1000 0x3fffffff cache
    mem 0x40000000 0xffffffff none
    mem 0x00000000 0x0fff none
    run
    Running…
    Error launching remote program: failed to get the task for process 508.
    Error launching remote program: failed to get the task for process 508.
    The program being debugged is not being run.
    The program being debugged is not being run.

And here is the code page that calls the URL:

    #import "TheLearningMachineAppDelegate.h"
    #import "RootViewController.h"

    @implementation TheLearningMachineAppDelegate

    @synthesize window;
    @synthesize navigationController;

    #pragma mark -
    #pragma mark Application lifecycle

    - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
        RootViewController *rootViewController = (RootViewController *)[navigationController topViewController];
        rootViewController.managedObjectContext = self.managedObjectContext;
        [window addSubview:[navigationController view]];
        [window makeKeyAndVisible];
        return YES;
    }

    - (void)applicationWillResignActive:(UIApplication *)application {
        /* Sent when the application is about to move from active to inactive state. This can occur for
           certain types of temporary interruptions (such as an incoming phone call or SMS message) or
           when the user quits the application and it begins the transition to the background state.
           Use this method to pause ongoing tasks, disable timers, and throttle down OpenGL ES frame
           rates. Games should use this method to pause the game. */
    }

    - (void)applicationDidEnterBackground:(UIApplication *)application {
        [self saveContext];
    }

    - (void)applicationWillEnterForeground:(UIApplication *)application {
        /* Called as part of the transition from the background to the inactive state: here you can undo
           many of the changes made on entering the background. */
    }

    - (void)applicationDidBecomeActive:(UIApplication *)application {
        /* Restart any tasks that were paused (or not yet started) while the application was inactive.
           If the application was previously in the background, optionally refresh the user interface. */
    }

    // Method that saves the managed object context before the application terminates.
    - (void)applicationWillTerminate:(UIApplication *)application {
        [self saveContext];
    }

    - (void)saveContext {
        NSError *error = nil;
        if (managedObjectContext != nil) {
            if ([managedObjectContext hasChanges] && ![managedObjectContext save:&error]) {
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
                // Replace this implementation with code to handle the error appropriately.
                // abort() causes the application to generate a crash log and terminate. You should not
                // use this function in a shipping application, although it may be useful during
                // development. If it is not possible to recover from the error, display an alert panel
                // that instructs the user to quit the application by pressing the Home button.
            }
        }
    }

    #pragma mark -
    #pragma mark Core Data stack

    // Returns the managed object context for the application.
    - (NSManagedObjectContext *)managedObjectContext {
        if (managedObjectContext != nil) {
            return managedObjectContext;
        }
        NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
        if (coordinator != nil) {
            managedObjectContext = [[NSManagedObjectContext alloc] init];
            [managedObjectContext setPersistentStoreCoordinator:coordinator];
        }
        return managedObjectContext;
    }

    // Returns the managed object model for the application.
    - (NSManagedObjectModel *)managedObjectModel {
        if (managedObjectModel != nil) {
            return managedObjectModel;
        }
        NSString *modelPath = [[NSBundle mainBundle] pathForResource:@"TheLearningMachine" ofType:@"momd"];
        NSURL *modelURL = [NSURL fileURLWithPath:modelPath];
        managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:modelURL];
        return managedObjectModel;
    }

    #pragma mark -
    #pragma mark Application's Documents directory

    // Returns the path to the application's Documents directory.
    - (NSString *)applicationDocumentsDirectory {
        return [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
    }

    // Returns the persistent store coordinator for the application.
    - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
        if (persistentStoreCoordinator != nil) {
            return persistentStoreCoordinator;
        }
        NSURL *storeURL = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory]
                              stringByAppendingPathComponent:@"TheLearningMachine.sqlite"]];
        NSError *error = nil;
        persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
                              initWithManagedObjectModel:[self managedObjectModel]];
        if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                              configuration:nil URL:storeURL options:nil error:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        return persistentStoreCoordinator;
    }

    #pragma mark -
    #pragma mark Memory management

    - (void)applicationDidReceiveMemoryWarning:(UIApplication *)application {
        /* Free up as much memory as possible by purging cached data objects that can be recreated
           (or reloaded from disk) later. */
    }

    - (void)dealloc {
        [managedObjectContext release];
        [managedObjectModel release];
        [persistentStoreCoordinator release];
        [window release];
        [super dealloc];
    }

    @end
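[Editor's note] A hedged reading of the crash log, not a confirmed diagnosis: frames 12-13 show +[NSURL fileURLWithPath:] throwing during launch, which is what happens when its path argument is nil. In the code above the only launch-time caller is managedObjectModel, so pathForResource:ofType: is probably returning nil on the device (the device's filesystem is case-sensitive, unlike the simulator's, and the compiled model may ship as .mom rather than .momd). A minimal guard that makes the failure visible:

    // Sketch: log and assert the model path before building the URL, and fall
    // back to the .mom variant of the compiled model if .momd is not found.
    NSString *modelPath = [[NSBundle mainBundle] pathForResource:@"TheLearningMachine" ofType:@"momd"];
    if (modelPath == nil) {
        modelPath = [[NSBundle mainBundle] pathForResource:@"TheLearningMachine" ofType:@"mom"];
    }
    NSLog(@"model path = %@", modelPath);  // nil here confirms a missing or renamed bundle resource
    NSAssert(modelPath != nil, @"Managed object model not found in the app bundle");
    NSURL *modelURL = [NSURL fileURLWithPath:modelPath];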

    Read the article

  • EKCalendar not added to iCal

    - by Alex75
I have a strange behavior on my iPhone. I'm creating an application that uses calendar events (EventKit). The class I use is as follows. The .h one:

    #import "GenericManager.h"
    #import <EventKit/EventKit.h>

    #define oneDay  60*60*24
    #define oneHour 60*60

    @protocol CalendarManagerDelegate;

    @interface CalendarManager : GenericManager

    /*
     * Adds an event to a calendar named `name` on the given day. The event to add
     * is retrieved through the dataSource, which is therefore MANDATORY (!= nil).
     *
     * Returns YES only if the delegate conforms to the CalendarManagerDataSource
     * protocol, NO otherwise.
     */
    + (BOOL) addEventForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate:(NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate;

    /*
     * Adds one event per day between fromDate and toDate to a calendar named
     * `name`. The event to add is retrieved through the dataSource, which is
     * therefore MANDATORY (!= nil).
     *
     * Returns YES only if the delegate conforms to the CalendarManagerDataSource
     * protocol, NO otherwise.
     */
    + (BOOL) addEventsForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate:(NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate;

    @end

    @protocol CalendarManagerDelegate <NSObject>
    // sent when the calendar needs information about the event to add
    - (void) calendarManagerDidCreateEvent:(EKEvent *) event;
    @end

The .m one:

    //
    //  CalendarManager.m
    //  AppCampeggioSingolo
    //
    //  Created by CreatiWeb Srl on 12/17/12.
    //  Copyright (c) 2012 CreatiWeb Srl. All rights reserved.
    //

    #import "CalendarManager.h"
    #import "Commons.h"
    #import <objc/message.h>

    @interface CalendarManager ()
    @end

    @implementation CalendarManager

    + (void)requestToEventStore:(EKEventStore *)eventStore delegate:(id)delegate fromDate:(NSDate *)fromDate toDate:(NSDate *)toDate name:(NSString *)name
    {
        if ([eventStore respondsToSelector:@selector(requestAccessToEntityType:completion:)]) {
            // iOS >= 6.0
            [eventStore requestAccessToEntityType:EKEntityTypeEvent completion:^(BOOL granted, NSError *error) {
                if (granted) {
                    [self addEventForCalendarWithName:name fromDate:fromDate toDate:toDate inEventStore:eventStore withDelegate:delegate];
                } else {
                }
            }];
        } else if (class_getClassMethod([EKCalendar class], @selector(calendarIdentifier)) != nil) {
            // iOS >= 5.0 && iOS < 6.0
            [self addEventForCalendarWithName:name fromDate:fromDate toDate:toDate inEventStore:eventStore withDelegate:delegate];
        } else {
            // iOS < 5.0
            EKCalendar *myCalendar = [eventStore defaultCalendarForNewEvents];
            EKEvent *event = [self generateEventForCalendar:myCalendar fromDate:fromDate toDate:toDate inEventStore:eventStore withDelegate:delegate];
            [eventStore saveEvent:event span:EKSpanThisEvent error:nil];
        }
    }

    /*
     * Retrieves the identifier of the calendar associated with the app, or nil
     * if it has never been created.
     */
    + (NSString *) identifierForCalendarName:(NSString *) name
    {
        NSString *confFileName = [self pathForFile:kCurrentCalendarFileName];
        NSDictionary *confCalendar = [NSDictionary dictionaryWithContentsOfFile:confFileName];
        NSString *currentIdentifier = [confCalendar objectForKey:name];
        return currentIdentifier;
    }

    /*
     * Stores the calendar identifier.
     */
    + (void) saveCalendarIdentifier:(NSString *) identifier andName:(NSString *) name
    {
        if (identifier != nil) {
            NSString *confFileName = [self pathForFile:kCurrentCalendarFileName];
            NSMutableDictionary *confCalendar = [NSMutableDictionary dictionaryWithContentsOfFile:confFileName];
            if (confCalendar == nil) {
                confCalendar = [NSMutableDictionary dictionaryWithCapacity:1];
            }
            [confCalendar setObject:identifier forKey:name];
            [confCalendar writeToFile:confFileName atomically:YES];
        }
    }

    + (EKCalendar *)getCalendarWithName:(NSString *)name inEventStore:(EKEventStore *)eventStore withLocalSource:(EKSource *)localSource forceCreation:(BOOL)force
    {
        EKCalendar *myCalendar;
        NSString *identifier = [self identifierForCalendarName:name];
        if (force || identifier == nil) {
            NSLog(@"create new calendar");
            if (class_getClassMethod([EKCalendar class], @selector(calendarForEntityType:eventStore:)) != nil) {
                // iOS 6.0 and later
                myCalendar = [EKCalendar calendarForEntityType:EKEntityTypeEvent eventStore:eventStore];
            } else {
                myCalendar = [EKCalendar calendarWithEventStore:eventStore];
            }
            myCalendar.title = name;
            myCalendar.source = localSource;
            NSError *error = nil;
            BOOL result = [eventStore saveCalendar:myCalendar commit:YES error:&error];
            if (result) {
                NSLog(@"Saved calendar %@ to event store. %@", myCalendar, eventStore);
            } else {
                NSLog(@"Error saving calendar: %@.", error);
            }
            [self saveCalendarIdentifier:myCalendar.calendarIdentifier andName:name];
            // You can also configure properties like the calendar color etc. The important part is to
            // store the identifier for later use. On the other hand, if you already have the identifier,
            // you can just fetch the calendar:
        } else {
            myCalendar = [eventStore calendarWithIdentifier:identifier];
            NSLog(@"fetch an old-one = %@", myCalendar);
        }
        return myCalendar;
    }

    + (EKCalendar *)addEventForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate:(NSDate *) toDate inEventStore:(EKEventStore *)eventStore withDelegate:(id<CalendarManagerDelegate>) delegate
    {
        // iOS 5.0 and later
        EKCalendar *myCalendar;
        EKSource *localSource = nil;
        for (EKSource *source in eventStore.sources) {
            if (source.sourceType == EKSourceTypeLocal) {
                localSource = source;
                break;
            }
        }
        @synchronized(self) {
            myCalendar = [self getCalendarWithName:name inEventStore:eventStore withLocalSource:localSource forceCreation:NO];
            if (myCalendar == nil)
                myCalendar = [self getCalendarWithName:name inEventStore:eventStore withLocalSource:localSource forceCreation:YES];
            NSLog(@"End synchronized block %@", myCalendar);
        }
        EKEvent *event = [self generateEventForCalendar:myCalendar fromDate:fromDate toDate:toDate inEventStore:eventStore withDelegate:delegate];
        [eventStore saveEvent:event span:EKSpanThisEvent error:nil];
        return myCalendar;
    }

    + (EKEvent *) generateEventForCalendar:(EKCalendar *) calendar fromDate:(NSDate *)fromDate toDate:(NSDate *) toDate inEventStore:(EKEventStore *) eventStore withDelegate:(id<CalendarManagerDelegate>) delegate
    {
        EKEvent *event = [EKEvent eventWithEventStore:eventStore];
        event.startDate = fromDate;
        event.endDate = toDate;
        [delegate calendarManagerDidCreateEvent:event];
        [event setCalendar:calendar];
        // search for the event in the calendar; if an identical one is found, don't insert it
        NSPredicate *predicate = [eventStore predicateForEventsWithStartDate:fromDate endDate:toDate calendars:[NSArray arrayWithObject:calendar]];
        NSArray *matchEvents = [eventStore eventsMatchingPredicate:predicate];
        if ([matchEvents count] > 0) {
            // some are already present; check whether one of them is the event we want to insert
            BOOL found = NO;
            for (EKEvent *fetchEvent in matchEvents) {
                if ([fetchEvent.title isEqualToString:event.title] && [fetchEvent.notes isEqualToString:event.notes]) {
                    found = YES;
                    break;
                }
            }
            if (found) {
                // it already exists, so don't insert it
                NSLog(@"OH NOOOOOO!!");
                event = nil;
            }
        }
        return event;
    }

    #pragma mark - Public Methods

    + (BOOL) addEventForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate:(NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate
    {
        BOOL retVal = YES;
        EKEventStore *eventStore = [[EKEventStore alloc] init];
        if ([delegate conformsToProtocol:@protocol(CalendarManagerDelegate)]) {
            [self requestToEventStore:eventStore delegate:delegate fromDate:fromDate toDate:toDate name:name];
        } else {
            retVal = NO;
        }
        return retVal;
    }

    + (BOOL) addEventsForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate:(NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate
    {
        BOOL retVal = YES;
        NSDate *dateCursor = fromDate;
        EKEventStore *eventStore = [[EKEventStore alloc] init];
        if ([delegate conformsToProtocol:@protocol(CalendarManagerDelegate)]) {
            while (retVal && ([dateCursor compare:toDate] == NSOrderedAscending)) {
                NSDate *finish = [dateCursor dateByAddingTimeInterval:oneDay];
                [self requestToEventStore:eventStore delegate:delegate fromDate:dateCursor toDate:finish name:name];
                dateCursor = [dateCursor dateByAddingTimeInterval:oneDay];
            }
        } else {
            retVal = NO;
        }
        return retVal;
    }

    @end

In practice, on my iPhone I get this log every time I add an event:

    fetch an old-one = (null)
    19/12/2012 11:33:09.520 AppCampeggioSingolo [730:8b1b] create new calendar
    19/12/2012 11:33:09.558 AppCampeggioSingolo [730:8b1b] Saved calendar EKCalendar

Then I look in the iCal calendar and I cannot find the event it added. On the iPhone of a friend of mine, however, everything works correctly. I doubt that the problem stems from the code, but I just do not understand what it could be. I searched all day yesterday and part of today on Google but have not found anything yet. Any help will be greatly appreciated.

EDIT: I forgot the call, which is:

    [CalendarManager addEventForCalendarWithName:@"myCalendar" fromDate:fromDate toDate:toDate withDelegate:self];

In the delegate method I simply set the title and notes of the event, like this:

    - (void) calendarManagerDidCreateEvent:(EKEvent *) event {
        event.title = @"the title";
        event.notes = @"some notes";
    }
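[Editor's note] One thing that stands out, offered as a hedged observation rather than a confirmed diagnosis: both save paths call saveEvent:span:error: with error:nil, and generateEventForCalendar: can legitimately return nil, so a failed or skipped save disappears silently. A minimal sketch that surfaces both cases:

    // Sketch: capture the save result and the NSError instead of passing nil.
    NSError *saveError = nil;
    if (event == nil) {
        NSLog(@"No event to save (duplicate detected or creation failed)");
    } else if (![eventStore saveEvent:event span:EKSpanThisEvent error:&saveError]) {
        NSLog(@"Saving event failed: %@", saveError);
    }

Also worth checking on the failing device: when an iCloud account is active, calendars created in the local source (EKSourceTypeLocal) can be hidden from the Calendar app, which would match "saved successfully but never visible".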

    Read the article

  • Inserting Records in Ascending Order function - C homework assignment

    - by Aaron McRuer
Good day, Stack Overflow. I have a homework assignment that I'm working on this weekend, and I'm having a bit of a problem with it. We have a struct Record (which contains information about cars for a dealership) that gets placed in a particular spot in a linked list according to 1) its make and 2) its model year. This is done when initially building the list, when an int insertRecordInAscendingOrder function is called in main. In insertRecordInAscendingOrder, a third function, createRecord, is called, where the list node is created. The function then calls compareCars to determine which elements get put where. Depending on the value returned by this function, insertRecordInAscendingOrder then places the record where it belongs. The list is then printed out. There's more to the assignment, but I'll cross that bridge when I come to it.

Ideally, and for the assignment to be considered correct, the linked list must be ordered as:

    Chevrolet 2012 25
    Chevrolet 2013 10
    Ford 2010 5
    Ford 2011 3
    Ford 2012 15
    Honda 2011 9
    Honda 2012 3
    Honda 2013 12
    Toyota 2009 2
    Toyota 2011 7
    Toyota 2013 20

from a text file that has the data ordered the following way:

    Ford 2012 15
    Ford 2011 3
    Ford 2010 5
    Toyota 2011 7
    Toyota 2012 20
    Toyota 2009 2
    Honda 2011 9
    Honda 2012 3
    Honda 2013 12
    Chevrolet 2013 10
    Chevrolet 2012 25

Notice that the alphabetical order of the make field takes precedence; then the model year is arranged from oldest to newest. However, the program produces this as the final list:

    Chevrolet 2012 25
    Chevrolet 2013 10
    Honda 2011 9
    Honda 2012 3
    Honda 2013 12
    Toyota 2009 2
    Toyota 2011 7
    Toyota 2012 20
    Ford 2010 5
    Ford 2011 3
    Ford 2012 15

I sat down with a grad student and tried to work all of this out yesterday, but we just couldn't figure out why it was kicking the Ford nodes down to the end of the list. Here's the code. As you'll notice, I included a printList call at each instance of the insertion of a node, so you can see just what is happening while the nodes are being put in "order". It is in C99. All function calls must be made as specified, so unfortunately there's no real way of getting around this problem by writing a more efficient algorithm.
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_LINE 50
    #define MAX_MAKE 20

    typedef struct record {
        char *make;
        int year;
        int stock;
        struct record *next;
    } Record;

    int compareCars(Record *car1, Record *car2);
    void printList(Record *head);
    Record* createRecord(char *make, int year, int stock);
    int insertRecordInAscendingOrder(Record **head, char *make, int year, int stock);

    int main(int argc, char **argv)
    {
        FILE *inFile = NULL;
        char line[MAX_LINE + 1];
        char *make, *yearStr, *stockStr;
        int year, stock, len;
        Record* headRecord = NULL;

        /* Input and file diagnostics */
        if (argc != 2) {
            printf("Filename not provided.\n");
            return 1;
        }
        if ((inFile = fopen(argv[1], "r")) == NULL) {
            printf("Can't open the file\n");
            return 2;
        }

        /* obtain values for linked list */
        while (fgets(line, MAX_LINE, inFile)) {
            make = strtok(line, " ");
            yearStr = strtok(NULL, " ");
            stockStr = strtok(NULL, " ");
            year = atoi(yearStr);
            stock = atoi(stockStr);
            insertRecordInAscendingOrder(&headRecord, make, year, stock);
        }
        printf("The original list in ascending order: \n");
        printList(headRecord);
    }

    /* use strcmp to compare two makes */
    int compareCars(Record *car1, Record *car2)
    {
        int compStrResult;
        compStrResult = strcmp(car1->make, car2->make);
        int compYearResult = 0;
        if (car1->year > car2->year) {
            compYearResult = 1;
        } else if (car1->year == car2->year) {
            compYearResult = 0;
        } else {
            compYearResult = -1;
        }
        if (compStrResult == 0) {
            if (compYearResult == 1) {
                return 1;
            } else if (compYearResult == -1) {
                return -1;
            } else {
                return compStrResult;
            }
        } else if (compStrResult == 1) {
            return 1;
        } else {
            return -1;
        }
    }

    int insertRecordInAscendingOrder(Record **head, char *make, int year, int stock)
    {
        Record *previous = *head;
        Record *newRecord = createRecord(make, year, stock);
        Record *current = *head;
        int compResult;

        if (*head == NULL) {
            *head = newRecord;
            printf("Head is null, list was empty\n");
            printList(*head);
            return 1;
        } else if (compareCars(newRecord, *head) == -1) {
            *head = newRecord;
            (*head)->next = current;
            printf("New record was less than the head, replacing\n");
            printList(*head);
            return 1;
        } else {
            printf("standard case, searching and inserting\n");
            previous = *head;
            while (current != NULL && (compareCars(newRecord, current) == 1)) {
                printList(*head);
                previous = current;
                current = current->next;
            }
            printList(*head);
            previous->next = newRecord;
            previous->next->next = current;
        }
        return 1;
    }

    /* creates records from info passed in from main via insertRecordInAscendingOrder */
    Record* createRecord(char *make, int year, int stock)
    {
        printf("CreateRecord\n");
        Record *theRecord;
        int len;
        if (!make) {
            return NULL;
        }
        theRecord = malloc(sizeof(Record));
        if (!theRecord) {
            printf("Unable to allocate memory for the structure.\n");
            return NULL;
        }
        theRecord->year = year;
        theRecord->stock = stock;
        len = strlen(make);
        theRecord->make = malloc(len + 1);
        strncpy(theRecord->make, make, len);
        theRecord->make[len] = '\0';
        theRecord->next = NULL;
        return theRecord;
    }

    /* prints the list */
    void printList(Record *head)
    {
        int i;
        int j = 50;
        Record *aRecord;
        aRecord = head;
        for (i = 0; i < j; i++) {
            printf("-");
        }
        printf("\n");
        printf("%20s%20s%10s\n", "Make", "Year", "Stock");
        for (i = 0; i < j; i++) {
            printf("-");
        }
        printf("\n");
        while (aRecord != NULL) {
            printf("%20s%20d%10d\n", aRecord->make, aRecord->year, aRecord->stock);
            aRecord = aRecord->next;
        }
        printf("\n");
    }

The text file you'll need as a command-line argument can be saved under any name you like; here are the contents:

    Ford 2012 15
    Ford 2011 3
    Ford 2010 5
    Toyota 2011 7
    Toyota 2012 20
    Toyota 2009 2
    Honda 2011 9
    Honda 2012 3
    Honda 2013 12
    Chevrolet 2013 10
    Chevrolet 2012 25

Thanks in advance for your help. I shall continue to plow away at it myself.
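[Editor's note] A hedged pointer at the likely bug: strcmp() promises only a negative, zero, or positive value, not exactly -1/0/1, so testing compStrResult == 1 misclassifies, for example, strcmp("Toyota", "Ford") (which is 14 with a typical libc, since 'T' - 'F' = 14) as "not greater", and the else branch returns -1. That inserts every later make in front of the Fords and leaves the Fords at the tail, matching the output shown. A sign-safe sketch with the same signature:

    /* Sketch: normalise the sign of strcmp() before comparing it to +/-1. */
    int compareCars(Record *car1, Record *car2)
    {
        int byMake = strcmp(car1->make, car2->make);
        if (byMake != 0)
            return (byMake > 0) ? 1 : -1;   /* different makes: alphabetical order  */
        if (car1->year != car2->year)       /* same make: oldest model year first   */
            return (car1->year > car2->year) ? 1 : -1;
        return 0;
    }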

    Read the article

  • C# counter requires 2 button clicks to update

    - by marko.ivanovski.nz
Hi, I have a problem that has been bugging me all day. In my code I have the following:

    private int rowCount
    {
        get { return (int)ViewState["rowCount"]; }
        set { ViewState["rowCount"] = value; }
    }

and a button event:

    protected void addRow_Click(object sender, EventArgs e)
    {
        rowCount = rowCount + 1;
    }

Then on Page_Load I read that value and create controls accordingly. I understand the button event fires AFTER Page_Load fires, so the value isn't updated until the next postback. A real nightmare. Here's the entire code:

    protected void Page_Load(object sender, EventArgs e)
    {
        string xmlValue = ""; // To read a value from a database
        if (xmlValue.Length > 0)
        {
            if (!Page.IsPostBack)
            {
                DataSet ds = XMLToDataSet(xmlValue);
                Table dimensionsTable = DataSetToTable(ds);
                tablePanel.Controls.Add(dimensionsTable);
                DataTable dt = ds.Tables["Dimensions"];
                rowCount = dt.Rows.Count;
                colCount = dt.Columns.Count;
            }
            else
            {
                tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount)));
            }
        }
        else
        {
            if (!Page.IsPostBack)
            {
                rowCount = 2;
                colCount = 4;
            }
            tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount)));
        }
    }

    protected void submit_Click(object sender, EventArgs e)
    {
        resultsLabel.Text = Server.HtmlEncode(DataSetToStringXML(TableToDataSet((Table)tablePanel.Controls[0])));
    }

    protected void addColumn_Click(object sender, EventArgs e)
    {
        colCount = colCount + 1;
    }

    protected void addRow_Click(object sender, EventArgs e)
    {
        rowCount = rowCount + 1;
    }

    public DataSet TableToDataSet(Table table)
    {
        DataSet ds = new DataSet();
        DataTable dt = new DataTable("Dimensions");
        ds.Tables.Add(dt);
        // Add headers
        for (int i = 0; i < table.Rows[0].Cells.Count; i++)
        {
            DataColumn col = new DataColumn();
            TextBox headerTxtBox = (TextBox)table.Rows[0].Cells[i].Controls[0];
            col.ColumnName = headerTxtBox.Text;
            col.Caption = headerTxtBox.Text;
            dt.Columns.Add(col);
        }
        for (int i = 0; i < table.Rows.Count; i++)
        {
            DataRow valueRow = dt.NewRow();
            for (int x = 0; x < table.Rows[i].Cells.Count; x++)
            {
                TextBox valueTextBox = (TextBox)table.Rows[i].Cells[x].Controls[0];
                valueRow[x] = valueTextBox.Text;
            }
            dt.Rows.Add(valueRow);
        }
        return ds;
    }

    public Table DataSetToTable(DataSet ds)
    {
        DataTable dt = ds.Tables["Dimensions"];
        Table newTable = new Table();
        // Add headers
        TableRow headerRow = new TableRow();
        for (int i = 0; i < dt.Columns.Count; i++)
        {
            TableCell headerCell = new TableCell();
            TextBox headerTxtBox = new TextBox();
            headerTxtBox.ID = "HeadersTxtBox" + i.ToString();
            headerTxtBox.Font.Bold = true;
            headerTxtBox.Text = dt.Columns[i].ColumnName;
            headerCell.Controls.Add(headerTxtBox);
            headerRow.Cells.Add(headerCell);
        }
        newTable.Rows.Add(headerRow);
        // Add value rows
        for (int i = 0; i < dt.Rows.Count; i++)
        {
            TableRow valueRow = new TableRow();
            for (int x = 0; x < dt.Columns.Count; x++)
            {
                TableCell valueCell = new TableCell();
                TextBox valueTxtBox = new TextBox();
                valueTxtBox.ID = "ValueTxtBox" + i.ToString() + i + x + x.ToString();
                valueTxtBox.Text = dt.Rows[i][x].ToString();
                valueCell.Controls.Add(valueTxtBox);
                valueRow.Cells.Add(valueCell);
            }
            newTable.Rows.Add(valueRow);
        }
        return newTable;
    }

    public DataSet DefaultDataSet(int rows, int cols)
    {
        DataSet ds = new DataSet();
        DataTable dt = new DataTable("Dimensions");
        ds.Tables.Add(dt);

        DataColumn nameCol = new DataColumn();
        nameCol.Caption = "Name";
        nameCol.ColumnName = "Name";
        nameCol.DataType = System.Type.GetType("System.String");
        dt.Columns.Add(nameCol);

        DataColumn widthCol = new DataColumn();
        widthCol.Caption = "Width";
        widthCol.ColumnName = "Width";
        widthCol.DataType = System.Type.GetType("System.String");
        dt.Columns.Add(widthCol);

        if (cols > 2)
        {
            DataColumn heightCol = new DataColumn();
            heightCol.Caption = "Height";
            heightCol.ColumnName = "Height";
            heightCol.DataType = System.Type.GetType("System.String");
            dt.Columns.Add(heightCol);
        }
        if (cols > 3)
        {
            DataColumn depthCol = new DataColumn();
            depthCol.Caption = "Depth";
            depthCol.ColumnName = "Depth";
            depthCol.DataType = System.Type.GetType("System.String");
            dt.Columns.Add(depthCol);
        }
        if (cols > 4)
        {
            int newColCount = cols - 4;
            for (int i = 0; i < newColCount; i++)
            {
                DataColumn newCol = new DataColumn();
                newCol.Caption = "New " + i.ToString();
                newCol.ColumnName = "New " + i.ToString();
                newCol.DataType = System.Type.GetType("System.String");
                dt.Columns.Add(newCol);
            }
        }
        for (int i = 0; i < rows; i++)
        {
            DataRow newRow = dt.NewRow();
            newRow["Name"] = "Name " + i.ToString();
            newRow["Width"] = "Width " + i.ToString();
            if (cols > 2)
            {
                newRow["Height"] = "Height " + i.ToString();
            }
            if (cols > 3)
            {
                newRow["Depth"] = "Depth " + i.ToString();
            }
            dt.Rows.Add(newRow);
        }
        return ds;
    }

    public DataSet XMLToDataSet(string xml)
    {
        StringReader sr = new StringReader(xml);
        DataSet ds = new DataSet();
        ds.ReadXml(sr);
        return ds;
    }

    public string DataSetToStringXML(DataSet ds)
    {
        XmlDocument _XMLDoc = new XmlDocument();
        _XMLDoc.LoadXml(ds.GetXml());
        StringWriter sw = new StringWriter();
        XmlTextWriter xw = new XmlTextWriter(sw);
        XmlDocument xml = _XMLDoc;
        xml.WriteTo(xw);
        return sw.ToString();
    }

    private int rowCount
    {
        get { return (int)ViewState["rowCount"]; }
        set { ViewState["rowCount"] = value; }
    }

    private int colCount
    {
        get { return (int)ViewState["colCount"]; }
        set { ViewState["colCount"] = value; }
    }

Thanks in advance, Marko
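[Editor's note] A hedged sketch of one common fix for the ordering problem: in the ASP.NET page lifecycle, control Click handlers run after Page_Load but before PreRender, so rebuilding the table in OnPreRender picks up the incremented counter in the same request instead of one click later.

    // Sketch: rebuild the table after all postback events have fired.
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        tablePanel.Controls.Clear(); // drop the table built in Page_Load
        tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount)));
    }

This only illustrates the ordering fix; the more robust pattern for dynamic controls is to recreate them in Page_Init with stable IDs so ViewState and posted values reattach to the text boxes.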

    Read the article

  • Unnecessary Error Message Being Displayed

    - by ThatMacLad
I've set up a form to update my blog, and it was working fine up until this morning. It keeps turning up an "Invalid entry ID" error on the edit-post page when I click the update button, despite the fact that it updates the homepage. All help is seriously appreciated.

    <html>
    <head>
    <title>Ultan's Blog | New Post</title>
    <link rel="stylesheet" href="css/editpost.css" type="text/css" />
    </head>
    <body>
    <div class="new-form">
    <div class="header"> </div>
    <div class="form-bg">
    <?php
    mysql_connect('localhost', 'root', 'root');
    mysql_select_db('tmlblog');

    if (isset($_POST['update'])) {
        $id = htmlspecialchars(strip_tags($_POST['id']));
        $month = htmlspecialchars(strip_tags($_POST['month']));
        $date = htmlspecialchars(strip_tags($_POST['date']));
        $year = htmlspecialchars(strip_tags($_POST['year']));
        $time = htmlspecialchars(strip_tags($_POST['time']));
        $entry = $_POST['entry'];
        $title = htmlspecialchars(strip_tags($_POST['title']));
        if (isset($_POST['password']))
            $password = htmlspecialchars(strip_tags($_POST['password']));
        else
            $password = "";
        $entry = nl2br($entry);
        if (!get_magic_quotes_gpc()) {
            $title = addslashes($title);
            $entry = addslashes($entry);
        }
        $timestamp = strtotime($month . " " . $date . " " . $year . " " . $time);
        $result = mysql_query("UPDATE php_blog SET timestamp='$timestamp', title='$title', entry='$entry', password='$password' WHERE id='$id' LIMIT 1")
            or print ("Can't update entry.<br />" . mysql_error());
        header("Location: post.php?id=" . $id);
    }

    if (isset($_POST['delete'])) {
        $id = (int)$_POST['id'];
        $result = mysql_query("DELETE FROM php_blog WHERE id='$id'")
            or print ("Can't delete entry.<br />" . mysql_error());
        if ($result != false) {
            print "The entry has been successfully deleted from the database.";
            exit;
        }
    }

    if (!isset($_GET['id']) || empty($_GET['id']) || !is_numeric($_GET['id'])) {
        die("Invalid entry ID.");
    } else {
        $id = (int)$_GET['id'];
    }

    $result = mysql_query("SELECT * FROM php_blog WHERE id='$id'")
        or print ("Can't select entry.<br />" . $sql . "<br />" . mysql_error());

    while ($row = mysql_fetch_array($result)) {
        $old_timestamp = $row['timestamp'];
        $old_title = stripslashes($row['title']);
        $old_entry = stripslashes($row['entry']);
        $old_password = $row['password'];
        $old_title = str_replace('"', '\'', $old_title);
        $old_entry = str_replace('<br />', '', $old_entry);
        $old_month = date("F", $old_timestamp);
        $old_date = date("d", $old_timestamp);
        $old_year = date("Y", $old_timestamp);
        $old_time = date("H:i", $old_timestamp);
    }
    ?>
    <form method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>">
    <p><input type="hidden" name="id" value="<?php echo $id; ?>" />
    <strong><label for="month">Date (month, day, year):</label></strong>
    <select name="month" id="month">
      <option value="<?php echo $old_month; ?>"><?php echo $old_month; ?></option>
      <option value="January">January</option>
      <option value="February">February</option>
      <option value="March">March</option>
      <option value="April">April</option>
      <option value="May">May</option>
      <option value="June">June</option>
      <option value="July">July</option>
      <option value="August">August</option>
      <option value="September">September</option>
      <option value="October">October</option>
      <option value="November">November</option>
      <option value="December">December</option>
    </select>
    <input type="text" name="date" id="date" size="2" value="<?php echo $old_date; ?>" />
    <select name="year" id="year">
      <option value="<?php echo $old_year; ?>"><?php echo $old_year; ?></option>
      <option value="2004">2004</option>
      <option value="2005">2005</option>
      <option value="2006">2006</option>
      <option value="2007">2007</option>
      <option value="2008">2008</option>
      <option value="2009">2009</option>
      <option value="2010">2010</option>
    </select>
    <strong><label for="time">Time:</label></strong>
    <input type="text" name="time" id="time" size="5" value="<?php echo $old_time; ?>" /></p>
    <p><strong><label for="title">Title:</label></strong>
    <input type="text" name="title" id="title" value="<?php echo $old_title; ?>" size="40" /></p>
    <p><strong><label for="password">Password protect?</label></strong>
    <input type="checkbox" name="password" id="password" value="1"<?php if ($old_password == 1) echo " checked=\"checked\""; ?> /></p>
    <p><textarea cols="80" rows="20" name="entry" id="entry"><?php echo $old_entry; ?></textarea></p>
    <p><input type="submit" name="update" id="update" value="Update"></p>
    </form>
    <p><strong>Be absolutely sure that this is the post that you wish to remove from the blog!</strong><br /></p>
    <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post">
    <input type="hidden" name="id" id="id" value="<?php echo $id; ?>" />
    <input type="submit" name="delete" id="delete" value="Delete" />
    </form>
    </div>
    </div>
    </div>
    <div class="bottom"></div>
    </body>
    </html>
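[Editor's note] A hedged reading of the symptom: the update handler calls header("Location: ...") after the page has already emitted HTML, so the redirect cannot be sent; execution then falls through to the $_GET['id'] check, which die()s with "Invalid entry ID" even though the UPDATE itself succeeded. A minimal sketch of the usual fix:

    <?php
    // Sketch: handle the POST before a single byte of HTML is output, and
    // exit after the redirect so the GET-based validation below never runs.
    if (isset($_POST['update'])) {
        // ... run the UPDATE query here ...
        header('Location: post.php?id=' . (int)$_POST['id']);
        exit;
    }
    // Alternative: carry the id in the form action so $_GET['id'] survives the POST:
    // echo '<form method="post" action="' . $_SERVER['PHP_SELF'] . '?id=' . $id . '">';
    ?>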

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
Recently I've designed and configured a 4-node cluster for a webapp that does lots of file handling. The cluster has been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does an NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients.

In the storage servers I've created a GFS2 FS to hold the data, which is wired to drbd. I chose GFS2 mainly because of the announced performance and also because of the volume size, which has to be pretty high.

Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines:

    Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
    Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
    Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

In this case, the hang lasted for 16 seconds, but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was that this was happening due to heavy load on the NFS mount and that by increasing RPCNFSDCOUNT to a higher value this would become stable.
I've increased it several times and apparently, after a while, the log entries started appearing less often. The value is now 32.

After further investigating the issue, I came across a different hang, even though the NFS messages still appear in the logs. Sometimes the GFS2 FS simply hangs, which causes both NFS and the storage webserver to stop serving files. Both stay hung for a while and then resume normal operations. This hang leaves no trace on the client side (and also leaves no NFS "not responding" messages), and on the storage side the log system appears to be empty, even though rsyslogd is running.

The nodes connect through a 10 Gbps non-dedicated connection, but I don't think this is an issue, because the GFS2 hang is confirmed even when connecting directly to the active storage server.

I've been trying to solve this for a while now, and I tried different NFS configuration options before I found out the GFS2 FS was also hanging. The NFS mount is exported as such:

    /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

And the NFS client mounts with:

    mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

After some tests, these were the configurations that yielded the most performance for the cluster.

I am desperate to find a solution for this, as the cluster is already in production mode and I need to fix it so that these hangs won't happen in the future, and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening under heavy load, as I tested the cluster earlier and these problems weren't happening at all.

Please tell me if you need me to provide configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different FS, but I need some solid pointers on whether this will solve the problem, as the volume size is extremely large at this point.

The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards.

EDIT 1: The servers are physical servers and their specs are:

Webservers:
    Intel Bi Xeon E5606 2x4 2.13GHz
    24GB DDR3
    Intel SSD 320 2 x 120GB RAID 1

Storage:
    Intel i5 3550 3.3GHz
    16GB DDR3
    12 x 2TB SATA

Initially there was a VRack setup between the servers, but we upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10 Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP failover) to connect between them and to allow for a graceful failover.

NFS is therefore over a public connection and not on any private network (it was that way before the upgrade, where the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connections between the servers or to the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps in figuring out the problem.
EDIT 2: Relevant software versions:

CentOS 2.6.32-279.9.1.el6.x86_64
nfs-utils-1.2.3-26.el6.x86_64
nfs-utils-lib-1.1.5-4.el6.x86_64
gfs2-utils-3.0.12.1-32.el6_3.1.x86_64
kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
drbd84-utils-8.4.2-1.el6.elrepo.x86_64

DRBD configuration on the storage servers:

#/etc/drbd.d/storage.res
resource storage {
    protocol C;

    on <server1 fqdn> {
        device /dev/drbd0;
        disk /dev/vg_storage/LV_replicated;
        address <server1 ip>:7788;
        meta-disk internal;
    }

    on <server2 fqdn> {
        device /dev/drbd0;
        disk /dev/vg_storage/LV_replicated;
        address <server2 ip>:7788;
        meta-disk internal;
    }
}

NFS configuration on the storage servers:

#/etc/sysconfig/nfs
RPCNFSDCOUNT=32
STATD_PORT=10002
STATD_OUTGOING_PORT=10003
MOUNTD_PORT=10004
RQUOTAD_PORT=10005
LOCKD_UDPPORT=30001
LOCKD_TCPPORT=30001

(Can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?)

GFS2 configuration:

# gfs2_tool gettune <mountpoint>
incore_log_blocks = 1024
log_flush_secs = 60
quota_warn_period = 10
quota_quantum = 60
max_readahead = 262144
complain_secs = 10
statfs_slow = 0
quota_simul_sync = 64
statfs_quantum = 30
quota_scale = 1.0000 (1, 1)
new_files_jdata = 0

Storage network environment:

eth0      Link encap:Ethernet  HWaddr <mac address>
          inet addr:<ip address>  Bcast:<bcast address>  Mask:<ip mask>
          inet6 addr: <ip address> Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2630984979622 (2.3 TiB)  TX bytes:1648430431523 (1.4 TiB)

eth0:0    Link encap:Ethernet  HWaddr <mac address>
          inet addr:<ip failover address>  Bcast:<bcast address>  Mask:<ip mask>
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

The IP addresses are statically assigned with the following network configurations:

DEVICE="eth0"
BOOTPROTO="static"
HWADDR=<mac address>
ONBOOT="yes"
TYPE="Ethernet"
IPADDR=<ip address>
NETMASK=<net mask>

and

DEVICE="eth0:0"
BOOTPROTO="static"
HWADDR=<mac address>
IPADDR=<ip failover>
NETMASK=<net mask>
ONBOOT="yes"
BROADCAST=<bcast address>

Hosts file to allow for a graceful NFS failover, in conjunction with NFS option fsid=25 set on both storage servers:

#/etc/hosts
<storage ip failover address> active.storage.vlan
<webserver ip failover address> active.service.vlan

As you can see, packet errors are down to 0. I've also run ping for a long time without any packet loss. The MTU size is the normal 1500. As there is no VLAN for now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key reason for me to think this is some kind of heavy-load problem with either NFS or GFS2. If you need further configuration details, please tell me.

EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow, and I was not able to serve a single file through either NFS or httpd, perhaps due to cache warming. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which waits for a lock and ends up starving after a while:

INFO: task nfsd:3029 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • PC hangs and reboots from time to time

    - by Bevor
Hello, I have a very strange problem: since I got my new PC, I have always had problems with it. From time to time the computer freezes for some seconds and then suddenly reboots by itself. I've had this problem since Ubuntu 9.10, and it's the same with 10.04 and 10.10. That's why I don't think it's a software failure: the problem has persisted for too long. It doesn't have anything to do with what I'm doing at the time. Sometimes I listen to music, sometimes I only use Firefox, sometimes I'm running 2 or 3 VMs, sometimes I watch a DVD. So it's not isolatable. It can freeze once a day or once a week. I took the PC to the vendor twice(!). The first time they changed my power supply, but the problem persisted. The second time they told me that they ran heavy performance tests for 50 hours but didn't find anything. (How can it be that I have daily freezes with normal usage?) The vendor didn't check the hard discs because they used their own disc with Windows. (So they never checked the Linux installation.) Yesterday I made some intensive hard disc scans with SMART, but no errors were found. I ran memtest 3 times, but no errors were found. I already had this problem in my old flat, so I doubt it has something to do with power fluctuations. I already tried another electrical socket and changed the power strip, but the problem persists. At the moment I have removed 2 of the RAM modules (2x 2GB). In all I have 6GB: 2x2GB and 2x1GB. Could this difference maybe be a problem? Here is a list of my components; I hope somebody finds something I didn't think about yet:

1x AMD Phenom II X4 965 Black Edition, 3,4Ghz, Quad Core, S-AM3, Boxed
2x DDR3-RAM 2048MB, PC3-1333 Mhz, CL9, Kingston ValueRAM
2x DDR3-RAM 1024MB, PC3-1333 Mhz, CL9, Kingston ValueRAM
2x SATA II Seagate Barracuda 7200.12, 1TB 32MB Cache = RAID 1
1x DVD ROM SATA LG DH16NSR, 16x/52x
1x DVD-+R/-+RW SATA LG GH-22NS50
1x Cardreader 18in1
1x PCI-E 2.0 GeForce GTS 250, Retail, 1024MB
1x Power Supply ATX 400 Watt, CHIEFTEC APS-400S, 80 Plus
1x Network card PCI Intel PRO/1000GT 10/100/1000 MBit
1x Mainboard Socket-AM3 ASUS M4A79XTD EVO, ATX

lshw: description: Desktop Computer product: System Product Name vendor: System manufacturer version: System Version serial: System Serial Number width: 64 bits capabilities: smbios-2.5 dmi-2.5 vsyscall64 vsyscall32 configuration: boot=normal chassis=desktop uuid=80E4001E-8C00-002C-AA59-E0CB4EBAC29A *-core description: Motherboard product: M4A79XTD EVO vendor: ASUSTeK Computer INC. physical id: 0 version: Rev X.0X serial: MT709CK11101196 slot: To Be Filled By O.E.M. *-firmware description: BIOS vendor: American Megatrends Inc. physical id: 0 version: 0704 (11/25/2009) size: 64KiB capacity: 960KiB capabilities: isa pci pnp apm upgrade shadowing escd cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer int10video acpi usb ls120boot zipboot biosbootspecification *-cpu description: CPU product: AMD Phenom(tm) II X4 965 Processor vendor: Advanced Micro Devices [AMD] physical id: 4 bus info: cpu@0 version: AMD Phenom(tm) II X4 965 Processor serial: To Be Filled By O.E.M.
slot: AM3 size: 800MHz capacity: 3400MHz width: 64 bits clock: 200MHz capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp x86-64 3dnowext 3dnow constant_tsc rep_good nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt npt lbrv svm_lock nrip_save cpufreq *-cache:0 description: L1 cache physical id: 5 slot: L1-Cache size: 512KiB capacity: 512KiB capabilities: pipeline-burst internal varies data *-cache:1 description: L2 cache physical id: 6 slot: L2-Cache size: 2MiB capacity: 2MiB capabilities: pipeline-burst internal varies unified *-cache:2 description: L3 cache physical id: 7 slot: L3-Cache size: 6MiB capacity: 6MiB capabilities: pipeline-burst internal varies unified *-memory description: System Memory physical id: 36 slot: System board or motherboard size: 2GiB *-bank:0 description: DIMM Synchronous 1333 MHz (0.8 ns) product: ModulePartNumber00 vendor: Manufacturer00 physical id: 0 serial: SerNum00 slot: DIMM0 size: 1GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:1 description: DIMM Synchronous 1333 MHz (0.8 ns) product: ModulePartNumber01 vendor: Manufacturer01 physical id: 1 serial: SerNum01 slot: DIMM1 size: 1GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:2 description: DIMM [empty] product: ModulePartNumber02 vendor: Manufacturer02 physical id: 2 serial: SerNum02 slot: DIMM2 *-bank:3 description: DIMM [empty] product: ModulePartNumber03 vendor: Manufacturer03 physical id: 3 serial: SerNum03 slot: DIMM3 *-pci:0 description: Host bridge product: RD780 Northbridge only dual slot PCI-e_GFX and HT1 K8 part vendor: ATI Technologies Inc physical id: 100 bus info: pci@0000:00:00.0 version: 00 width: 32 bits clock: 66MHz *-pci:0 description: PCI bridge product: RD790 PCI to PCI bridge (external gfx0 port A) vendor: ATI Technologies Inc physical id: 2 bus info: pci@0000:00:02.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:a000(size=4096) memory:f8000000-fbbfffff ioport:d0000000(size=268435456) *-display description: VGA compatible controller product: G92 [GeForce GTS 250] vendor: nVidia Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a2 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:18 memory:fa000000-faffffff memory:d0000000-dfffffff memory:f8000000-f9ffffff ioport:ac00(size=128) memory:fbbe0000-fbbfffff *-pci:1 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port C) vendor: ATI Technologies Inc physical id: 6 bus info: pci@0000:00:06.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:41 ioport:b000(size=4096) memory:fbc00000-fbcfffff ioport:f6f00000(size=1048576) *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. 
physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 03 serial: e0:cb:4e:ba:c2:9a size: 10MB/s capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s resources: irq:45 ioport:b800(size=256) memory:f6fff000-f6ffffff memory:f6ff8000-f6ffbfff memory:fbcf0000-fbcfffff *-pci:2 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port D) vendor: ATI Technologies Inc physical id: 7 bus info: pci@0000:00:07.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:42 ioport:c000(size=4096) memory:fbd00000-fbdfffff *-firewire description: FireWire (IEEE 1394) product: VT6315 Series Firewire Controller vendor: VIA Technologies, Inc. physical id: 0 bus info: pci@0000:03:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress ohci bus_master cap_list configuration: driver=firewire_ohci latency=0 resources: irq:19 memory:fbdff800-fbdfffff ioport:c800(size=256) *-pci:3 description: PCI bridge product: RD790 PCI to PCI bridge (PCI express gpp port E) vendor: ATI Technologies Inc physical id: 9 bus info: pci@0000:00:09.0 version: 00 width: 32 bits clock: 33MHz capabilities: pci pm pciexpress msi ht normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:43 ioport:d000(size=4096) memory:fbe00000-fbefffff *-ide description: IDE interface product: 88SE6121 SATA II Controller vendor: Marvell Technology Group Ltd. 
physical id: 0 bus info: pci@0000:04:00.0 version: b2 width: 32 bits clock: 33MHz capabilities: ide pm msi pciexpress bus_master cap_list configuration: driver=pata_marvell latency=0 resources: irq:17 ioport:dc00(size=8) ioport:d880(size=4) ioport:d800(size=8) ioport:d480(size=4) ioport:d400(size=16) memory:fbeffc00-fbefffff *-storage description: SATA controller product: SB700/SB800 SATA Controller [IDE mode] vendor: ATI Technologies Inc physical id: 11 bus info: pci@0000:00:11.0 logical name: scsi0 logical name: scsi2 version: 00 width: 32 bits clock: 66MHz capabilities: storage msi ahci_1.0 bus_master cap_list emulated configuration: driver=ahci latency=64 resources: irq:44 ioport:9000(size=8) ioport:8000(size=4) ioport:7000(size=8) ioport:6000(size=4) ioport:5000(size=16) memory:f7fffc00-f7ffffff *-disk:0 description: ATA Disk product: ST31000528AS vendor: Seagate physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: CC38 serial: 9VP3WD9Z size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=000ad206 *-volume:0 UNCLAIMED description: Linux filesystem partition vendor: Linux physical id: 1 bus info: scsi@0:0.0.0,1 version: 1.0 serial: 81839235-21ea-4853-90a4-814779f49000 size: 972MiB capacity: 972MiB capabilities: primary ext2 initialized configuration: filesystem=ext2 modified=2010-12-06 18:32:58 mounted=2010-11-01 07:05:10 state=unknown *-volume:1 UNCLAIMED description: Linux swap volume physical id: 2 bus info: scsi@0:0.0.0,2 version: 1 serial: 22b881d5-6f5c-484d-94e8-e231896fa91b size: 486MiB capacity: 486MiB capabilities: primary nofs swap initialized configuration: filesystem=swap pagesize=4096 *-volume:2 UNCLAIMED description: EXT3 volume vendor: Linux physical id: 3 bus info: scsi@0:0.0.0,3 version: 1.0 serial: ad5b0daf-11e8-4f8f-8598-4e89da9c0d84 size: 47GiB capacity: 47GiB capabilities: primary journaled extended_attributes large_files recover ext3 ext2 initialized configuration: created=2010-02-16 20:42:29 filesystem=ext3 modified=2010-11-29 17:02:34 mounted=2010-12-06 18:32:50 state=clean *-volume:3 UNCLAIMED description: Extended partition physical id: 4 bus info: scsi@0:0.0.0,4 size: 882GiB capacity: 882GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume UNCLAIMED description: Linux filesystem partition physical id: 5 capacity: 882GiB *-disk:1 description: ATA Disk product: ST31000528AS vendor: Seagate physical id: 1 bus info: scsi@2:0.0.0 logical name: /dev/sdb version: CC38 serial: 9VP3SCPF size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=000ad206 *-volume:0 UNCLAIMED description: Linux filesystem partition vendor: Linux physical id: 1 bus info: scsi@2:0.0.0,1 version: 1.0 serial: 81839235-21ea-4853-90a4-814779f49000 size: 972MiB capacity: 972MiB capabilities: primary ext2 initialized configuration: filesystem=ext2 modified=2010-12-06 18:32:58 mounted=2010-11-01 07:05:10 state=unknown *-volume:1 UNCLAIMED description: Linux swap volume physical id: 2 bus info: scsi@2:0.0.0,2 version: 1 serial: 22b881d5-6f5c-484d-94e8-e231896fa91b size: 486MiB capacity: 486MiB capabilities: primary nofs swap initialized configuration: filesystem=swap pagesize=4096 *-volume:2 UNCLAIMED description: EXT3 volume vendor: Linux physical id: 3 bus info: scsi@2:0.0.0,3 version: 1.0 serial: ad5b0daf-11e8-4f8f-8598-4e89da9c0d84 size: 47GiB capacity: 47GiB capabilities: primary journaled extended_attributes large_files recover ext3 ext2 initialized 
configuration: created=2010-02-16 20:42:29 filesystem=ext3 modified=2010-11-29 17:02:34 mounted=2010-12-06 18:32:50 state=clean *-volume:3 UNCLAIMED description: Extended partition physical id: 4 bus info: scsi@2:0.0.0,4 size: 882GiB capacity: 882GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume UNCLAIMED description: Linux filesystem partition physical id: 5 capacity: 882GiB *-usb:0 description: USB Controller product: SB700/SB800 USB OHCI0 Controller vendor: ATI Technologies Inc physical id: 12 bus info: pci@0000:00:12.0 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:16 memory:f7ffd000-f7ffdfff *-usb:1 description: USB Controller product: SB700 USB OHCI1 Controller vendor: ATI Technologies Inc physical id: 12.1 bus info: pci@0000:00:12.1 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:16 memory:f7ffe000-f7ffefff *-usb:2 description: USB Controller product: SB700/SB800 USB EHCI Controller vendor: ATI Technologies Inc physical id: 12.2 bus info: pci@0000:00:12.2 version: 00 width: 32 bits clock: 66MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=64 resources: irq:17 memory:f7fff800-f7fff8ff *-usb:3 description: USB Controller product: SB700/SB800 USB OHCI0 Controller vendor: ATI Technologies Inc physical id: 13 bus info: pci@0000:00:13.0 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffb000-f7ffbfff *-usb:4 description: USB Controller product: SB700 USB OHCI1 Controller vendor: ATI Technologies Inc physical id: 13.1 bus info: pci@0000:00:13.1 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffc000-f7ffcfff *-usb:5 description: USB Controller product: SB700/SB800 USB EHCI Controller vendor: ATI Technologies Inc physical id: 13.2 bus info: pci@0000:00:13.2 version: 00 width: 32 bits clock: 66MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=64 resources: irq:19 memory:f7fff400-f7fff4ff *-serial UNCLAIMED description: SMBus product: SBx00 SMBus Controller vendor: ATI Technologies Inc physical id: 14 bus info: pci@0000:00:14.0 version: 3c width: 32 bits clock: 66MHz capabilities: ht cap_list configuration: latency=0 *-ide description: IDE interface product: SB700/SB800 IDE Controller vendor: ATI Technologies Inc physical id: 14.1 bus info: pci@0000:00:14.1 logical name: scsi5 version: 00 width: 32 bits clock: 66MHz capabilities: ide msi bus_master cap_list emulated configuration: driver=pata_atiixp latency=64 resources: irq:16 ioport:1f0(size=8) ioport:3f6 ioport:170(size=8) ioport:376 ioport:ff00(size=16) *-cdrom:0 description: DVD reader product: DVDROM DH16NS30 vendor: HL-DT-ST physical id: 0.0.0 bus info: scsi@5:0.0.0 logical name: /dev/cdrom1 logical name: /dev/dvd1 logical name: /dev/scd0 logical name: /dev/sr0 version: 1.00 capabilities: removable audio dvd configuration: ansiversion=5 status=nodisc *-cdrom:1 description: DVD-RAM writer product: DVDRAM GH22NS50 vendor: HL-DT-ST physical id: 0.1.0 bus info: scsi@5:0.1.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/dvdrw logical name: /dev/scd1 logical name: /dev/sr1 version: TN02 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram 
configuration: ansiversion=5 status=nodisc *-multimedia description: Audio device product: SBx00 Azalia (Intel HDA) vendor: ATI Technologies Inc physical id: 14.2 bus info: pci@0000:00:14.2 version: 00 width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: driver=HDA Intel latency=64 resources: irq:16 memory:f7ff4000-f7ff7fff *-isa description: ISA bridge product: SB700/SB800 LPC host controller vendor: ATI Technologies Inc physical id: 14.3 bus info: pci@0000:00:14.3 version: 00 width: 32 bits clock: 66MHz capabilities: isa bus_master configuration: latency=0 *-pci:4 description: PCI bridge product: SBx00 PCI to PCI Bridge vendor: ATI Technologies Inc physical id: 14.4 bus info: pci@0000:00:14.4 version: 00 width: 32 bits clock: 66MHz capabilities: pci subtractive_decode bus_master resources: ioport:e000(size=4096) memory:fbf00000-fbffffff *-network description: Ethernet interface product: 82541PI Gigabit Ethernet Controller vendor: Intel Corporation physical id: 5 bus info: pci@0000:05:05.0 logical name: eth1 version: 05 serial: 00:1b:21:56:f3:60 size: 100MB/s capacity: 1GB/s width: 32 bits clock: 66MHz capabilities: pm pcix bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k6-NAPI duplex=full firmware=N/A ip=192.168.1.2 latency=64 link=yes mingnt=255 multicast=yes port=twisted pair speed=100MB/s resources: irq:20 memory:fbfe0000-fbffffff memory:fbfc0000-fbfdffff ioport:ec00(size=64) memory:fbfa0000-fbfbffff *-usb:6 description: USB Controller product: SB700/SB800 USB OHCI2 Controller vendor: ATI Technologies Inc physical id: 14.5 bus info: pci@0000:00:14.5 version: 00 width: 32 bits clock: 66MHz capabilities: ohci bus_master configuration: driver=ohci_hcd latency=64 resources: irq:18 memory:f7ffa000-f7ffafff *-pci:1 description: Host bridge product: Family 10h Processor HyperTransport Configuration vendor: Advanced Micro Devices [AMD] physical id: 101 bus info: pci@0000:00:18.0 version: 00 width: 32 bits clock: 33MHz *-pci:2 description: Host bridge product: Family 10h Processor Address Map vendor: Advanced Micro Devices [AMD] physical id: 102 bus info: pci@0000:00:18.1 version: 00 width: 32 bits clock: 33MHz *-pci:3 description: Host bridge product: Family 10h Processor DRAM Controller vendor: Advanced Micro Devices [AMD] physical id: 103 bus info: pci@0000:00:18.2 version: 00 width: 32 bits clock: 33MHz *-pci:4 description: Host bridge product: Family 10h Processor Miscellaneous Control vendor: Advanced Micro Devices [AMD] physical id: 104 bus info: pci@0000:00:18.3 version: 00 width: 32 bits clock: 33MHz configuration: driver=k10temp resources: irq:0 *-pci:5 description: Host bridge product: Family 10h Processor Link Control vendor: Advanced Micro Devices [AMD] physical id: 105 bus info: pci@0000:00:18.4 version: 00 width: 32 bits clock: 33MHz *-scsi physical id: 1 bus info: usb@2:3 logical name: scsi8 capabilities: emulated scsi-host configuration: driver=usb-storage *-disk:0 description: SCSI Disk physical id: 0.0.0 bus info: scsi@8:0.0.0 logical name: /dev/sdc *-disk:1 description: SCSI Disk physical id: 0.0.1 bus info: scsi@8:0.0.1 logical name: /dev/sdd *-disk:2 description: SCSI Disk physical id: 0.0.2 bus info: scsi@8:0.0.2 logical name: /dev/sde *-disk:3 description: SCSI Disk physical id: 0.0.3 bus info: scsi@8:0.0.3 logical name: /dev/sdf *-network DISABLED description: Ethernet interface physical id: 1 logical name: 
vboxnet0 serial: 0a:00:27:00:00:00 capabilities: ethernet physical configuration: broadcast=yes multicast=yes
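For freezes with clean memtest and SMART results like these, a few more places are worth checking (a sketch; the package names and log paths are assumptions that vary by distribution):

# Machine-check exceptions and controller errors often precede hard freezes:
dmesg | egrep -i 'mce|machine check|hardware error|ata[0-9.]*: (reset|failed)'

# If the mcelog package is installed, decoded machine checks usually land here:
cat /var/log/mcelog

# Verify that the mixed 1GB and 2GB modules report consistent speed and type,
# since mismatched timings on a populated AM3 board can cause instability:
sudo dmidecode --type memory | egrep 'Size|Speed|Type:'

# Exercise RAM from userspace as a cross-check against memtest86+:
sudo memtester 2048M 2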

    Read the article

  • DirectX works for 64-bit but not 32-bit

    - by dtbarne
I'm trying to play a game (Civilization 5) which previously worked but no longer does. I believe I've narrowed it down to a DirectX issue, because I get an error running dxdiag.exe in 32-bit mode. My goal (at least I believe) is to get Direct3D Acceleration "Enabled" in dxdiag (as it is in the 64-bit dxdiag). A very similar issue is here: http://answers.microsoft.com/en-us/windows/forum/windows_7-gaming/direct3d-acceleration-is-not-available-in-windows/4c345e6e-dc68-e011-8dfc-68b599b31bf5?page=1 The proposed answer, which looks very promising, doesn't seem to work for me. Like other users in that thread, HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Direct3D\Drivers does not have a SoftwareOnly key to change. I even tried manually adding it as a string and as a dword, to no avail. I have an NVIDIA GeForce GT 525M, and before you ask, yes, I've tried updating (also uninstalling and reinstalling) my drivers. I've also tried doing the same with DirectX (and Civilization 5, for that matter). I've been debugging for some 4+ hours now after a full day of work and I've run out of ideas. I'm hoping somebody knows the solution here! :)

Here's what I see when I open dxdiag:

DxDiag has detected that there might have been a problem accessing Direct3D the last time this program was used. Would you like to bypass Direct3D this time?

No - Crash
Yes - Works, but in the Display tab:
DirectDraw Acceleration: Disabled
Direct3D Acceleration: Not Available
AGP Texture Acceleration: Not Available

If I click "Run 64-bit DxDiag", all three are "Enabled". I should also note that I've tried the following steps that Microsoft suggests, but I'm not able to, as the "Change Settings" button is disabled:

Some programs run very slowly—or not at all—unless Microsoft DirectDraw or Direct3D hardware acceleration is turned on. To determine this, click the Display tab, and then under DirectX Features, check to see whether DirectDraw, Direct3D, and AGP Texture Acceleration appear as Enabled. If not, try turning on hardware acceleration. Click to open Screen Resolution. Click Advanced settings. Click the Troubleshoot tab, and then click Change settings. If you're prompted for an administrator password or confirmation, type the password or provide confirmation. Move the Hardware Acceleration slider to Full.

Full dxdiag dump:

------------------ System Information ------------------ Time of this report: 11/8/2012, 23:13:24 Machine name: DTBARNE Operating System: Windows 7 Professional 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.120830-0333) Language: English (Regional Setting: English) System Manufacturer: Dell Inc. System Model: Dell System XPS L502X BIOS: Default System BIOS Processor: Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz (4 CPUs), ~2.5GHz Memory: 8192MB RAM Available OS Memory: 8086MB RAM Page File: 2466MB used, 13704MB available Windows Dir: C:\Windows DirectX Version: DirectX 11 DX Setup Parameters: Not found User DPI Setting: Using System DPI System DPI Setting: 96 DPI (100 percent) DWM DPI Scaling: Disabled DxDiag Version: 6.01.7601.17514 32bit Unicode DxDiag Previously: Crashed in Direct3D (stage 2). Re-running DxDiag with "dontskip" command line parameter or choosing not to bypass information gathering when prompted might result in DxDiag successfully obtaining this information ------------ DxDiag Notes ------------ Display Tab 1: No problems found. Sound Tab 1: No problems found. Sound Tab 2: No problems found. Input Tab: No problems found.
-------------------- DirectX Debug Levels -------------------- Direct3D: 0/4 (retail) DirectDraw: 0/4 (retail) DirectInput: 0/5 (retail) DirectMusic: 0/5 (retail) DirectPlay: 0/9 (retail) DirectSound: 0/5 (retail) DirectShow: 0/6 (retail) --------------- Display Devices --------------- Card name: Intel(R) HD Graphics 3000 Manufacturer: Chip type: DAC type: Device Key: Enum\PCI\VEN_8086&DEV_0126&SUBSYS_04B61028&REV_09 Display Memory: Dedicated Memory: n/a Shared Memory: n/a Current Mode: 1920 x 1080 (32 bit) (60Hz) Monitor Name: Generic PnP Monitor Monitor Model: Monitor Id: Native Mode: Output Type: Driver Name: Driver File Version: () Driver Version: DDI Version: Driver Model: WDDM 1.1 Driver Attributes: Final Retail Driver Date/Size: , 0 bytes WHQL Logo'd: n/a WHQL Date Stamp: n/a Device Identifier: Vendor ID: Device ID: SubSys ID: Revision ID: Driver Strong Name: oem11.inf:IntelGfx.NTamd64.6.0:iSNBM0:8.15.10.2696:pci\ven_8086&dev_0126&subsys_04b61028 Rank Of Driver: 00E60001 Video Accel: Deinterlace Caps: n/a D3D9 Overlay: DXVA-HD: DDraw Status: Disabled D3D Status: Not Available AGP Status: Not Available ------------- Sound Devices ------------- Description: Speakers (High Definition Audio Device) Default Sound Playback: Yes Default Voice Playback: Yes Hardware ID: HDAUDIO\FUNC_01&VEN_10EC&DEV_0665&SUBSYS_102804B6&REV_1000 Manufacturer ID: 1 Product ID: 65535 Type: WDM Driver Name: HdAudio.sys Driver Version: 6.01.7601.17514 (English) Driver Attributes: Final Retail WHQL Logo'd: Yes Date and Size: 11/20/2010 22:23:47, 350208 bytes Other Files: Driver Provider: Microsoft HW Accel Level: Basic Cap Flags: 0xF1F Min/Max Sample Rate: 100, 200000 Static/Strm HW Mix Bufs: 1, 0 Static/Strm HW 3D Bufs: 0, 0 HW Memory: 0 Voice Management: No EAX(tm) 2.0 Listen/Src: No, No I3DL2(tm) Listen/Src: No, No Sensaura(tm) ZoomFX(tm): No Description: Digital Audio (S/PDIF) (High Definition Audio Device) Default Sound Playback: No Default Voice Playback: No Hardware ID: HDAUDIO\FUNC_01&VEN_10EC&DEV_0665&SUBSYS_102804B6&REV_1000 Manufacturer ID: 1 Product ID: 65535 Type: WDM Driver Name: HdAudio.sys Driver Version: 6.01.7601.17514 (English) Driver Attributes: Final Retail WHQL Logo'd: Yes Date and Size: 11/20/2010 22:23:47, 350208 bytes Other Files: Driver Provider: Microsoft HW Accel Level: Basic Cap Flags: 0xF1F Min/Max Sample Rate: 100, 200000 Static/Strm HW Mix Bufs: 1, 0 Static/Strm HW 3D Bufs: 0, 0 HW Memory: 0 Voice Management: No EAX(tm) 2.0 Listen/Src: No, No I3DL2(tm) Listen/Src: No, No Sensaura(tm) ZoomFX(tm): No --------------------- Sound Capture Devices --------------------- Description: Microphone (High Definition Audio Device) Default Sound Capture: Yes Default Voice Capture: Yes Driver Name: HdAudio.sys Driver Version: 6.01.7601.17514 (English) Driver Attributes: Final Retail Date and Size: 11/20/2010 22:23:47, 350208 bytes Cap Flags: 0x1 Format Flags: 0xFFFFF ------------------- DirectInput Devices ------------------- Device Name: Mouse Attached: 1 Controller ID: n/a Vendor/Product ID: n/a FF Driver: n/a Device Name: Keyboard Attached: 1 Controller ID: n/a Vendor/Product ID: n/a FF Driver: n/a Poll w/ Interrupt: No ----------- USB Devices ----------- + USB Root Hub | Vendor/Product ID: 0x8086, 0x1C26 | Matching Device ID: usb\root_hub20 | Service: usbhub | +-+ Generic USB Hub | | Vendor/Product ID: 0x8087, 0x0024 | | Location: Port_#0001.Hub_#0002 | | Matching Device ID: usb\class_09 | | Service: usbhub ---------------- Gameport Devices ---------------- ------------ PS/2 Devices 
------------ + Standard PS/2 Keyboard | Matching Device ID: *pnp0303 | Service: i8042prt | + Terminal Server Keyboard Driver | Matching Device ID: root\rdp_kbd | Upper Filters: kbdclass | Service: TermDD | + Synaptics PS/2 Port TouchPad | Matching Device ID: *dll04b6 | Upper Filters: SynTP | Service: i8042prt | + Terminal Server Mouse Driver | Matching Device ID: root\rdp_mou | Upper Filters: mouclass | Service: TermDD ------------------------ Disk & DVD/CD-ROM Drives ------------------------ Drive: C: Free Space: 26.2 GB Total Space: 122.0 GB File System: NTFS Model: M4-CT128M4SSD2 ATA Device Drive: D: Model: Optiarc DVDRWBD BC-5540H ATA Device Driver: c:\windows\system32\drivers\cdrom.sys, 6.01.7601.17514 (English), , 0 bytes -------------- System Devices -------------- Name: High Definition Audio Controller Device ID: PCI\VEN_8086&DEV_1C20&SUBSYS_04B61028&REV_05\3&11583659&0&D8 Driver: n/a Name: PCI standard host CPU bridge Device ID: PCI\VEN_8086&DEV_0104&SUBSYS_04B61028&REV_09\3&11583659&0&00 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_1C1A&SUBSYS_04B61028&REV_B5\3&11583659&0&E5 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_0101&SUBSYS_20108086&REV_09\3&11583659&0&08 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_1C18&SUBSYS_04B61028&REV_B5\3&11583659&0&E4 Driver: n/a Name: Intel(R) Centrino(R) Advanced-N 6230 Device ID: PCI\VEN_8086&DEV_0091&SUBSYS_52218086&REV_34\4&2634DE8D&0&00E1 Driver: n/a Name: PCI standard ISA bridge Device ID: PCI\VEN_8086&DEV_1C4B&SUBSYS_04B61028&REV_05\3&11583659&0&F8 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_1C16&SUBSYS_04B61028&REV_B5\3&11583659&0&E3 Driver: n/a Name: Realtek PCIe GBE Family Controller Device ID: PCI\VEN_10EC&DEV_8168&SUBSYS_04B61028&REV_06\4&109EAB2F&0&00E5 Driver: n/a Name: Intel(R) Management Engine Interface Device ID: PCI\VEN_8086&DEV_1C3A&SUBSYS_04B61028&REV_04\3&11583659&0&B0 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_1C12&SUBSYS_04B61028&REV_B5\3&11583659&0&E1 Driver: n/a Name: NVIDIA GeForce GT 525M Device ID: PCI\VEN_10DE&DEV_0DF5&SUBSYS_04B61028&REV_A1\4&4DCA75F&0&0008 Driver: n/a Name: Standard Enhanced PCI to USB Host Controller Device ID: PCI\VEN_8086&DEV_1C2D&SUBSYS_04B61028&REV_05\3&11583659&0&D0 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_1C10&SUBSYS_04B61028&REV_B5\3&11583659&0&E0 Driver: n/a Name: Standard Enhanced PCI to USB Host Controller Device ID: PCI\VEN_8086&DEV_1C26&SUBSYS_04B61028&REV_05\3&11583659&0&E8 Driver: n/a Name: Standard AHCI 1.0 Serial ATA Controller Device ID: PCI\VEN_8086&DEV_1C03&SUBSYS_04B61028&REV_05\3&11583659&0&FA Driver: n/a Name: SM Bus Controller Device ID: PCI\VEN_8086&DEV_1C22&SUBSYS_04B61028&REV_05\3&11583659&0&FB Driver: n/a Name: Intel(R) HD Graphics 3000 Device ID: PCI\VEN_8086&DEV_0126&SUBSYS_04B61028&REV_09\3&11583659&0&10 Driver: n/a Name: Renesas Electronics USB 3.0 Host Controller Device ID: PCI\VEN_1033&DEV_0194&SUBSYS_04B61028&REV_04\4&3494AC3A&0&00E3 Driver: n/a ------------------ DirectShow Filters ------------------ DirectShow Filters: WMAudio Decoder DMO,0x00800800,1,1,WMADMOD.DLL,6.01.7601.17514 WMAPro over S/PDIF DMO,0x00600800,1,1,WMADMOD.DLL,6.01.7601.17514 WMSpeech Decoder DMO,0x00600800,1,1,WMSPDMOD.DLL,6.01.7601.17514 MP3 Decoder DMO,0x00600800,1,1,mp3dmod.dll,6.01.7600.16385 Mpeg4s Decoder DMO,0x00800001,1,1,mp4sdecd.dll,6.01.7600.16385 WMV Screen decoder 
DMO,0x00600800,1,1,wmvsdecd.dll,6.01.7601.17514 WMVideo Decoder DMO,0x00800001,1,1,wmvdecod.dll,6.01.7601.17514 Mpeg43 Decoder DMO,0x00800001,1,1,mp43decd.dll,6.01.7600.16385 Mpeg4 Decoder DMO,0x00800001,1,1,mpg4decd.dll,6.01.7600.16385 DV Muxer,0x00400000,0,0,qdv.dll,6.06.7601.17514 Color Space Converter,0x00400001,1,1,quartz.dll,6.06.7601.17713 WM ASF Reader,0x00400000,0,0,qasf.dll,12.00.7601.17514 Screen Capture filter,0x00200000,0,1,wmpsrcwp.dll,12.00.7601.17514 AVI Splitter,0x00600000,1,1,quartz.dll,6.06.7601.17713 VGA 16 Color Ditherer,0x00400000,1,1,quartz.dll,6.06.7601.17713 SBE2MediaTypeProfile,0x00200000,0,0,sbe.dll,6.06.7601.17528 Microsoft DTV-DVD Video Decoder,0x005fffff,2,4,msmpeg2vdec.dll,6.01.7140.0000 AC3 Parser Filter,0x00600000,1,1,mpg2splt.ax,6.06.7601.17528 StreamBufferSink,0x00200000,0,0,sbe.dll,6.06.7601.17528 MJPEG Decompressor,0x00600000,1,1,quartz.dll,6.06.7601.17713 MPEG-I Stream Splitter,0x00600000,1,2,quartz.dll,6.06.7601.17713 SAMI (CC) Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713 VBI Codec,0x00600000,1,4,VBICodec.ax,6.06.7601.17514 MPEG-2 Splitter,0x005fffff,1,0,mpg2splt.ax,6.06.7601.17528 Closed Captions Analysis Filter,0x00200000,2,5,cca.dll,6.06.7601.17514 SBE2FileScan,0x00200000,0,0,sbe.dll,6.06.7601.17528 Microsoft MPEG-2 Video Encoder,0x00200000,1,1,msmpeg2enc.dll,6.01.7601.17514 Internal Script Command Renderer,0x00800001,1,0,quartz.dll,6.06.7601.17713 MPEG Audio Decoder,0x03680001,1,1,quartz.dll,6.06.7601.17713 DV Splitter,0x00600000,1,2,qdv.dll,6.06.7601.17514 Video Mixing Renderer 9,0x00200000,1,0,quartz.dll,6.06.7601.17713 Microsoft MPEG-2 Encoder,0x00200000,2,1,msmpeg2enc.dll,6.01.7601.17514 ACM Wrapper,0x00600000,1,1,quartz.dll,6.06.7601.17713 Video Renderer,0x00800001,1,0,quartz.dll,6.06.7601.17713 MPEG-2 Video Stream Analyzer,0x00200000,0,0,sbe.dll,6.06.7601.17528 Line 21 Decoder,0x00600000,1,1,qdvd.dll,6.06.7601.17835 Video Port Manager,0x00600000,2,1,quartz.dll,6.06.7601.17713 Video Renderer,0x00400000,1,0,quartz.dll,6.06.7601.17713 VPS Decoder,0x00200000,0,0,WSTPager.ax,6.06.7601.17514 WM ASF Writer,0x00400000,0,0,qasf.dll,12.00.7601.17514 VBI Surface Allocator,0x00600000,1,1,vbisurf.ax,6.01.7601.17514 File writer,0x00200000,1,0,qcap.dll,6.06.7601.17514 iTV Data Sink,0x00600000,1,0,itvdata.dll,6.06.7601.17514 iTV Data Capture filter,0x00600000,1,1,itvdata.dll,6.06.7601.17514 DVD Navigator,0x00200000,0,3,qdvd.dll,6.06.7601.17835 Overlay Mixer2,0x00200000,1,1,qdvd.dll,6.06.7601.17835 AVI Draw,0x00600064,9,1,quartz.dll,6.06.7601.17713 RDP DShow Redirection Filter,0xffffffff,1,0,DShowRdpFilter.dll, Microsoft MPEG-2 Audio Encoder,0x00200000,1,1,msmpeg2enc.dll,6.01.7601.17514 WST Pager,0x00200000,1,1,WSTPager.ax,6.06.7601.17514 MPEG-2 Demultiplexer,0x00600000,1,1,mpg2splt.ax,6.06.7601.17528 DV Video Decoder,0x00800000,1,1,qdv.dll,6.06.7601.17514 SampleGrabber,0x00200000,1,1,qedit.dll,6.06.7601.17514 Null Renderer,0x00200000,1,0,qedit.dll,6.06.7601.17514 MPEG-2 Sections and Tables,0x005fffff,1,0,Mpeg2Data.ax,6.06.7601.17514 Microsoft AC3 Encoder,0x00200000,1,1,msac3enc.dll,6.01.7601.17514 StreamBufferSource,0x00200000,0,0,sbe.dll,6.06.7601.17528 Smart Tee,0x00200000,1,2,qcap.dll,6.06.7601.17514 Overlay Mixer,0x00200000,0,0,qdvd.dll,6.06.7601.17835 AVI Decompressor,0x00600000,1,1,quartz.dll,6.06.7601.17713 AVI/WAV File Source,0x00400000,0,2,quartz.dll,6.06.7601.17713 Wave Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713 MIDI Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713 Multi-file Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713 File stream 
renderer,0x00400000,1,1,quartz.dll,6.06.7601.17713 Microsoft DTV-DVD Audio Decoder,0x005fffff,1,1,msmpeg2adec.dll,6.01.7140.0000 StreamBufferSink2,0x00200000,0,0,sbe.dll,6.06.7601.17528 AVI Mux,0x00200000,1,0,qcap.dll,6.06.7601.17514 Line 21 Decoder 2,0x00600002,1,1,quartz.dll,6.06.7601.17713 File Source (Async.),0x00400000,0,1,quartz.dll,6.06.7601.17713 File Source (URL),0x00400000,0,1,quartz.dll,6.06.7601.17713 Infinite Pin Tee Filter,0x00200000,1,1,qcap.dll,6.06.7601.17514 Enhanced Video Renderer,0x00200000,1,0,evr.dll,6.01.7601.17514 BDA MPEG2 Transport Information Filter,0x00200000,2,0,psisrndr.ax,6.06.7601.17669 MPEG Video Decoder,0x40000001,1,1,quartz.dll,6.06.7601.17713 WDM Streaming Tee/Splitter Devices: Tee/Sink-to-Sink Converter,0x00200000,1,1,ksproxy.ax,6.01.7601.17514 Video Compressors: WMVideo8 Encoder DMO,0x00600800,1,1,wmvxencd.dll,6.01.7600.16385 WMVideo9 Encoder DMO,0x00600800,1,1,wmvencod.dll,6.01.7600.16385 MSScreen 9 encoder DMO,0x00600800,1,1,wmvsencd.dll,6.01.7600.16385 DV Video Encoder,0x00200000,0,0,qdv.dll,6.06.7601.17514 MJPEG Compressor,0x00200000,0,0,quartz.dll,6.06.7601.17713 Cinepak Codec by Radius,0x00200000,1,1,qcap.dll,6.06.7601.17514 Intel IYUV codec,0x00200000,1,1,qcap.dll,6.06.7601.17514 Intel IYUV codec,0x00200000,1,1,qcap.dll,6.06.7601.17514 Microsoft RLE,0x00200000,1,1,qcap.dll,6.06.7601.17514 Microsoft Video 1,0x00200000,1,1,qcap.dll,6.06.7601.17514 Audio Compressors: WM Speech Encoder DMO,0x00600800,1,1,WMSPDMOE.DLL,6.01.7600.16385 WMAudio Encoder DMO,0x00600800,1,1,WMADMOE.DLL,6.01.7600.16385 IMA ADPCM,0x00200000,1,1,quartz.dll,6.06.7601.17713 PCM,0x00200000,1,1,quartz.dll,6.06.7601.17713 Microsoft ADPCM,0x00200000,1,1,quartz.dll,6.06.7601.17713 GSM 6.10,0x00200000,1,1,quartz.dll,6.06.7601.17713 CCITT A-Law,0x00200000,1,1,quartz.dll,6.06.7601.17713 CCITT u-Law,0x00200000,1,1,quartz.dll,6.06.7601.17713 MPEG Layer-3,0x00200000,1,1,quartz.dll,6.06.7601.17713 Audio Capture Sources: Microphone (High Definition Aud,0x00200000,0,0,qcap.dll,6.06.7601.17514 PBDA CP Filters: PBDA DTFilter,0x00600000,1,1,CPFilters.dll,6.06.7601.17528 PBDA ETFilter,0x00200000,0,0,CPFilters.dll,6.06.7601.17528 PBDA PTFilter,0x00200000,0,0,CPFilters.dll,6.06.7601.17528 Midi Renderers: Default MidiOut Device,0x00800000,1,0,quartz.dll,6.06.7601.17713 Microsoft GS Wavetable Synth,0x00200000,1,0,quartz.dll,6.06.7601.17713 WDM Streaming Capture Devices: HD Audio Microphone 2,0x00200000,1,1,ksproxy.ax,6.01.7601.17514 Integrated Webcam,0x00200000,1,2,ksproxy.ax,6.01.7601.17514 WDM Streaming Rendering Devices: HD Audio Headphone/Speakers,0x00200000,1,1,ksproxy.ax,6.01.7601.17514 HD Audio SPDIF out,0x00200000,1,1,ksproxy.ax,6.01.7601.17514 BDA Network Providers: Microsoft ATSC Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514 Microsoft DVBC Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514 Microsoft DVBS Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514 Microsoft DVBT Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514 Microsoft Network Provider,0x00200000,0,1,MSNP.ax,6.06.7601.17514 Video Capture Sources: Integrated Webcam,0x00200000,1,2,ksproxy.ax,6.01.7601.17514 Multi-Instance Capable VBI Codecs: VBI Codec,0x00600000,1,4,VBICodec.ax,6.06.7601.17514 BDA Transport Information Renderers: BDA MPEG2 Transport Information Filter,0x00600000,2,0,psisrndr.ax,6.06.7601.17669 MPEG-2 Sections and Tables,0x00600000,1,0,Mpeg2Data.ax,6.06.7601.17514 BDA CP/CA Filters: Decrypt/Tag,0x00600000,1,1,EncDec.dll,6.06.7601.17708 
Encrypt/Tag,0x00200000,0,0,EncDec.dll,6.06.7601.17708 PTFilter,0x00200000,0,0,EncDec.dll,6.06.7601.17708 XDS Codec,0x00200000,0,0,EncDec.dll,6.06.7601.17708 WDM Streaming Communication Transforms: Tee/Sink-to-Sink Converter,0x00200000,1,1,ksproxy.ax,6.01.7601.17514 Audio Renderers: Speakers (High Definition Audio,0x00200000,1,0,quartz.dll,6.06.7601.17713 Default DirectSound Device,0x00800000,1,0,quartz.dll,6.06.7601.17713 Default WaveOut Device,0x00200000,1,0,quartz.dll,6.06.7601.17713 Digital Audio (S/PDIF) (High De,0x00200000,1,0,quartz.dll,6.06.7601.17713 DirectSound: Digital Audio (S/PDIF) (High Definition Audio Device),0x00200000,1,0,quartz.dll,6.06.7601.17713 DirectSound: Speakers (High Definition Audio Device),0x00200000,1,0,quartz.dll,6.06.7601.17713 --------------- EVR Power Information --------------- Current Setting: {651288E5-A7ED-4076-A96B-6CC62D848FE1} (Balanced) Quality Flags: 2576 Enabled: Force throttling Allow half deinterlace Allow scaling Decode Power Usage: 100 Balanced Flags: 1424 Enabled: Force throttling Allow batching Force half deinterlace Force scaling Decode Power Usage: 50 PowerFlags: 1424 Enabled: Force throttling Allow batching Force half deinterlace Force scaling Decode Power Usage: 0

    Read the article

  • video and file caching with squid lusca?

    - by moon
Hello all, I have configured Squid (Lusca) on Ubuntu 11.04 and also configured video caching, but the problem is that Squid will not cache videos longer than about 2 minutes, and caches files only up to about 5 MB in size. Here is my config; please guide me on how I can cache long videos and large files with Squid: > # PORT and Transparent Option http_port 8080 transparent server_http11 on icp_port 0 > > # Cache Directory , modify it according to your system. > # but first create directory in root by mkdir /cache1 > # and then issue this command chown proxy:proxy /cache1 > # [for ubuntu user is proxy, in Fedora user is SQUID] > # I have set 500 MB for caching reserved just for caching , > # adjust it according to your need. > # My recommendation is to have one cache_dir per drive. zzz > > #store_dir_select_algorithm round-robin cache_dir aufs /cache1 500 16 256 cache_replacement_policy heap LFUDA memory_replacement_policy heap > LFUDA > > # If you want to enable DATE time n SQUID Logs,use following emulate_httpd_log on logformat squid %tl %6tr %>a %Ss/%03Hs %<st %rm > %ru %un %Sh/%<A %mt log_fqdn off > > # How much days to keep users access web logs > # You need to rotate your log files with a cron job. For example: > # 0 0 * * * /usr/local/squid/bin/squid -k rotate logfile_rotate 14 debug_options ALL,1 cache_access_log /var/log/squid/access.log > cache_log /var/log/squid/cache.log cache_store_log > /var/log/squid/store.log > > #I used DNSAMSQ service for fast dns resolving > #so install by using "apt-get install dnsmasq" first dns_nameservers 127.0.0.1 101.11.11.5 ftp_user anonymous@ ftp_list_width 32 ftp_passive on ftp_sanitycheck on > > #ACL Section acl all src 0.0.0.0/0.0.0.0 acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl > to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 563 # https, snews > acl SSL_ports port 873 # rsync acl Safe_ports port 80 # http acl > Safe_ports port 21 # ftp acl Safe_ports port 443 563 # https, snews > acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl > Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port > 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port > 591 # filemaker acl Safe_ports port 777 # multiling http acl > Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl > Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method > CONNECT http_access allow manager localhost http_access deny manager > http_access allow purge localhost http_access deny purge http_access > deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow > localhost http_access allow all http_reply_access allow all icp_access > allow all > > #========================== > # Administrative Parameters > #========================== > > # I used UBUNTU so user is proxy, in FEDORA you may use use squid cache_effective_user proxy cache_effective_group proxy cache_mgr > [email protected] visible_hostname proxy.aacable.net unique_hostname > [email protected] > > #============= > # ACCELERATOR > #============= half_closed_clients off quick_abort_min 0 KB quick_abort_max 0 KB vary_ignore_expire on reload_into_ims on log_fqdn > off memory_pools off > > # If you want to hide your proxy machine from being detected at various site use following via off > > #============================================ > # OPTIONS WHICH AFFECT THE CACHE SIZE / zaib > #============================================ > # If you have 4GB memory in Squid box, we will use formula of 1/3 > # You can adjust it 
according to your need. IF squid is taking too much of RAM > # Then decrease it to 128 MB or even less. > > cache_mem 256 MB minimum_object_size 512 bytes maximum_object_size 500 > MB maximum_object_size_in_memory 128 KB > > #============================================================$ > # SNMP , if you want to generate graphs for SQUID via MRTG > #============================================================$ > #acl snmppublic snmp_community gl > #snmp_port 3401 > #snmp_access allow snmppublic all > #snmp_access allow all > > #============================================================ > # ZPH , To enable cache content to be delivered at full lan speed, > # To bypass the queue at MT. > #============================================================ tcp_outgoing_tos 0x30 all zph_mode tos zph_local 0x30 zph_parent 0 > zph_option 136 > > # Caching Youtube acl videocache_allow_url url_regex -i \.youtube\.com\/get_video\? acl videocache_allow_url url_regex -i > \.youtube\.com\/videoplayback \.youtube\.com\/videoplay > \.youtube\.com\/get_video\? acl videocache_allow_url url_regex -i > \.youtube\.[a-z][a-z]\/videoplayback \.youtube\.[a-z][a-z]\/videoplay > \.youtube\.[a-z][a-z]\/get_video\? acl videocache_allow_url url_regex > -i \.googlevideo\.com\/videoplayback \.googlevideo\.com\/videoplay \.googlevideo\.com\/get_video\? acl videocache_allow_url url_regex -i > \.google\.com\/videoplayback \.google\.com\/videoplay > \.google\.com\/get_video\? acl videocache_allow_url url_regex -i > \.google\.[a-z][a-z]\/videoplayback \.google\.[a-z][a-z]\/videoplay > \.google\.[a-z][a-z]\/get_video\? acl videocache_allow_url url_regex > -i proxy[a-z0-9\-][a-z0-9][a-z0-9][a-z0-9]?\.dailymotion\.com\/ acl videocache_allow_url url_regex -i vid\.akm\.dailymotion\.com\/ acl > videocache_allow_url url_regex -i > [a-z0-9][0-9a-z][0-9a-z]?[0-9a-z]?[0-9a-z]?\.xtube\.com\/(.*)flv acl > videocache_allow_url url_regex -i \.vimeo\.com\/(.*)\.(flv|mp4) acl > videocache_allow_url url_regex -i > va\.wrzuta\.pl\/wa[0-9][0-9][0-9][0-9]? 
acl videocache_allow_url > url_regex -i \.youporn\.com\/(.*)\.flv acl videocache_allow_url > url_regex -i \.msn\.com\.edgesuite\.net\/(.*)\.flv acl > videocache_allow_url url_regex -i \.tube8\.com\/(.*)\.(flv|3gp) acl > videocache_allow_url url_regex -i \.mais\.uol\.com\.br\/(.*)\.flv acl > videocache_allow_url url_regex -i > \.blip\.tv\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v) acl > videocache_allow_url url_regex -i > \.apniisp\.com\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v) acl > videocache_allow_url url_regex -i \.break\.com\/(.*)\.(flv|mp4) acl > videocache_allow_url url_regex -i redtube\.com\/(.*)\.flv acl > videocache_allow_dom dstdomain .mccont.com .metacafe.com > .cdn.dailymotion.com acl videocache_deny_dom dstdomain > .download.youporn.com .static.blip.tv acl dontrewrite url_regex > redbot\.org \.php acl getmethod method GET > > storeurl_access deny dontrewrite storeurl_access deny !getmethod > storeurl_access deny videocache_deny_dom storeurl_access allow > videocache_allow_url storeurl_access allow videocache_allow_dom > storeurl_access deny all > > storeurl_rewrite_program /etc/squid/storeurl.pl > storeurl_rewrite_children 7 storeurl_rewrite_concurrency 10 > > acl store_rewrite_list urlpath_regex -i > \/(get_video\?|videodownload\?|videoplayback.*id) acl > store_rewrite_list urlpath_regex -i \.flv$ \.mp3$ \.mp4$ \.swf$ \ > storeurl_access allow store_rewrite_list storeurl_access deny all > > refresh_pattern -i \.flv$ 10080 80% 10080 override-expire > override-lastmod reload-into-ims ignore-reload ignore-no-cache > ignore-private ignore-auth refresh_pattern -i \.mp3$ 10080 80% 10080 > override-expire override-lastmod reload-into-ims ignore-reload > ignore-no-cache ignore-private ignore-auth refresh_pattern -i \.mp4$ > 10080 80% 10080 override-expire override-lastmod reload-into-ims > ignore-reload ignore-no-cache ignore-private ignore-auth > refresh_pattern -i \.swf$ 10080 80% 10080 override-expire > override-lastmod reload-into-ims ignore-reload ignore-no-cache > ignore-private ignore-auth refresh_pattern -i \.gif$ 10080 80% 10080 > override-expire override-lastmod reload-into-ims ignore-reload > ignore-no-cache ignore-private ignore-auth refresh_pattern -i \.jpg$ > 10080 80% 10080 override-expire override-lastmod reload-into-ims > ignore-reload ignore-no-cache ignore-private ignore-auth > refresh_pattern -i \.jpeg$ 10080 80% 10080 override-expire > override-lastmod reload-into-ims ignore-reload ignore-no-cache > ignore-private ignore-auth refresh_pattern -i \.exe$ 10080 80% 10080 > override-expire override-lastmod reload-into-ims ignore-reload > ignore-no-cache ignore-private ignore-auth > > # 1 year = 525600 mins, 1 month = 10080 mins, 1 day = 1440 refresh_pattern (get_video\?|videoplayback\?|videodownload\?|\.flv?) > 10080 80% 10080 ignore-no-cache ignore-private override-expire > override-lastmod reload-into-ims refresh_pattern > (get_video\?|videoplayback\?id|videoplayback.*id|videodownload\?|\.flv?) > 10080 80% 10080 ignore-no-cache ignore-private override-expire > override-lastmod reload-into-ims refresh_pattern \.(ico|video-stats) > 10080 80% 10080 override-expire ignore-reload ignore-no-cache > ignore-private ignore-auth override-lastmod negative-ttl=10080 > refresh_pattern \.etology\? 10080 > 80% 10080 override-expire ignore-reload ignore-no-cache > refresh_pattern galleries\.video(\?|sz) 10080 > 80% 10080 override-expire ignore-reload ignore-no-cache > refresh_pattern brazzers\? 
10080 > 80% 10080 override-expire ignore-reload ignore-no-cache > refresh_pattern \.adtology\? 10080 > 80% 10080 override-expire ignore-reload ignore-no-cache > refresh_pattern > ^.*(utm\.gif|ads\?|rmxads\.com|ad\.z5x\.net|bh\.contextweb\.com|bstats\.adbrite\.com|a1\.interclick\.com|ad\.trafficmp\.com|ads\.cubics\.com|ad\.xtendmedia\.com|\.googlesyndication\.com|advertising\.com|yieldmanager|game-advertising\.com|pixel\.quantserve\.com|adperium\.com|doubleclick\.net|adserving\.cpxinteractive\.com|syndication\.com|media.fastclick.net).* > 10080 20% 10080 ignore-no-cache ignore-private override-expire > ignore-reload ignore-auth negative-ttl=40320 max-stale=10 > refresh_pattern ^.*safebrowsing.*google 10080 80% 10080 > override-expire ignore-reload ignore-no-cache ignore-private > ignore-auth negative-ttl=10080 refresh_pattern > ^http://((cbk|mt|khm|mlt)[0-9]?)\.google\.co(m|\.uk) 10080 80% > 10080 override-expire ignore-reload ignore-private negative-ttl=10080 > refresh_pattern ytimg\.com.*\.jpg > 10080 80% 10080 override-expire ignore-reload refresh_pattern > images\.friendster\.com.*\.(png|gif) 10080 80% > 10080 override-expire ignore-reload refresh_pattern garena\.com > 10080 80% 10080 override-expire reload-into-ims refresh_pattern > photobucket.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% > 10080 override-expire ignore-reload refresh_pattern > vid\.akm\.dailymotion\.com.*\.on2\? 10080 80% > 10080 ignore-no-cache override-expire override-lastmod refresh_pattern > mediafire.com\/images.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% > 10080 reload-into-ims override-expire ignore-private refresh_pattern > ^http:\/\/images|pics|thumbs[0-9]\. 10080 80% > 10080 reload-into-ims ignore-no-cache ignore-reload override-expire > refresh_pattern ^http:\/\/www.onemanga.com.*\/ > 10080 80% 10080 reload-into-ims ignore-no-cache ignore-reload > override-expire refresh_pattern > ^http://v\.okezone\.com/get_video\/([a-zA-Z0-9]) 10080 80% 10080 > override-expire ignore-reload ignore-no-cache ignore-private > ignore-auth override-lastmod negative-ttl=10080 > > #images facebook refresh_pattern -i \.facebook.com.*\.(jpg|png|gif) 10080 80% 10080 ignore-reload override-expire ignore-no-cache > refresh_pattern -i \.fbcdn.net.*\.(jpg|gif|png|swf|mp3) > 10080 80% 10080 ignore-reload override-expire ignore-no-cache > refresh_pattern static\.ak\.fbcdn\.net*\.(jpg|gif|png) > 10080 80% 10080 ignore-reload override-expire ignore-no-cache > refresh_pattern ^http:\/\/profile\.ak\.fbcdn.net*\.(jpg|gif|png) > 10080 80% 10080 ignore-reload override-expire ignore-no-cache > > #All File refresh_pattern -i \.(3gp|7z|ace|asx|bin|deb|divx|dvr-ms|ram|rpm|exe|inc|cab|qt) > 10080 80% 10080 ignore-no-cache override-expire override-lastmod > reload-into-ims refresh_pattern -i > \.(rar|jar|gz|tgz|bz2|iso|m1v|m2(v|p)|mo(d|v)|arj|lha|lzh|zip|tar) > 10080 80% 10080 ignore-no-cache override-expire override-lastmod > reload-into-ims refresh_pattern -i > \.(jp(e?g|e|2)|gif|pn[pg]|bm?|tiff?|ico|swf|dat|ad|txt|dll) > 10080 80% 10080 ignore-no-cache override-expire override-lastmod > reload-into-ims refresh_pattern -i > \.(avi|ac4|mp(e?g|a|e|1|2|3|4)|mk(a|v)|ms(i|u|p)|og(x|v|a|g)|rm|r(a|p)m|snd|vob) > 10080 80% 10080 ignore-no-cache override-expire override-lastmod > reload-into-ims refresh_pattern -i > \.(pp(t?x)|s|t)|pdf|rtf|wax|wm(a|v)|wmx|wpl|cb(r|z|t)|xl(s?x)|do(c?x)|flv|x-flv) > 10080 80% 10080 ignore-no-cache override-expire override-lastmod > reload-into-ims > > refresh_pattern -i (/cgi-bin/|\?) 
0 0% 0 refresh_pattern ^gopher: > 1440 0% 1440 refresh_pattern ^ftp: 10080 95% 10080 > override-lastmod reload-into-ims refresh_pattern . 1440 > 95% 10080 override-lastmod reload-into-ims
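One detail worth checking in a config laid out like the one above: on squid-2.7/Lusca builds, the cache_dir line generally captures the maximum_object_size in force at the point it is parsed, and here cache_dir appears long before maximum_object_size 500 MB, so the cache directory may still be running with the old 4 MB default, which would roughly match the observed ~5 MB cutoff. A quick way to see what Squid actually decides for large objects (a sketch; the URL is only a placeholder, and the log paths are the ones configured above):

# Objects refused for the cache are logged as RELEASE, accepted ones as SWAPOUT:
grep -c RELEASE /var/log/squid/store.log
grep -c SWAPOUT /var/log/squid/store.log

# Fetch a large test object twice through the proxy; a HIT in the response
# headers on the second pass means it was cached (example.com is a placeholder):
squidclient -h 127.0.0.1 -p 8080 -m GET http://example.com/long_video.flv > /dev/null
squidclient -h 127.0.0.1 -p 8080 -m GET http://example.com/long_video.flv | head -20

If the large objects keep showing up as RELEASE, moving the maximum_object_size line above cache_dir and restarting Squid would be the first thing to try.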

    Read the article

  • MySQL on Linux out of memory

    - by Sunrays
OS: Redhat Enterprise Linux Server Release 5.3 (Tikanga). Architecture: Intel Xeon, 64-bit. MySQL Server 5.5.20, Enterprise Server advanced edition. Application: Liferay. My database size is 200MB. RAM is 64GB. The memory consumption increases gradually and we run out of memory. Only rebooting releases all the memory, but then the memory consumption starts growing again and reaches 63-64GB in less than a day. Parameter details:

key_buffer_size=16M
innodb_buffer_pool_size=3GB
innodb_buffer_pool_instances=3
max_connections=1000
innodb_flush_method=O_DIRECT
innodb_change_buffering=inserts
read_buffer_size=2M
read_rnd_buffer_size=256K

It's a serious production server issue that I am facing. What could be the reason behind this, and how can I resolve it? This is the report from 2pm today, after Linux was rebooted yesterday at around 10pm.

Output of free -m total used free shared buffers cached Mem: 64455 22053 42402 0 1544 1164 -/+ buffers/cache: 19343 45112 Swap: 74998 0 74998

Output of vmstat 2 5 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 43423976 1583700 1086616 0 0 1 173 22 27 1 1 98 0 0 2 0 0 43280200 1583712 1228636 0 0 0 146 1265 491 2 2 96 1 0 0 0 0 43421940 1583724 1087160 0 0 0 138 1469 738 2 1 97 0 0 1 0 0 43422604 1583728 1086736 0 0 0 5816 1615 934 1 1 97 0 0 0 0 0 43422372 1583732 1086752 0 0 0 2784 1323 545 2 1 97 0 0

Output of top -n 3 -b top - 14:16:22 up 16:32, 5 users, load average: 0.79, 0.77, 0.93 Tasks: 345 total, 1 running, 344 sleeping, 0 stopped, 0 zombie Cpu(s): 1.0%us, 0.9%sy, 0.0%ni, 98.1%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 66002772k total, 22656292k used, 43346480k free, 1582152k buffers Swap: 76798724k total, 0k used, 76798724k free, 1163616k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6434 mysql 15 0 4095m 841m 5500 S 113.5 1.3 426:53.69 mysqld 1 root 15 0 10344 680 572 S 0.0 0.0 0:03.09 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 14 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/4 15 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/4 16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/5 18 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5 19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5 20 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/6 21 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/6 22 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/6 23 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/7 24 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/7 25 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/7 26 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/8 27 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/8 28 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/8 29 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/9 30 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/9 31 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/9 32 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/10 33 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/10 34 
root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/10 35 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/11 36 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/11 37 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/11 38 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/12 39 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/12 40 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/12 41 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/13 42 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/13 43 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/13 44 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/14 45 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/14 46 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/14 47 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/15 48 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/15 49 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/15 50 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/16 51 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/16 52 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/16 53 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/17 54 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/17 55 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/17 56 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/18 57 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/18 58 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/18 59 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/19 60 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/19 61 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/19 62 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/20 63 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/20 64 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/20 65 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/21 66 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/21 67 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/21 68 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/22 69 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/22 70 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/22 71 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/23 72 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/23 73 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/23 74 root 10 -5 0 0 0 S 0.0 0.0 0:00.02 events/0 75 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/1 76 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/2 77 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/3 78 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/4 79 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/5 80 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/6 81 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/7 82 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/8 83 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/9 84 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/10 85 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/11 86 root 10 -5 0 0 0 S 0.0 0.0 0:00.01 events/12 87 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/13 88 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/14 89 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/15 90 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/16 91 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/17 92 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/18 93 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/19 94 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/20 95 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/21 96 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/22 97 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/23 98 root 10 -5 0 0 0 S 0.0 0.0 0:00.01 khelper 615 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kthread 643 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/0 644 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/1 645 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/2 646 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/3 647 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/4 648 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/5 
649 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/6 650 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/7 651 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/8 652 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/9 653 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/10 654 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/11 655 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/12 656 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/13 657 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/14 658 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/15 659 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/16 660 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/17 661 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/18 662 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/19 663 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/20 664 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/21 665 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/22 666 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/23 667 root 17 -5 0 0 0 S 0.0 0.0 0:00.00 kacpid 840 root 17 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/0 841 root 18 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/1 842 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/2 843 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/3 844 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/4 845 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/5 846 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/6 847 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/7 848 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/8 849 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/9 850 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/10 851 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/11 852 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/12 853 root 16 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/13 854 root 17 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/14 855 root 18 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/15 856 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/16 857 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/17 858 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/18 859 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/19 860 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/20 861 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/21 862 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/22 863 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/23 866 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 khubd 868 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kseriod 1118 root 23 0 0 0 0 S 0.0 0.0 0:00.00 pdflush 1119 root 15 0 0 0 0 S 0.0 0.0 0:00.11 pdflush 1120 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 kswapd0 1121 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 kswapd1 1122 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 aio/0 1123 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 aio/1 1124 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 aio/2 1125 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/3 1126 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/4 1127 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/5 1128 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/6 1129 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/7 1130 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/8 1131 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/9 1132 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/10 1133 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/11 1134 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/12 1135 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/13 1136 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/14 1137 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/15 1138 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/16 1139 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/17 1140 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/18 1141 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/19 1142 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/20 1143 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/21 1144 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/22 1145 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 aio/23 1308 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 
kpsmoused 1566 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/0 1567 root 10 -5 0 0 0 S 0.0 0.0 0:00.27 ata/1 1568 root 10 -5 0 0 0 S 0.0 0.0 0:02.39 ata/2 1569 root 10 -5 0 0 0 S 0.0 0.0 0:00.07 ata/3 1570 root 10 -5 0 0 0 S 0.0 0.0 0:00.72 ata/4 1571 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/5 1572 root 10 -5 0 0 0 S 0.0 0.0 0:00.15 ata/6 1573 root 10 -5 0 0 0 S 0.0 0.0 0:00.07 ata/7 1574 root 10 -5 0 0 0 S 0.0 0.0 0:00.06 ata/8 1575 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/9 1576 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/10 1577 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/11 1578 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/12 1579 root 10 -5 0 0 0 S 0.0 0.0 0:00.14 ata/13 1580 root 10 -5 0 0 0 S 0.0 0.0 0:01.56 ata/14 1581 root 10 -5 0 0 0 S 0.0 0.0 0:00.04 ata/15 1582 root 10 -5 0 0 0 S 0.0 0.0 0:00.40 ata/16 1583 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/17 1584 root 10 -5 0 0 0 S 0.0 0.0 0:00.11 ata/18 1585 root 10 -5 0 0 0 S 0.0 0.0 0:00.03 ata/19 1586 root 10 -5 0 0 0 S 0.0 0.0 0:00.02 ata/20 1587 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/21 1588 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/22 1589 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/23 1590 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 ata_aux 1616 root 10 -5 0 0 0 S 0.0 0.0 0:17.20 scsi_eh_0 1617 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_1 1668 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_2 1669 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 qla2xxx_2_dpc 1670 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_wq_2 1671 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 fc_wq_2 1672 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 fc_dl_2 1673 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_3 1674 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 qla2xxx_3_dpc 1675 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 scsi_wq_3 1676 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 fc_wq_3 1677 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 fc_dl_3 1728 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 kstriped 1829 root 10 -5 0 0 0 S 0.0 0.0 1:09.14 kjournald 1857 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kauditd 1891 root 11 -4 13008 1188 388 S 0.0 0.0 0:00.40 udevd 4555 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/0 4556 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/1 4557 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/2 4558 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/3 4559 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/4 4560 root 16 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/5 4561 root 16 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/6 4562 root 17 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/7 4563 root 18 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/8 4564 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/9 4565 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/10 4566 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/11 4567 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/12 4568 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/13 4569 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/14 4570 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/15 4571 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/16 4572 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/17 4573 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/18 4574 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/19 4575 root 16 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/20 4576 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/21 4577 root 16 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/22 4578 root 16 -5 0 0 0 S 0.0 0.0 0:00.00 kmpathd/23 4579 root 18 -5 0 0 0 S 0.0 0.0 0:00.00 kmpath_handlerd 4734 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kjournald 4736 root 10 -5 0 0 0 S 0.0 0.0 0:04.82 kjournald 4744 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kjournald 5238 root RT 0 87584 3648 2768 S 0.0 0.0 0:03.60 multipathd 5537 root 11 -4 27328 812 580 S 0.0 0.0 0:00.14 auditd 5539 root 7 -8 81804 768 616 S 0.0 0.0 
0:00.04 audispd 5564 root 15 0 5904 632 512 S 0.0 0.0 0:00.10 syslogd 5567 root 15 0 3800 432 344 S 0.0 0.0 0:00.01 klogd 5579 root 18 0 10728 384 244 S 0.0 0.0 0:00.42 irqbalance 5592 rpc 18 0 8048 584 464 S 0.0 0.0 0:00.00 portmap 5625 root 18 0 11032 768 632 S 0.0 0.0 0:00.00 rpc.statd 5681 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/0 5682 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/1 5683 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/2 5684 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/3 5685 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/4 5686 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/5 5687 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/6 5688 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/7 5689 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/8 5690 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/9 5691 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/10 5692 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 rpciod/11
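    A note on reading the free output above: on this kernel generation the "used" column of the Mem: row includes the page cache, so the figure that matters when hunting a leak is the -/+ buffers/cache row, not the first one. A minimal sketch of the arithmetic, using nothing beyond the numbers already shown:

        # Application-level usage = used - buffers - cached:
        # 22053 - 1544 - 1164 is the ~19343 MB that free prints in its
        # -/+ buffers/cache row; the rest is reclaimable cache.
        free -m | awk '/^Mem:/ { printf "apps: %d MiB, cache+buffers: %d MiB\n",
                                 $3 - $6 - $7, $6 + $7 }'

    By that reading, processes hold about 19 GB here, of which mysqld's RES accounts for only 841m.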

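    If mysqld itself were the growing consumer, the classic ceiling to compute with max_connections=1000 is "global buffers plus per-thread buffers times connections". A hedged sketch follows: sort_buffer_size, join_buffer_size, thread_stack and binlog_cache_size are not quoted in the question, so the snippet reads them from the running server rather than guessing (add -u/-p credentials as needed):

        # Rough upper bound on mysqld memory: global buffers plus
        # per-thread buffers multiplied by max_connections. This
        # deliberately ignores smaller per-connection allocations.
        mysql -N -e "SHOW VARIABLES WHERE Variable_name IN
            ('innodb_buffer_pool_size','key_buffer_size','max_connections',
             'read_buffer_size','read_rnd_buffer_size','sort_buffer_size',
             'join_buffer_size','thread_stack','binlog_cache_size')" |
        awk '{ v[$1] = $2 }
             END {
               global = v["innodb_buffer_pool_size"] + v["key_buffer_size"]
               per    = v["read_buffer_size"] + v["read_rnd_buffer_size"] +
                        v["sort_buffer_size"] + v["join_buffer_size"] +
                        v["thread_stack"]     + v["binlog_cache_size"]
               printf "ceiling: %.1f GiB (global %.1f GiB + %d conns x %.1f MiB)\n",
                      (global + per * v["max_connections"]) / 2^30,
                      global / 2^30, v["max_connections"], per / 2^20
             }'

    With the values quoted above and MySQL 5.5 defaults for the unquoted ones, this comes out well under 10 GiB, nowhere near 64 GB, which suggests looking beyond mysqld's configured buffers.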

  • httpd high CPU usage slowing down server response

    - by max
    My client runs an image-sharing website with about 100,000 visitors per day. It has slowed down considerably since this morning; when I checked the processes I noticed high CPU usage from httpd. Some have suggested a DDoS attack. I am not a webmaster and have no idea what is going on.

    top:

        top - 20:13:30 up 5:04, 4 users, load average: 4.56, 4.69, 4.59
        Tasks: 284 total, 3 running, 281 sleeping, 0 stopped, 0 zombie
        Cpu(s): 12.1%us, 0.9%sy, 1.7%ni, 69.0%id, 16.4%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 16037152k total, 15875096k used, 162056k free, 360468k buffers
        Swap: 4194288k total, 888k used, 4193400k free, 14050008k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         4151 apache 20  0  277m  84m 3784 R 50.2  0.5  0:01.98 httpd
         4115 apache 20  0  210m  16m 4480 S 18.3  0.1  0:00.60 httpd
        12885 root   39 19  4296  692  308 S 13.0  0.0 11:09.53 gzip
         4177 apache 20  0  214m  20m 3700 R 12.3  0.1  0:00.37 httpd
         2219 mysql  20  0 4257m 198m 5668 S 11.0  1.3 42:49.70 mysqld
         3691 apache 20  0  206m  14m 6416 S  1.7  0.1  0:03.38 httpd
         3934 apache 20  0  211m  17m 4836 S  1.0  0.1  0:03.61 httpd
         4098 apache 20  0  209m  17m 3912 S  1.0  0.1  0:04.17 httpd
         4116 apache 20  0  211m  17m 4476 S  1.0  0.1  0:00.43 httpd
         3867 apache 20  0  217m  23m 4672 S  0.7  0.1  1:03.87 httpd
         4146 apache 20  0  209m  15m 3628 S  0.7  0.1  0:00.02 httpd
         4149 apache 20  0  209m  15m 3616 S  0.7  0.1  0:00.02 httpd
        12884 root   39 19 22336 2356  944 D  0.7  0.0  0:19.21 tar
         4054 apache 20  0  206m  12m 4576 S  0.3  0.1  0:00.32 httpd

    Another top:

        top - 15:46:45 up 5:08, 4 users, load average: 5.02, 4.81, 4.64
        Tasks: 288 total, 6 running, 281 sleeping, 0 stopped, 1 zombie
        Cpu(s): 18.4%us, 0.9%sy, 2.3%ni, 56.5%id, 21.8%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 16037152k total, 15792196k used, 244956k free, 360924k buffers
        Swap: 4194288k total, 888k used, 4193400k free, 13983368k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         4622 apache 20  0  209m  16m 3868 S 54.2  0.1  0:03.99 httpd
         4514 apache 20  0  213m  20m 3924 R 50.8  0.1  0:04.93 httpd
         4627 apache 20  0  221m  27m 4560 R 18.9  0.2  0:01.20 httpd
        12885 root   39 19  4296  692  308 S 18.9  0.0 11:51.79 gzip
         2219 mysql  20  0 4257m 199m 5668 S 18.3  1.3 43:19.04 mysqld
         4512 apache 20  0  227m  33m 4736 R  5.6  0.2  0:01.93 httpd
         4520 apache 20  0  213m  19m 4640 S  1.3  0.1  0:01.48 httpd
         4590 apache 20  0  212m  19m 3932 S  1.3  0.1  0:00.06 httpd
         4573 apache 20  0  210m  16m 3556 R  1.0  0.1  0:00.03 httpd
         4562 root   20  0 15164 1388  952 R  0.7  0.0  0:00.08 top
           98 root   20  0     0    0    0 S  0.3  0.0  0:04.89 kswapd0
          100 root   39 19     0    0    0 S  0.3  0.0  0:02.85 khugepaged
         4579 apache 20  0  209m  16m 3900 S  0.3  0.1  0:00.83 httpd
         4637 apache 20  0  209m  15m 3668 S  0.3  0.1  0:00.03 httpd

    ps aux (the long run of freshly spawned idle workers is trimmed):

        [root@server ~]# ps aux | grep httpd
        root    2236  0.0 0.0 207524 10124 ?     Ss 15:09 0:03 /usr/sbin/httpd -k start -DSSL
        apache  3087  2.7 0.1 226968 28232 ?     S  20:04 0:06 /usr/sbin/httpd -k start -DSSL
        apache  3171  9.0 0.1 225044 26768 ?     R  20:05 0:17 /usr/sbin/httpd -k start -DSSL
        apache  3280  1.2 0.0      0     0 ?     Z  20:06 0:01 [httpd] <defunct>
        apache  3287 30.5 0.4 265084 65948 ?     R  20:06 0:43 /usr/sbin/httpd -k start -DSSL
        [some forty more apache workers, most started 20:05-20:08 at 0.0-6.5% CPU, trimmed]
        root    3577  0.0 0.0 103316   860 pts/2 S+ 20:08 0:00 grep httpd

    httpd error log:

        [Mon Jul 01 18:53:38 2013] [error] [client 2.178.12.67] request failed: error reading the headers, referer: http://akstube.com/image/show/27023/%D9%86%DB%8C%D9%88%D8%B4%D8%A7-%D8%B6%DB%8C%D8%BA%D9%85%DB%8C-%D9%88-%D8%AE%D9%88%D8%A7%D9%87%D8%B1-%D9%88-%D9%87%D9%85%D8%B3%D8%B1%D8%B4
        [Mon Jul 01 18:55:33 2013] [error] [client 91.229.215.240] request failed: error reading the headers, referer: http://akstube.com/image/show/44924
        [Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] Invalid method in request
        [Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] File does not exist: /var/www/html/501.shtml
        [Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] client denied by server configuration: /var/www/html/server-status
        [Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] File does not exist: /var/www/html/403.shtml
        [Mon Jul 01 19:23:57 2013] [error] [client 151.242.14.31] request failed: error reading the headers
        [Mon Jul 01 19:37:16 2013] [error] [client 2.190.16.65] request failed: error reading the headers
        [Mon Jul 01 19:56:00 2013] [error] [client 151.242.14.31] request failed: error reading the headers
        Not a JPEG file: starts with 0x89 0x50

    There are also lots of these in the messages log:

        Jul 1 20:15:47 server named[2426]: client 203.88.6.9#11926: query (cache) 'www.xxxmaza.com/A/IN' denied
        Jul 1 20:15:47 server named[2426]: client 203.88.6.9#26255: query (cache) 'www.xxxmaza.com/A/IN' denied
        Jul 1 15:45:09 server named[2426]: client 203.88.23.9#35375: query (cache) 'xxxmaza.com/A/IN' denied
        Jul 1 15:45:14 server named[2426]: client 203.88.6.10#62128: query (cache) 'www.xxxmaza.com/A/IN' denied
        Jul 1 15:49:47 server named[2426]: client 119.160.127.42#42355: query (cache) 'xxxmaza.com/A/IN' denied
        Jul 1 15:49:49 server named[2426]: client 119.160.120.42#46285: query (cache) 'xxxmaza.com/A/IN' denied
        [several dozen more identical "query (cache) ... denied" entries for xxxmaza.com and www.xxxmaza.com, all from 203.88.6.x, 203.88.23.x and 119.160.12x.42, trimmed]
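    A quick way to test the DDoS theory in the entry above is to count requests per client address in the access log; a short head dominated by a few IPs is the classic signature. A minimal sketch, assuming the stock RHEL log location (substitute whatever CustomLog the vhost actually writes):

        # Top 20 client addresses by request count, busiest first.
        # The client IP is the first field of the common/combined log format.
        awk '{ print $1 }' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head -20

    If the same addresses also appear in the "error reading the headers" entries, throttling or blocking them upstream is the usual next step.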

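    The same sweep can be run live against the TCP table instead of the log, which also catches clients that stall before sending a full request and therefore never reach the access log, matching the header-read errors above. A sketch, IPv4 only, with the web ports as an assumption:

        # Established connections per remote address on ports 80/443.
        netstat -nt | awk '$6 == "ESTABLISHED" && $4 ~ /:(80|443)$/ {
                             split($5, a, ":"); print a[1]
                           }' | sort | uniq -c | sort -rn | head -20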
