Search Results

Search found 3589 results on 144 pages for 'cluster computing'.


  • What do the participants say about the Open Day in South Africa?

    - by Maria Sandu
    On the 26th of September, a group of students who had been specially selected to attend an Open Day at Oracle South Africa joined us at our offices in Woodmead, Johannesburg. The conference room was filled with inquisitive minds. What we had in store for them was a detailed presentation about Oracle, delivered by Zuko - Cluster Leader: Tech GB South Africa. The students' many questions were all answered, especially once we started addressing the opportunities we have and gave detailed information on our Graduate Programme. Our employees then came to talk about their experience, which allowed all the students to have an integrated learning experience. Inviting the students to walk around our Oracle offices allowed them to see, talk, experience a bit of the culture and ask more questions. Here is some of the feedback from the attendees: Maxwell Moloi: “The open day truly served its purpose and exceeded expectations in the sense that I got to find out more about Oracle and all the different opportunities it has to offer. The fact that Oracle supplies a full solution to a customer and not just part of it, and how the company manages to set up professional development for their employees, is what entices me to want to join the rapidly growing team of Oracle.” Nqobile Mabaso: “I found the open day to be quite informative and enlightening because, coming from a marketing background, I could apply the knowledge I got from varsity to the company. I was able to point out what they do as part of their corporate social responsibility (Oracle recently partnered with the department of education to build a school), how Oracle emphasizes relationship building because they know they sell to people and not companies, and how they offer the full stack of solutions which gives them a competitive advantage over their competitors.” Nondumiso Mvelase: “The Open Day was a wonderful experience for me especially because I have never been part of an Open Day before, so it was absolutely amazing for me. It gave me a good idea of how it is to be part of Oracle. We were served with lovely breakfast and lunch which I enjoyed. I wish the Open Day went on for a whole week. Seeing and hearing from 2013 Graduates, telling us about their experience within Oracle, was very inspiring to me. They were encouraging us to work hard if we ever got the opportunity they had. After hearing this from them I will definitely not take it for granted.” Itumeleng Moraka: “Before I walked into the Oracle offices all that was in my mind was databases and cloud storage. I was then surrounded by passionate, enthusiastic and welcoming employees. I came across a positive energy within the multinational company. I realized that Oracle is not a company that operates in survival mode. This may sound idealistic, but they operate in a non-traditional way, investing more into innovation; they stay focused on what matters most about where technology is going, and at the same time they are not losing sight of how their products make a difference in the world.” For more information on how to be part of the Oracle Graduate Programme, please follow us on Facebook!
    https://www.facebook.com/CampusAtOracle

    Read the article

  • Oracle Cloud Applications: The Right Ingredients Baked In

    - by yaldahhakim
    Eggs, flour, milk, and sugar. The magic happens when you mix these ingredients together. The same goes for the hottest technologies fast changing how IT impacts our organizations today: cloud, social, mobile, and big data. By themselves they’re pretty good; combining them with a great recipe is what unlocks real transformation power. Choosing the right cloud can be very similar to choosing the right cake. First consider comparing the core ingredients that go into baking a cake and the core design principles in building a cloud-based application. For instance, if flour is the base ingredient of a cake, then rich functionality that spans complete business processes is the base of an enterprise-grade cloud. Cloud computing is more than just consuming an "application as a service" and having someone else manage it for you. Rather, the value of cloud is about making your business more agile in the marketplace, and shortening the time it takes to deliver and adopt new innovation. It’s also about improving not only the efficiency with which we communicate but the actual quality of the information shared as well. Data from different systems, like ingredients in a cake, must also be blended together effectively and evaluated through a consolidated lens. When this doesn’t happen - for instance, when data in your sales cloud doesn't seamlessly connect with your order management and other “back office” applications - the speed and quality of information can decrease drastically. It’s like mixing ingredients in a strainer with a straw: you just can’t bring it all together without losing something. Mixing ingredients is similar to bringing clouds together and making cloud applications co-exist with traditional on-premise applications. This is where a shared services platform built on open standards and Service Oriented Architecture (SOA) is critical. It’s essentially a cloud recipe that calls for not only great ingredients, but also ingredients you can get locally or most likely already have in your kitchen (or IT shop). Open standards are the best way to deliver a cost-effective, durable application integration strategy, regardless of where your apps are deployed. They are also the best way to build your own cloud applications, or extend the ones you consume from a third party. Just like using standard ingredients and tools you already have in your kitchen, a standards-based cloud enables your IT resources to ensure a cloud works easily with other systems. Your IT staff can also make changes using tools they are already familiar with. Or, even more ideal, business users can tailor their experience without having to call on IT for help at all. This frees IT resources to focus more on developing innovative new services for the organization rather than run-and-maintain. Carrying the cake analogy forward, you need to add all the ingredients before you bake. The same is true of a modern cloud: to harness its full power, you can’t leave out some of the most important ingredients and just layer them on top later. This is what a lot of our niche competitors have done when it comes to social, mobile, big data and analytics, and other key technologies impacting the way we do business. The transformational power of these technology trends comes from having a strategy from the get-go that combines them into a winning recipe and delivers them in a unified way. Looking at the ways Oracle’s cloud is different from other clouds: not only is the breadth of functionality rich across pillars such as CRM, HCM, and ERP, but it also embeds social, mobile, and rich intelligence capabilities where they make the most sense across business processes. This strategy enables the Oracle Cloud to uniquely deliver on all three of these dimensions and help our customers unlock the full power of these transformational technologies.

    Read the article

  • Waiting for Windows 8: A Long, Hot Summer

    - by andrewbrust
    Microsoft has revealed some things about Windows 8, including part of the developer story for the new Windows 8 “tailored,” “immersive” applications.  In retrospect, very little was shared.  The bit that was revealed to us is that those applications can be developed using a combination of HTML5 and JavaScript.  Not much else was said, except that additional details would be revealed at Microsoft’s //Build/ conference in Anaheim, California in September. This has left a lot of people in suspense, and it seems that suspended state is going to last all summer.  The problem, of course, is that in the absence of hard information, people fill the void with Speculation, Rumor and Gloom.  That’s a bit like Fear, Uncertainty and Doubt, except that it’s self-imposed by the Microsoft community and not planted by Microsoft’s competitors. This is a less-than-perfect situation.  Not only is it causing developers to worry about the value of their skill sets, but I am already hearing from consulting shops that customers are getting nervous too and, in extreme cases, opting for non-Microsoft tools for their projects as a result.  I’m also hearing from dev tool ISVs that sales have suffered as a result. It’s quite possible that the customers moving off .NET wanted to do so anyway, and it’s also possible that dev tool ISVs are suffering slower sales this year due to a slowed rate of economic recovery. Without hard information, people tend to interpret things negatively.  Actually, that’s the major point in all of this. While there is a multitude of opinions about what the Windows 8 development platform will look like once fully revealed, there is an emerging consensus around one thing: it sure would help if Microsoft revealed more of its strategy…just enough to quash absurd rumors, stabilize the .NET ecosystem and get people to stay calm. We’ve had some reassurances thus far: there will be a Windows desktop mode; we’ll still have Windows Explorer, we’ll still run Office, we’ll still have a task bar, and all the skills and tools we use now will still work there.  But with reassurances like that…people still feel insecure.  Because telling us that Windows 8 will have what is essentially a “classic” mode sure makes it sound like today’s skill sets will soon be “classic” too…and then maybe they’ll just become obsolete. Humans find change scary; it’s natural.  And when left alone with their fears – because no one is saying anything to dispel them – people can go from frightened to paranoid, and can start viewing things in a downright conspiratorial light.  It would be great if Microsoft stepped into the void now and told us what is coming – especially because whatever they tell us is bound to be at least a little better than what people think they are going to hear. I don’t know what the announcements will be, but I do have it on authority, from a number of sources, that Microsoft isn’t going to talk until //Build/.  That means no news until September 13th.  Nothing until after Labor Day.  You get zippo until after the Back-to-School sales are done. What to do?  Try not to let the dark voices of gloom and doom fill your head.  Even in the absence of answers, we still have some important facts: The .NET developer community is huge. Microsoft’s customers have major investments in .NET, and in .NET skills. Political infighting in Redmond might make for irrational decisions, but ultimately public companies can’t just alienate their advocates and piss off their customers.  
Spite doesn’t trump fiduciary responsibility. The computing device markets are changing, software is changing, software business models are changing and developers are changing.  Microsoft has to keep up. The HTML + JavaScript community is huge too, and it includes many of the “changed” developers. Public companies can’t ignore new markets nor the popular standards that can help them enter those new markets.  Loyalty doesn’t trump fiduciary responsibility either. If Microsoft can appeal to new developers, then it should. If Microsoft can keep catering to its existing developers and customers -- not just through legacy support, but also through empowering futures -- then it probably will. You don’t have to shove your old friends out into the rain to make room for new ones; you can bring those new constituents in under a bigger tent.  I hope Microsoft will enlarge the tent, and I have trouble imagining why it would not.

    Read the article

  • Clouds, Clouds, Clouds Everywhere, Not a Drop of Rain!

    - by sxkumar
    At the recently concluded Oracle OpenWorld 2012, the center of discussion was clearly Cloud. Over the five action-packed days, I got to meet a large number of customers and most of them had serious interest in all things cloud.  Public Cloud - particularly the Oracle Cloud - clearly got a lot of attention and interest. I think the use cases and the value proposition for public cloud are pretty straightforward. However, when it comes to private cloud, there were some interesting revelations.  Well, I shouldn’t really call them revelations since they are pretty consistent with what I have heard from customers at other conferences as well as during 1:1 interactions. While interest in enterprise private cloud remains very high, only a handful of enterprises have truly embarked on a journey to create what the purists would call a true private cloud - with capabilities such as self-service and chargeback/showback. For a large majority, today's reality is simply consolidation and virtualization - and they are quite far off from creating the agile, self-service and transparent IT infrastructure that the enterprise cloud is all about.  Even the handful who have actually implemented a close-to-real enterprise private cloud have taken an infrastructure-centric approach and are seeing only limited business upside. Quite a few were frank enough to admit that chargeback and self-service isn’t something they see an immediate need for.  This is in stark contrast to the picture painted by all those surveys out there that show a large number of enterprises having already implemented an enterprise private cloud.  On the face of it, this seems quite contrary to the observations outlined above. So what exactly is the reality? Well, the reality is that there is undoubtedly a huge amount of interest among enterprises in transforming their legacy IT environment - often seen as too rigid, too fragmented, and ultimately too expensive - into something more agile, transparent and business-focused. At the same time, however, there is a great deal of confusion among CIOs and architects about how to get there. This isn't very surprising given all the buzz and hype surrounding cloud computing. Every IT vendor claims to have the most unique solution and there isn't a single IT product out there that does not have a cloud angle to it. Add to this the chatter in the blogosphere, and it will set even a sane mind spinning.  Consequently, most enterprises are still struggling to fully understand the concept and value of enterprise private cloud.  Even among those who have chosen to move forward relatively early, quite a few have made their decisions based more on vendor influence/preferences than on what their businesses actually need.  Clearly, there is a disconnect between the promise of the enterprise private cloud and the current adoption trends.  So what is the way forward?  I certainly do not claim to have all the answers. But here is a perspective that many cloud practitioners have found useful and thus worth sharing. To take a step back, the fundamental premise of the enterprise private cloud is IT transformation. It is the quest to create a more agile, transparent and efficient IT infrastructure that is driven more by business needs than constrained by operational and procedural inefficiencies. 
    It is the new way of delivering and consuming IT services - where IT organizations operate more like enablers of strategic services rather than just gatekeepers of IT resources. In an enterprise private cloud environment, IT organizations are expected to empower end users via self-service access/control and to give business stakeholders a transparent view of how resources are being used, what the cost of delivering a given service is, how well customers are being served, and so on.  But the most important thing to note here is that the enterprise private cloud is not just an IT project; rather, it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment. Surprised? You shouldn’t be. Just remember how business users have been at the forefront of public cloud adoption within enterprises - and private cloud is no exception.   Such a broad-based transformation makes cloud more than a technology initiative. It requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology. In my next blog, I will share how essential it is for enterprise cloud technology to go hand-in-hand with process re-engineering and organizational changes to unlock the true value of the enterprise cloud. I am sharing a short video from my session "Managing your Private Cloud" at Oracle OpenWorld 2012. More videos from this session will be posted at the recently introduced Zero to Cloud resource page. Many other experts on Oracle's enterprise private cloud solution will join me on the "Zero to Cloud" blog and share best practices, deployment tips and information on how to plan, build, deploy, monitor, manage, meter and optimize the enterprise private cloud. We look forward to your feedback, suggestions and to having an engaging conversation with you on this blog.

    Read the article

  • Lesi, from Graduate Trainee to Territory Manager

    - by Maria Sandu
    It’s the final year; University is now coming to an end and a new chapter now awaits my arrival. This part of my life is called “Looking for a Job”. With no experience whatsoever, getting a job at a well-renowned IT company is something that every IT student dreams about. CV: ✓, Application form: ✓, interviews: ✓. The acceptance call: “Lesi, I’m pleased to inform you that you have been accepted to be part of the Oracle Graduate Program for 2012”. Life would never again be the same. Being Part of the Graduate Program Going into the Graduate Program, I felt like a baby seeing candy for the first time. The Program gave me the platform to not only break into the workplace but also to help launch my career. Over the next 3 months, I went through various trainings / workshops / events / coaching / mentorship sessions. Like a construction worker building a solid foundation for a beautifully designed architecture, a clear path to build my career was set. With training out of the way, it was now time to start working closely with my team. For the rest of the year, it was all about selling. Sales, pipeline, forecasting and numbers soon became the common words in my career. As the saying goes, “once a sales man, always a sales man”. There was no turning back now; a career in sales was the new hustle in my life. I worked closely with my mentor and coach (Ibrahim), who was heading up Zambia and Malawi. This was to be one of my best moments in the program, as I started engaging with customers and getting some hands-on experience in the field. By the end of the program all the experience, hard work, training and resources came in handy, as I was now ready and fully groomed to be a sales rep. Life after the Graduate Program I’m proud to say that I’m now a Territory Manager, heading up Malawi, selling Technology, Middleware and Applications across all industries. I’m part of the Transition Cluster Team, a powerful team headed by a seasoned Senior Director. As a Territory Manager my role is to push for coverage and to penetrate the market by selling Oracle end-to-end to all accounts in Malawi. I now spend my days living out of a suitcase, moving from hotel to hotel, chasing after business in all areas of Malawi. It’s the life of a sales man and I’m enjoying every minute of it. I’m truly fortunate and grateful to have been part of such a wonderful graduate program. I owe my sales career to the graduate program, and I truly hope that the program will continue to develop and to groom new talent amongst the youth of this world. If you're interested in joining the Graduate Program in South Africa, keep an eye on our CampusAtOracle Facebook page to get the latest updates!

    Read the article

  • Tech Talk: Managing Cloud Integration

    - by Tanu Sood
    Cloud computing solutions are widely hailed as a way to reduce capital expenditures, yet organizations are realizing they also need to consider all of the nuances of integrating cloud applications with existing information systems. Cloud integration, after all, has a direct impact on your costs, maintenance and upgrade efforts. Catch this conversation on Tech Talk with Oracle Vice President Amit Zavery to understand how Oracle Fusion Middleware provides a simple and consistent method of maintaining integration interfaces across disparate systems, across both cloud and on-premise applications. Simplify your IT infrastructure and seamlessly manage data and application integration across your applications with Oracle solutions. For other Fusion Middleware talks, subscribe to Fusion Middleware Radio today and visit us on oracle.com. Photo courtesy: www.cloudtweaks.com

    Read the article

  • GI Startup Sequence

    - by Allen Gao
    This post walks through the startup sequence of 11gR2 Grid Infrastructure (GI) and what each daemon needs in order to start. The GI startup sequence can be divided into three levels: the ohasd level, the agent level, and the crsd level.
    First, the ohasd level. 1. /etc/inittab contains the entry h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null so, once the node is up, you should see a process like root 4865 1 0 Dec02 ? 00:01:01 /bin/sh /etc/init.d/init.ohasd run For this stage to succeed: the init.ohasd entry must be present, the OS must be at the correct run level, the S96ohasd rc script must be in place, and GI auto-start must be enabled (crsctl enable crs). Next, ohasd.bin starts; it needs the OLR to come up, so if ohasd.bin does not start, check whether the OLR is missing or corrupted. The OLR is stored at $GRID_HOME/cdata/${HOSTNAME}.olr. 2. ohasd.bin then spawns four agents: orarootagent, oraagent, cssdagent and cssdmonitor. The agent executables live under $GRID_HOME/bin; if they are missing or corrupted, this stage fails.
    Second, the agent level. 1. mdnsd uses multicast to discover the other nodes of the cluster. 2. gpnpd distributes the bootstrap configuration of the cluster, the gpnp profile (<gi_home>/gpnp/profiles/peer/profile.xml), between the nodes, relying on mdnsd for discovery. 3. gipcd manages communication over the cluster interconnect; it gets the interconnect information from gpnpd, so gpnpd must be working properly for gipcd to start. 4. ocssd.bin reads the voting disk location from the gpnp profile. If ocssd.bin cannot start, check that: the gpnp profile is present and readable, gpnpd is running normally, the ASM disks holding the voting files are accessible, and the interconnect is healthy. 5. Finally, the remaining lower-stack resources start: ora.ctssd, ora.asm, ora.cluster_interconnect.haip, ora.crf and ora.crsd. Note: ocssd.bin, gpnpd.bin and gipcd.bin depend on one another; if gpnpd.bin fails, ocssd.bin and gipcd.bin cannot read the gpnp profile and will not start properly.
    Third, the crsd level. 1. crsd must read the OCR; if the OCR is stored in ASM, ASM must already be up, and if the OCR cannot be read, crsd will not start. 2. crsd spawns its own agents (orarootagent, oraagent_<rdbms_owner>, oraagent_<gi_owner>); again, these live under $GRID_HOME/bin and must not be corrupted. 3. crsd then starts the cluster resources: ora.net1.network - the network resource on which scanvip, vip and listener resources depend; if it goes offline, the vip, scanvip and listener resources go offline with it. ora.<scan_name>.vip - the SCAN VIPs, normally three of them. ora.<node_name>.vip - the per-node VIP. ora.<listener_name>.lsnr - the listeners; from 11gR2 on, listener.ora is maintained automatically and should not be edited by hand. ora.LISTENER_SCAN<n>.lsnr - the SCAN listeners. ora.<diskgroup>.dg - the ASM disk group resources; starting one mounts the disk group, stopping one dismounts it. ora.<database>.db - the database resources; in 11gR2 a single resource represents the whole RAC database, with instance placement expressed as “USR_ORA_INST_NAME@SERVERNAME(<node name>)”, and because a database depends on its ASM disk groups, that dependency is recorded in the resource definition and should be changed with crsctl modify res rather than by hand. ora.<service>.svc - the database service resources; from 11gR2 a single resource per service replaces the separate srv and cs resources of 10gR2. ora.cvu - introduced in 11.2.0.2, runs cluvfy periodically to check cluster health. ora.ons - the ONS resource.
    Finally, the main GI log locations:
    $GRID_HOME/log/<node_name>/ocssd <== ocssd.bin log
    $GRID_HOME/log/<node_name>/gpnpd <== gpnpd.bin log
    $GRID_HOME/log/<node_name>/gipcd <== gipcd.bin log
    $GRID_HOME/log/<node_name>/agent/crsd <== crsd agent logs
    $GRID_HOME/log/<node_name>/agent/ohasd <== ohasd agent logs
    $GRID_HOME/log/<node_name>/mdnsd <== mdnsd.bin log
    $GRID_HOME/log/<node_name>/client <== logs of the GI client tools (ocrdump, crsctl, ocrcheck, gpnptool, etc.)
    $GRID_HOME/log/<node_name>/ctssd <== ctssd.bin log
    $GRID_HOME/log/<node_name>/crsd <== crsd.bin log
    $GRID_HOME/log/<node_name>/cvu <== cluvfy run logs
    $GRID_HOME/bin/diagcollection.sh <== script that collects all of the above for diagnosis
    One last note: the socket files under /var/tmp/.oracle and /tmp/.oracle are used for inter-process communication between the GI daemons. Do not delete them; if they are removed, GI will misbehave and may have to be restarted.
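    A quick way to see how far this stack has come up on a given node is the crsctl tooling that ships with 11gR2 GI; a minimal check sequence (run as root or as the GI owner) might look like:

        crsctl check has              # is ohasd itself up?
        crsctl check crs              # overall status of the CRS daemons
        crsctl stat res -t -init      # state of the lower-stack resources started by ohasd
        crsctl stat res -t            # state of all cluster resources once crsd is up

    Working down that list in the order described above usually localizes a startup hang to one level before any logs need to be read.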

    Read the article

  • Computer science undergraduate project ideas

    - by Mehrdad Afshari
    Hopefully, I'm going to finish my undergraduate studies next semester and I'm thinking about the topic of my final project. And yes, I've read the questions with duplicate titles. I'm asking this from a slightly different viewpoint, so it's not an exact dupe. I've spent at least half of my life coding stuff in different languages and frameworks, so I'm not looking at this project as a way to learn much about coding and preparing for real world apps or such. I've done lots of those already. But since I have to do it to complete my degree, I felt I should spend my time doing something useful instead of throwing the whole thing out. I'm planning to make it an open source project or a hosted Web app (depending on the type) if I can make a high quality thing out of it, so I decided to ask StackOverflow what could make a useful project. Situation I've plenty of freedom about the topic. They also require 30-40 pages of text describing the project. I have the following points in mind (the more satisfied, the better): Something useful for software development Something that benefits the community Having academic value is great Shouldn't take more than a month of development (I know I'm lazy). Shouldn't be related to advanced theoretical stuff (soft computing, fuzzy logic, neural networks, ...). I've been a business-oriented software developer. It should be software-oriented. While I love hacking microcontrollers and other fun embedded electronic things, I'm not really good at soldering and things like that. I'm leaning toward a Web application (think StackOverflow, PasteBin, NerdDinner, things like those). Technology It's probably going to be done in .NET (C#, F#) on the Windows platform. If I really like the project (cool low level hacking), I might actually slip to C/C++. But really, C# is what I'm efficient at. Ideas Programming language, parsing and compiler related stuff: Designing a domain specific programming language and compiler Templating language compiled to C# or IL Database tools and related code generation stuff Web related technologies: ASP.NET MVC View engine doing something cool (don't know what exactly...) Specific-purpose, small, fast ASP.NET-based Web framework Applications: Visual Studio plugin to integrate with Bazaar (it's too much work, I think). ASP.NET based, jQuery-powered issue tracker (and possibly, project lifecycle management as a whole - poor man's TFS) Others: Something related to GPGPU Looking forward to great ideas! Unfortunately, I can't help on a currently existing project. I need to start my own to prevent further problems (as it's an undergrad project, nevertheless).

    Read the article

  • Problem with AquaTerm on Snow Leopard

    - by cheetah
    When I try to install AquaTerm on Snow Leopard from MacPorts I get this: ---> Computing dependencies for aquaterm ---> Building aquaterm Error: Target org.macports.build returned: shell command "cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm" && /usr/bin/xcodebuild -target "AquaTerm" -configuration Deployment build OBJROOT=build/ SYMROOT=build/ MACOSX_DEPLOYMENT_TARGET=10.6 ARCHS=x86_64 SDKROOT= USER_APPS_DIR=/Applications/MacPorts FRAMEWORKS_DIR=/opt/local/Library/Frameworks" returned error 1 Command output: [WARN]Warning: The Copy Bundle Resources build phase contains this target's Info.plist file 'AquaTerm.framework-Info.plist'. CopyPlistFile build/Deployment/AquaTerm.framework/Versions/A/Resources/AquaTerm.framework-Info.plist AquaTerm.framework-Info.plist cd /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist AquaTerm.framework-Info.plist --outdir /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.framework/Versions/A/Resources error: can't exec '/Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist' (No such file or directory) Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist failed with exit code 71 Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist failed with exit code 71 === BUILD NATIVE TARGET AquaTerm OF PROJECT AquaTerm WITH CONFIGURATION Deployment === Check dependencies Warning: Multiple build commands for output file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources/help.html [WARN]Warning: Multiple build commands for output file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources/help.html CopyTiffFile build/Deployment/AquaTerm.app/Contents/Resources/Cross.tiff English.lproj/Cross.tiff cd /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff English.lproj/Cross.tiff --outdir /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources error: can't exec '/Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff' (No such file or directory) Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff failed with exit code 71 Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff failed with exit code 71 ** BUILD FAILED ** The following build commands failed: AQTFwk: CopyPlistFile /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.framework/Versions/A/Resources/AquaTerm.framework-Info.plist AquaTerm.framework-Info.plist AquaTerm: CopyTiffFile 
    /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources/Cross.tiff English.lproj/Cross.tiff (2 failures) Error: Status 1 encountered during processing. Before reporting a bug, first run the command again with the -d flag to get complete output. How can I solve this problem?
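    The last line of the error text is itself the first diagnostic step. A typical sequence with the standard port(1) tooling (no port-specific flags assumed) is:

        sudo port clean aquaterm          # discard the failed build tree
        sudo port -d install aquaterm     # rebuild with full debug output

    The underlying failure is xcodebuild being unable to exec copyplist and copytiff under /Developer/Library/Xcode/Plug-ins, which usually points at an Xcode installation missing pieces the port expects, so reinstalling the Xcode developer tools before rebuilding is worth trying.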

    Read the article

  • Memory Leak Issue in Weblogic, SUN, Apache and Oracle classes Options

    - by Amit
    Hi all, please find below a description of the memory leak issues. Statistics show major growth in the perm area (static classes). Flows were run for 8 hours; a heap dump was taken after 2 hours and again at the end, and growth in the perm area was identified. Statistics from our last run show 240 MB of growth in 6 hours, about 40 MB of growth every hour; at that rate a 2 GB heap will be full in 3-4 days. The heap dumps show growth in the areas listed below (JMS connection/session area).
    Apache: org.apache.xml.dtm.DTM[] org.apache.xml.dtm.ref.ExpandedNameTable$ExtendedType org.jdom.AttributeList org.jdom.Content[] org.jdom.ContentList org.jdom.Element
    SUN: * ConstantPoolCacheKlass * ConstantPoolKlass * ConstMethodKlass * MethodDataKlass * MethodKlass * SymbolKlass byte[] char[] com.sun.org.apache.xml.internal.dtm.DTM[] com.sun.org.apache.xml.internal.dtm.ref.ExtendedType java.beans.PropertyDescriptor java.lang.Class java.lang.Long java.lang.ref.WeakReference java.lang.ref.SoftReference java.lang.String java.text.Format[] java.util.concurrent.ConcurrentHashMap$Segment java.util.LinkedList$Entry
    Weblogic: com.bea.console.cvo.ConsoleValueObject$PropertyInfo com.bea.jsptools.tree.TreeNode com.bea.netuix.servlets.controls.content.StrutsContent com.bea.netuix.servlets.controls.layout.FlowLayout com.bea.netuix.servlets.controls.layout.GridLayout com.bea.netuix.servlets.controls.layout.Placeholder com.bea.netuix.servlets.controls.page.Book com.bea.netuix.servlets.controls.window.Window[] com.bea.netuix.servlets.controls.window.WindowMode javax.management.modelmbean.ModelMBeanAttributeInfo weblogic.apache.xerces.parsers.SecurityConfiguration weblogic.apache.xerces.util.AugmentationsImpl weblogic.apache.xerces.util.AugmentationsImpl$SmallContainer weblogic.apache.xerces.util.SymbolTable$Entry weblogic.apache.xerces.util.XMLAttributesImpl$Attribute weblogic.apache.xerces.xni.QName weblogic.apache.xerces.xni.QName[] weblogic.ejb.container.cache.CacheKey weblogic.ejb20.manager.SimpleKey weblogic.jdbc.common.internal.ConnectionEnv weblogic.jdbc.common.internal.StatementCacheKey weblogic.jms.common.Item weblogic.jms.common.JMSID weblogic.jms.frontend.FEConnection weblogic.logging.MessageLogger$1 weblogic.logging.WLLogRecord weblogic.rjvm.BubblingAbbrever$BubblingAbbreverEntry weblogic.rjvm.ClassTableEntry weblogic.rjvm.JVMID weblogic.rmi.cluster.ClusterableRemoteRef weblogic.rmi.internal.CollocatedRemoteRef weblogic.rmi.internal.PhantomRef weblogic.rmi.spi.ServiceContext[] weblogic.security.acl.internal.AuthenticatedSubject weblogic.security.acl.internal.AuthenticatedSubject$SealableSet weblogic.servlet.internal.ServletRuntimeMBeanImpl weblogic.transaction.internal.XidImpl weblogic.utils.collections.ConcurrentHashMap$Entry
    Oracle XA Transaction: oracle.jdbc.driver.Binder[] oracle.jdbc.driver.OracleDatabaseMetaData oracle.jdbc.driver.T4C7Ocommoncall oracle.jdbc.driver.T4C7Oversion oracle.jdbc.driver.T4C8Oall oracle.jdbc.driver.T4C8Oclose oracle.jdbc.driver.T4C8TTIBfile oracle.jdbc.driver.T4C8TTIBlob oracle.jdbc.driver.T4C8TTIClob oracle.jdbc.driver.T4C8TTIdty oracle.jdbc.driver.T4C8TTILobd oracle.jdbc.driver.T4C8TTIpro oracle.jdbc.driver.T4C8TTIrxh oracle.jdbc.driver.T4C8TTIuds oracle.jdbc.driver.T4CCallableStatement oracle.jdbc.driver.T4CClobAccessor oracle.jdbc.driver.T4CConnection oracle.jdbc.driver.T4CMAREngine oracle.jdbc.driver.T4CNumberAccessor oracle.jdbc.driver.T4CPreparedStatement oracle.jdbc.driver.T4CTTIdcb oracle.jdbc.driver.T4CTTIk2rpc oracle.jdbc.driver.T4CTTIoac oracle.jdbc.driver.T4CTTIoac[] oracle.jdbc.driver.T4CTTIoauthenticate oracle.jdbc.driver.T4CTTIokeyval oracle.jdbc.driver.T4CTTIoscid oracle.jdbc.driver.T4CTTIoses oracle.jdbc.driver.T4CTTIOtxen oracle.jdbc.driver.T4CTTIOtxse oracle.jdbc.driver.T4CTTIsto oracle.jdbc.driver.T4CXAConnection oracle.jdbc.driver.T4CXAResource oracle.jdbc.oracore.OracleTypeADT[] oracle.jdbc.xa.OracleXAResource$XidListEntry oracle.net.ano.Ano oracle.net.ns.ClientProfile oracle.net.ns.NetInputStream oracle.net.ns.NetOutputStream oracle.net.ns.SessionAtts oracle.net.nt.ConnOption oracle.net.nt.ConnStrategy oracle.net.resolver.AddrResolution oracle.sql.CharacterSet1Byte
    We are using BEA WebLogic 9.2 MP3, JDK 1.5.12 and Oracle 10.2.0.4 (for Oracle we found one patch which needs to be applied to avoid the XA transaction memory leaks). But we are stuck trying to resolve the SUN, BEA WebLogic and Apache leaks. Please suggest... regards, Amit J.
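    Since the growth is in the perm area, tracking perm gen capacity and per-classloader statistics over time helps narrow down which component is leaking classes. A sketch using the stock Sun JDK 5 tools (assuming a Sun JVM on Linux or Solaris; <pid> stands for the WebLogic server process ID):

        jstat -gcpermcapacity <pid> 5000   # perm gen capacity, sampled every 5 seconds
        jmap -permstat <pid>               # per-classloader statistics; many "dead" loaders suggest a loader leak
        jmap -histo <pid>                  # object histogram; diff two snapshots taken hours apart

    Diffing the jmap -histo output between the 2-hour and 8-hour dumps should show which of the class families listed above actually accounts for the growth, rather than merely appearing in the dump.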

    Read the article

  • Resizing Qt's QTextEdit to Match Text Height: maximumViewportSize()

    - by Aaron
    I am trying to use a QTextEdit widget inside of a form containing several Qt widgets. The form itself sits inside a QScrollArea that is the central widget for a window. My intent is that any necessary scrolling will take place in the main QScrollArea (rather than inside any widgets), and any widgets inside will automatically resize their height to hold their contents. I have tried to implement the automatic resizing of height with a QTextEdit, but have run into an odd issue. I created a sub-class of QTextEdit and reimplemented sizeHint() like this: QSize OperationEditor::sizeHint() const { QSize sizehint = QTextBrowser::sizeHint(); sizehint.setHeight(this->fitted_height); return sizehint; } this->fitted_height is kept up-to-date via this slot that is wired to the QTextEdit's "contentsChanged()" signal: void OperationEditor::fitHeightToDocument() { this->document()->setTextWidth(this->viewport()->width()); QSize document_size(this->document()->size().toSize()); this->fitted_height = document_size.height(); this->updateGeometry(); } The size policy of the QTextEdit sub-class is: this->setSizePolicy(QSizePolicy::MinimumExpanding, QSizePolicy::Preferred); I took this approach after reading this post. Here is my problem: As the QTextEdit gradually resizes to fill the window, it stops getting larger and starts scrolling within the QTextEdit, no matter what height is returned from sizeHint(). If I initially have sizeHint() return some large constant number, then the QTextEdit is very big and is contained nicely within the outer QScrollArea, as one would expect. However, if sizeHint() gradually adjusts the size of the QTextEdit rather than just making it really big to start, then it tops out when it fills the current window and starts scrolling instead of growing. I have traced this problem to be that, no matter what my sizeHint() returns, it will never resize the QTextEdit larger than the value returned from maximumViewportSize(), which is inherited from QAbstractScrollArea. Note that this is not the same number as viewport()->maximumSize(). I am unable to figure out how to set that value. Looking at Qt's source code, maximumViewportSize() returns "the size of the viewport as if the scroll bars had no valid scrolling range." This value is basically computed as the current size of the widget minus (2 * frameWidth + margins) plus any scrollbar widths/heights. This does not make a lot of sense to me, and it's not clear to me why that number would be used anywhere in a way that supersedes the sub-class's sizeHint() implementation. Also, it does seem odd that the single "frameWidth" integer is used in computing both the width and the height. Can anyone please shed some light on this? I suspect that my poor understanding of Qt's layout engine is to blame here.
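    One workaround that fits this layout (a sketch, assuming the OperationEditor sub-class from the question; setVerticalScrollBarPolicy() and setMinimumHeight() are stock QAbstractScrollArea/QWidget calls) is to disable the inner scroll bars entirely, so the outer QScrollArea does all the scrolling, and to carry the document height through minimumHeight rather than relying on sizeHint() alone:

        OperationEditor::OperationEditor(QWidget *parent) : QTextEdit(parent)
        {
            // No inner scrolling: the outer QScrollArea handles it, so the
            // maximumViewportSize() clamp no longer limits the usable height.
            setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
            setSizePolicy(QSizePolicy::MinimumExpanding, QSizePolicy::Preferred);
            connect(document(), SIGNAL(contentsChanged()),
                    this, SLOT(fitHeightToDocument()));
        }

        void OperationEditor::fitHeightToDocument()
        {
            document()->setTextWidth(viewport()->width());
            fitted_height = document()->size().toSize().height();
            // Forcing the minimum height makes the enclosing layout honor the
            // document size even where sizeHint() by itself is clamped.
            setMinimumHeight(fitted_height);
            updateGeometry();
        }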

    Read the article

  • Loading, listing, and using R Modules and Functions in PL/R

    - by Dave Jarvis
    I am having difficulty with: Listing the R packages and functions available to PostgreSQL. Installing a package (such as Kendall) for use with PL/R Calling an R function within PostgreSQL Listing Available R Packages Q.1. How do you find out what R modules have been loaded? SELECT * FROM r_typenames(); That shows the types that are available, but what about checking if Kendall( X, Y ) is loaded? For example, the documentation shows: CREATE TABLE plr_modules ( modseq int4, modsrc text ); That seems to allow inserting records to dictate that Kendall is to be loaded, but the following code doesn't explain, syntactically, how to ensure that it gets loaded: INSERT INTO plr_modules VALUES (0, 'pg.test.module.load <-function(msg) {print(msg)}'); Q.2. What would the above line look like if you were trying to load Kendall? Q.3. Is it applicable? Installing R Packages Using the "synaptic" package manager the following packages have been installed: r-base r-base-core r-base-dev r-base-html r-base-latex r-cran-acepack r-cran-boot r-cran-car r-cran-chron r-cran-cluster r-cran-codetools r-cran-design r-cran-foreign r-cran-hmisc r-cran-kernsmooth r-cran-lattice r-cran-matrix r-cran-mgcv r-cran-nlme r-cran-quadprog r-cran-robustbase r-cran-rpart r-cran-survival r-cran-vr r-recommended Q.4. How do I know if Kendall is in there? Q.5. If it isn't, how do I find out what package it is in? Q.6. If it isn't in a package suitable for installing with apt-get (aptitude, synaptic, dpkg, what have you), how do I go about installing it on Ubuntu? Q.7. Where are the installation steps documented? Calling R Functions I have the following code: EXECUTE 'SELECT ' 'regr_slope( amount, year_taken ),' 'regr_intercept( amount, year_taken ),' 'corr( amount, year_taken ),' 'sum( measurements ) AS total_measurements ' 'FROM temp_regression' INTO STRICT slope, intercept, correlation, total_measurements; This code calls the PostgreSQL function corr to calculate Pearson's correlation over the data. Ideally, I'd like to do the following (by switching corr for plr_kendall): EXECUTE 'SELECT ' 'regr_slope( amount, year_taken ),' 'regr_intercept( amount, year_taken ),' 'plr_kendall( amount, year_taken ),' 'sum( measurements ) AS total_measurements ' 'FROM temp_regression' INTO STRICT slope, intercept, correlation, total_measurements; Q.8. Do I have to write plr_kendall myself? Q.9. Where can I find a simple example that walks through: Loading an R module into PG. Writing a PG wrapper for the desired R function. Calling the PG wrapper from a SELECT. For example, would the last two steps look like: create or replace function plr_kendall( _float8, _float8 ) returns float as ' agg_kendall(arg1, arg2) ' language 'plr'; CREATE AGGREGATE agg_kendall ( sfunc = plr_array_accum, basetype = float8, -- ??? stype = _float8, -- ??? finalfunc = plr_kendall ); And then the SELECT as above? Thank you!
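    On Q.4-Q.6 and Q.8: Kendall is not among the r-cran packages listed above; it is a separate CRAN package, so one route (a sketch, assuming the database host can reach CRAN, and with plr_kendall as an illustrative name rather than anything built in) is to install it from R and then wrap it directly:

        $ sudo R
        > install.packages("Kendall")
        > library(Kendall); Kendall(c(1,2,3), c(2,4,5))$tau   # sanity check

        -- minimal PL/R wrapper over two float8 arrays; PL/R exposes the
        -- arguments to the R body as arg1 and arg2
        CREATE OR REPLACE FUNCTION plr_kendall(float8[], float8[])
        RETURNS float8 AS '
          library(Kendall)
          as.numeric(Kendall(arg1, arg2)$tau)
        ' LANGUAGE 'plr';

    With a wrapper of this shape, the aggregate-plus-accumulator machinery sketched at the end of the question is only needed if the column values are to be fed in row by row rather than as whole arrays.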

    Read the article

  • How do I verify a DKIM signature in PHP?

    - by angrychimp
    I'll admit I'm not very adept at key verification. What I have is a script that downloads messages from a POP3 server, and I'm attempting to verify the DKIM signatures in PHP. I've already figured out the body hash (bh) validation check, but I can't figure out the header validation. http://www.dkim.org/specs/rfc4871-dkimbase.html#rfc.section.6.1.3 Below is an example of my message headers. I've been able to use the Mail::DKIM package to validate the signature in Perl, so I know it's good. I just can't seem to figure out the instructions in the RFC and translate them into PHP code. DomainKey-Signature: q=dns; a=rsa-sha1; c=nofws; s=angrychimp-1.bh; d=angrychimp.net; h=From:X-Outgoing; b=RVkenibHQ7GwO5Y3tun2CNn5wSnooBSXPHA1Kmxsw6miJDnVp4XKmA9cUELwftf9 nGiRCd3rLc6eswAcVyNhQ6mRSsF55OkGJgDNHiwte/pP5Z47Lo/fd6m7rfCnYxq3 DKIM-Signature: v=1; a=rsa-sha1; d=angrychimp.net; s=angrychimp-1.bh; c=relaxed/simple; q=dns/txt; [email protected]; t=1268436255; h=From:Subject:X-Outgoing:Date; bh=gqhC2GEWbg1t7T3IfGMUKzt1NCc=; b=ZmeavryIfp5jNDIwbpifsy1UcavMnMwRL6Fy6axocQFDOBd2KjnjXpCkHxs6yBZn Wu+UCFeAP+1xwN80JW+4yOdAiK5+6IS8fiVa7TxdkFDKa0AhmJ1DTHXIlPjGE4n5; To: [email protected] Message-ID: From: DKIM Tester Reply-To: [email protected] Subject: Automated DKIM Testing (angrychimp.net) X-Outgoing: dhaka Date: Fri, 12 Mar 2010 15:24:15 -0800 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline MIME-Version: 1.0 Return-Path: [email protected] X-OriginalArrivalTime: 12 Mar 2010 23:25:50.0326 (UTC) FILETIME=[5A0ED160:01CAC23B] I can extract the public key from my DNS just fine, and I believe I'm canonicalizing the headers correctly, but I just can't get the signature validated. I don't think I'm preparing my key or computing the signature validation correctly. Is this something that's possible (do I need pear extensions or something?) or is manually validating a DKIM signature in PHP just not feasible?
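    For the header side: the input to the signature check is each header named in the h= tag, in that order, canonicalized (relaxed here: lowercase the field name, unfold, collapse runs of whitespace to a single space, trim around the colon), each followed by CRLF, and then the DKIM-Signature header itself canonicalized the same way but with the b= value emptied and with no trailing CRLF. A sketch of the final verification step, assuming $signedData holds that string, $sig the base64-decoded b= value, and $p the base64 key material from the selector's DNS TXT record:

        <?php
        // Wrap the raw p= value from DNS into a PEM public key
        $pem = "-----BEGIN PUBLIC KEY-----\n"
             . chunk_split($p, 64, "\n")
             . "-----END PUBLIC KEY-----\n";
        $key = openssl_pkey_get_public($pem);
        // a=rsa-sha1 in this signature, hence OPENSSL_ALGO_SHA1
        $ok = openssl_verify($signedData, $sig, $key, OPENSSL_ALGO_SHA1);  // 1 = valid

    Getting the canonicalized header string byte-for-byte right is where most implementations fail, so dumping what the working Mail::DKIM verifier canonicalizes (it has a debugging hook for writing out the canonicalized text) and diffing it against $signedData is a quick way to find the difference.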

    Read the article

  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011 starting almost from scratch (aside from migrating the database), brand new virtual servers. I just want to ask for some advice on my current server setup... a sanity check. All servers are running Windows Server 2008. The pages on our website are all classic ASP. Database SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test anyway). Content Manager A single server to run the Content Manager and the Publisher. There are around 10 people using it at any one time so not under a particularly heavy load. Content Data Store Filesystem located somewhere on the network. One directory for live and one for staging. Content Delivery Two servers (cd1 and cd2) each with the the following server roles installed. cd1 writes to a filesystem content data store for the live website, cd2 writes to the content data store for the staging website. Presentation Two public facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store as its a filesystem. Each of the web servers have the Content Delivery Server installed so that I can use dynamic linking (and other features?). I've so far set up everything but the web servers. Any thoughts? edit Thanks to Ram S who linked me to a decent walkthrough, upvoted. I suppose I should have posed some questions as I didn't really ask a question. I guess I'm a little confused over the content deliver aspect. I have the Content Delivery split in two separate parts. cd1 and cd2 do the work of shifting information from the Content Manager to the Staging/Live web directories. web1 and web2 should do the work of serving the web pages to the outside world and will interact with the content data store (file system). Is this a correct setup? I need some parts of the Content Delivery on my web servers right? Theoretically I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment right? But I suspect this will put the web servers under unnecessary strain should there ever be a big publish. I've been reading the 2011 Installation Manual, Content Delivery section, and I'm finding it quite hard to get my head around!

    Read the article

  • Problem with bootstrap loader and kernel

    - by dboarman-FissureStudios
    We are working on a project to learn how to write a kernel and learn the ins and outs. We have a bootstrap loader written and it appears to work. However, we are having a problem with the kernel loading. I'll start with the first part: bootloader.asm: [BITS 16] [ORG 0x0000] ; ; all the stuff in between ; ; the bottom of the bootstrap loader datasector dw 0x0000 cluster dw 0x0000 ImageName db "KERNEL SYS" msgLoading db 0x0D, 0x0A, "Loading Kernel Shell", 0x0D, 0x0A, 0x00 msgCRLF db 0x0D, 0x0A, 0x00 msgProgress db ".", 0x00 msgFailure db 0x0D, 0x0A, "ERROR : Press key to reboot", 0x00 TIMES 510-($-$$) DB 0 DW 0xAA55 ;************************************************************************* The bootloader.asm is too long for the editor without causing it to chug and choke. In addition, the bootloader and kernel do work within Bochs, as we do get the message "Welcome to our OS". Anyway, the following is what we have for a kernel at this point. kernel.asm: [BITS 16] [ORG 0x0000] [SEGMENT .text] ; code segment mov ax, 0x0100 ; location where kernel is loaded mov ds, ax mov es, ax cli mov ss, ax ; stack segment mov sp, 0xFFFF ; stack pointer at 64k limit sti mov si, strWelcomeMsg ; load message call _disp_str mov ah, 0x00 int 0x16 ; interrupt: await keypress int 0x19 ; interrupt: reboot _disp_str: lodsb ; load next character or al, al ; test for NUL character jz .DONE mov ah, 0x0E ; BIOS teletype mov bh, 0x00 ; display page 0 mov bl, 0x07 ; text attribute int 0x10 ; interrupt: invoke BIOS jmp _disp_str .DONE: ret [SEGMENT .data] ; initialized data segment strWelcomeMsg db "Welcome to our OS", 0x00 [SEGMENT .bss] ; uninitialized data segment Using nasm 2.06rc2 I compile as such: nasm bootloader.asm -o bootloader.bin -f bin nasm kernel.asm -o kernel.sys -f bin We write bootloader.bin to the floppy as such: dd if=bootloader.bin bs=512 count=1 of=/dev/fd0 We write kernel.sys to the floppy as such: cp kernel.sys /dev/fd0 As I stated, this works in Bochs. But booting from the floppy we get output like so: Loading Kernel Shell ........... ERROR : Press key to reboot Other specifics: OpenSUSE 11.2, GNOME desktop, AMD x64. Any other information I may have missed, feel free to ask. I tried to get everything in here that would be needed. If I need to, I can find a way to get the entire bootloader.asm posted somewhere. We are not really interested in using GRUB either, for several reasons. This could change, but we want to see this boot successful before we really consider GRUB.
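    One detail worth checking before digging into the loader logic itself: cp kernel.sys /dev/fd0 writes the file to the raw block device starting at sector 0, which overwrites both the boot sector just written with dd and any FAT structures that the loader's "KERNEL  SYS" directory lookup depends on. A sketch of a copy sequence that preserves the filesystem (assuming the floppy is FAT12-formatted and the mtools package is installed):

        dd if=bootloader.bin of=/dev/fd0 bs=512 count=1
        mcopy -i /dev/fd0 kernel.sys ::KERNEL.SYS
        # or copy through a mounted filesystem instead of mtools:
        mount /dev/fd0 /mnt/floppy && cp kernel.sys /mnt/floppy/ && umount /mnt/floppy

    That would match the symptom of the loader starting, printing its progress dots, and then failing to locate or load an intact kernel image.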

    Read the article

  • Resultant of a polynomial with x^n–1

    - by devin.omalley
    Resultant of a polynomial with x^n–1 (mod p) I am implementing the NTRUSign algorithm as described in http://grouper.ieee.org/groups/1363/lattPK/submissions/EESS1v2.pdf , section 2.2.7.1 which involves computing the resultant of a polynomial. I keep getting a zero vector for the resultant which is obviously incorrect. private static CompResResult compResMod(IntegerPolynomial f, int p) { int N = f.coeffs.length; IntegerPolynomial a = new IntegerPolynomial(N); a.coeffs[0] = -1; a.coeffs[N-1] = 1; IntegerPolynomial b = new IntegerPolynomial(f.coeffs); IntegerPolynomial v1 = new IntegerPolynomial(N); IntegerPolynomial v2 = new IntegerPolynomial(N); v2.coeffs[0] = 1; int da = a.degree(); int db = b.degree(); int ta = da; int c = 0; int r = 1; while (db > 0) { c = invert(b.coeffs[db], p); c = (c * a.coeffs[da]) % p; IntegerPolynomial cb = b.clone(); cb.mult(c); cb.shift(da - db); a.sub(cb, p); IntegerPolynomial v2c = v2.clone(); v2c.mult(c); v2c.shift(da - db); v1.sub(v2c, p); if (a.degree() < db) { r *= (int)Math.pow(b.coeffs[db], ta-a.degree()); r %= p; if (ta%2==1 && db%2==1) r = (-r) % p; IntegerPolynomial temp = a; a = b; b = temp; temp = v1; v1 = v2; v2 = temp; ta = db; } da = a.degree(); db = b.degree(); } r *= (int)Math.pow(b.coeffs[0], da); r %= p; c = invert(b.coeffs[0], p); v2.mult(c); v2.mult(r); v2.mod(p); return new CompResResult(v2, r); } There is pseudocode in http://www.crypto.rub.de/imperia/md/content/texte/theses/da_driessen.pdf which looks very similar. Why is my code not working? Are there any intermediate results I can check? I am not posting the IntegerPolynomial code because it isn't too interesting and I have unit tests for it that pass. CompResResult is just a simple "Java struct".
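    On intermediate checks: because x^N - 1 is monic and its roots are the N-th roots of unity, the resultant has a closed form that small cases can be tested against:

        \operatorname{Res}(x^N - 1,\ f) \;=\; \prod_{i=0}^{N-1} f(\omega^i), \qquad \omega = e^{2\pi i / N}

    For example, with N = 3 and f(x) = x + 2 this gives (2+1)(2+\omega)(2+\omega^2) = 3 \cdot 3 = 9, so compResMod(f, p) should return 9 mod p for that input; a zero result on such a small case pins the bug to the reduction loop rather than to the NTRUSign-sized inputs.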

    Read the article

  • With sqlalchemy how to dynamically bind to database engine on a per-request basis

    - by Peter Hansen
    I have a Pylons-based web application which connects via Sqlalchemy (v0.5) to a Postgres database. For security, rather than follow the typical pattern of simple web apps (as seen in just about all tutorials), I'm not using a generic Postgres user (e.g. "webapp") but am requiring that users enter their own Postgres userid and password, and am using that to establish the connection. That means we get the full benefit of Postgres security. Complicating things still further, there are two separate databases to connect to. Although they're currently in the same Postgres cluster, they need to be able to move to separate hosts at a later date. We're using sqlalchemy's declarative package, though I can't see that this has any bearing on the matter. Most examples of sqlalchemy show trivial approaches such as setting up the Metadata once, at application startup, with a generic database userid and password, which is used through the web application. This is usually done with Metadata.bind = create_engine(), sometimes even at module-level in the database model files. My question is, how can we defer establishing the connections until the user has logged in, and then (of course) re-use those connections, or re-establish them using the same credentials, for each subsequent request. We have this working -- we think -- but I'm not only not certain of the safety of it, I also think it looks incredibly heavy-weight for the situation. Inside the __call__ method of the BaseController we retrieve the userid and password from the web session, call sqlalchemy create_engine() once for each database, then call a routine which calls Session.bind_mapper() repeatedly, once for each table that may be referenced on each of those connections, even though any given request usually references only one or two tables. It looks something like this: # in lib/base.py on the BaseController class def __call__(self, environ, start_response): # note: web session contains {'username': XXX, 'password': YYY} url1 = 'postgres://%(username)s:%(password)s@server1/finance' % session url2 = 'postgres://%(username)s:%(password)s@server2/staff' % session finance = create_engine(url1) staff = create_engine(url2) db_configure(staff, finance) # see below ... etc # in another file Session = scoped_session(sessionmaker()) def db_configure(staff, finance): s = Session() from db.finance import Employee, Customer, Invoice for c in [ Employee, Customer, Invoice, ]: s.bind_mapper(c, finance) from db.staff import Project, Hour for c in [ Project, Hour, ]: s.bind_mapper(c, staff) s.close() # prevents leaking connections between sessions? So the create_engine() calls occur on every request... I can see that being needed, and the Connection Pool probably caches them and does things sensibly. But calling Session.bind_mapper() once for each table, on every request? Seems like there has to be a better way. Obviously, since a desire for strong security underlies all this, we don't want any chance that a connection established for a high-security user will inadvertently be used in a later request by a low-security user.
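    A lighter-weight shape for the same requirement (a sketch against SQLAlchemy 0.5, with illustrative names; the engine cache trades a little memory for per-request setup cost) is to cache engines per set of credentials and hand the whole class-to-engine map to the session in one configure call, instead of looping over bind_mapper():

        from sqlalchemy import create_engine
        from sqlalchemy.orm import scoped_session, sessionmaker

        _engines = {}

        def engine_for(username, password, host, db):
            # Everything that distinguishes a connection is part of the key,
            # so one user's pool is never handed to another user.
            key = (username, password, host, db)
            if key not in _engines:
                _engines[key] = create_engine(
                    'postgres://%s:%s@%s/%s' % (username, password, host, db))
            return _engines[key]

        Session = scoped_session(sessionmaker())

        def bind_session(username, password):
            finance = engine_for(username, password, 'server1', 'finance')
            staff = engine_for(username, password, 'server2', 'staff')
            from db.finance import Employee, Customer, Invoice
            from db.staff import Project, Hour
            # binds= takes a {class_or_table: engine} map, so a single
            # configure call covers every mapped class on both connections.
            Session.configure(binds={Employee: finance, Customer: finance,
                                     Invoice: finance,
                                     Project: staff, Hour: staff})

    Because engines (and their connection pools) are keyed per user here, a connection established for a high-security user is never reused by a low-security user's session, which addresses the isolation concern at the end of the question.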

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database/datastore, but the .Net app is single instance. The documents are pretty much read-only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB).

    The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. E.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method which Sharepoint used (uses?), and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field it maps to. I've avoided the key-value-pair model in the DB due to the volume of documents. This way we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document-type searches (i.e. the user selects a document type, then is presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. I've just got concerns about the rest of the data around a document, though. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions I guess:

    - Is a NoSQL DB a good fit?
    - Is MongoDB the best for ASP.Net? (I saw Raven and Velocity, but they're still kinda beta.)
    - Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would only be used when a person clicks "View History" against a document.
    - How would performance compare between the two (NoSQL DB vs denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
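    To make the contrast with the int_field_N/mapping-table approach concrete, here is a sketch (via pymongo; all collection and field names are illustrative, not from the question) of the same two tenant document types as native MongoDB documents. Each tenant's custom index fields simply become keys, so no mapping table is needed and per-type searches are plain queries.

        from pymongo import MongoClient

        client = MongoClient()
        docs = client['tenant_acme']['documents']   # one database per tenant

        docs.insert_one({
            'doc_type': 'invoice',
            'customer_id': 1042,
            'invoice_number': 'INV-2010-0042',
            'invoice_date': '2010-05-01',
            'blob_key': 'tiff/2010/05/0042',        # pointer to the image store
        })

        # a per-type search is just a query on the native field names
        for doc in docs.find({'doc_type': 'invoice', 'customer_id': 1042}):
            print(doc['invoice_number'])

        # index the fields each tenant actually searches on
        docs.create_index([('doc_type', 1), ('customer_id', 1)])

    The blob_key shown here is one way to answer the third question: the document store keeps the searchable metadata plus a key, and the action history rows in MSSQL reference that same key, fetched only when someone clicks "View History".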

    Read the article

  • Unable to verify body hash for DKIM

    - by Joshua
    I'm writing a C# DKIM validator and have come across a problem that I cannot solve. Right now I am working on calculating the body hash, as described in Section 3.7, Computing the Message Hashes. I am working with emails that I have dumped using a modified version of the EdgeTransportAsyncLogging sample in the Exchange 2010 Transport Agent SDK. Instead of converting the emails when saving, it just opens a file based on the MessageID and dumps the raw data to disk.

    I am able to successfully compute the body hash of the sample email provided in Section A.2 using the following code:

        SHA256Managed hasher = new SHA256Managed();
        ASCIIEncoding asciiEncoding = new ASCIIEncoding();
        string rawFullMessage = File.ReadAllText(@"C:\Repositories\Sample-A.2.txt");
        string headerDelimiter = "\r\n\r\n";
        int headerEnd = rawFullMessage.IndexOf(headerDelimiter);
        string header = rawFullMessage.Substring(0, headerEnd);
        string body = rawFullMessage.Substring(headerEnd + headerDelimiter.Length);
        byte[] bodyBytes = asciiEncoding.GetBytes(body);
        byte[] bodyHash = hasher.ComputeHash(bodyBytes);
        string bodyBase64 = Convert.ToBase64String(bodyHash);
        string expectedBase64 = "2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=";
        Console.WriteLine("Expected hash: {1}{0}Computed hash: {2}{0}Are equal: {3}",
            Environment.NewLine, expectedBase64, bodyBase64, expectedBase64 == bodyBase64);

    The output from the above code is:

        Expected hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=
        Computed hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=
        Are equal: True

    Now, most emails come across with the c=relaxed/relaxed setting, which requires you to do some work on the body and header before hashing and verifying. And while I was working on that (failing to get it to work), I finally came across a message with c=simple/simple, which means you process the whole body as-is, minus any empty CRLFs at the end of the body. (Really, the rules for body canonicalization are quite ... simple.)

    Here is the real DKIM email with a signature using the simple algorithm (with only unneeded headers cleaned up). Using the above code and updating the expectedBase64 hash, I get the following results:

        Expected hash: VnGg12/s7xH3BraeN5LiiN+I2Ul/db5/jZYYgt4wEIw=
        Computed hash: ISNNtgnFZxmW6iuey/3Qql5u6nflKPTke4sMXWMxNUw=
        Are equal: False

    The expected hash is the value from the bh= field of the DKIM-Signature header. The file used in the second test is direct raw output from the Exchange 2010 Transport Agent. If so inclined, you can view the modified EdgeTransportLogging.txt.

    At this point, no matter how I modify the second email -- changing the start position or the number of CRLFs at the end of the file -- I cannot get the hashes to match. What worries me is that I have been unable to validate any body hash so far (simple or relaxed) and that it may not be feasible to process DKIM through Exchange 2010.
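    For cross-checking, here is a minimal Python sketch of the "simple" body canonicalization and body hash per RFC 4871 Section 3.4.3: strip all empty lines at the end of the body, then make sure it ends with exactly one CRLF. The filename mirrors the one in the C# code; running this against the Exchange dump and comparing against bh= narrows the problem to either the canonicalization or the dump itself.

        import base64
        import hashlib
        import re

        def simple_body_hash(body_bytes):
            # "simple" canonicalization: collapse any trailing run of CRLFs
            # to one, and add a CRLF if the body lacked one entirely
            canon = re.sub(b'(\r\n)+$', b'\r\n', body_bytes)
            if not canon.endswith(b'\r\n'):
                canon += b'\r\n'
            return base64.b64encode(hashlib.sha256(canon).digest()).decode('ascii')

        # split on the first blank line, as in the C# code; reading in binary
        # mode matters -- if the dump uses bare LF line endings instead of
        # CRLF, no amount of trailing-CRLF fiddling will make the hash match,
        # and the dump itself is the thing to fix
        with open('Sample-A.2.txt', 'rb') as fh:
            raw = fh.read()
        body = raw.split(b'\r\n\r\n', 1)[1]
        print(simple_body_hash(body))

    A hex dump of the last few bytes of the saved file is worth a look for the same reason: transport agents may hand you the body with transfer-encoding or line-ending changes already applied, in which case the bytes on disk are no longer the bytes that were signed.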

    Read the article

  • Which mobile operating system should I code for?

    - by samgoody
    It seems as though mobile computing has fully arrived. I would like to rewrite two of our programs for mobile devices, but am a bit lost as to which platform to target. Complicating this decision: I would need to learn the relevant languages and IDEs -- my coding to date has been almost all web-based (PHP, JS, ActionScript, etc. Some ASPX). Most users seem to be religious about their mobile decision, so oral conversations leave me more confused than enlightened. I do not yet own a smartphone -- I will have to buy one once I know which platform to aim for. Both of my programs are more for business users (one is only useful for C.P.A.s). I am a single developer, and cannot develop for more than one platform at a time. Getting it right is important.

    Based on what I've found on the web, I would've expected RIM to be a shoo-in, and the general order to be as follows:

    - RIM BlackBerry - More of them than any other brand. Despite naysayers, they've had double the sales (or perhaps 5X the sales) of any other smartphone, and have continued to grow. And they have business users.
    - Android - According to Schmidt, they have outsold everyone else except RIM (though I can't find where I read that now), and they are just getting started. According to comScore, they are already at 8% of the market and expected to hit Schmidt's claims within six months.
    - Nokia - The largest worldwide. If they would just decide between Maemo and Symbian, I would be far less confused.
    - iPhone - Much more competition from other apps, fewer sales to be had, and an overlord that can delay or cancel my app at any time. Is Cocoa hard to learn?
    - Windows Mobile - Word is that version 7 will not be backwards compatible, and it is losing market share.
    - Palm WebOS - Perhaps this should go first, as it is the only one that offers tools to make my life easy as a web application developer. No competition in the marketplace. But not very many users either.

    However, a search on StackOverflow shows a hugely disproportionate number of iPhone questions versus BlackBerry. Likewise, there are clearly more apps on iPhone, so it must be getting developer love. What is the one platform I should develop for? Please back up your answer with the logic.

    Read the article

  • What are five things you hate about your favorite language?

    - by brian d foy
    There's been a cluster of Perl-hate on Stack Overflow lately, so I thought I'd bring my "Five things you hate about your favorite language" question to StackOverflow. Take your favorite language and tell me five things you hate about it. Those might be things that just annoy you, admitted design flaws, recognized performance problems, or any other category. You just have to hate it, and it has to be your favorite language.

    Don't compare it to another language, and don't talk about languages that you already hate. Don't talk about the things you like in your favorite language. I just want to hear the things that you hate but tolerate so you can use all of the other stuff, and I want to hear it about the language you wished other people would use.

    I ask this whenever someone tries to push their favorite language on me, and sometimes as an interview question. If someone can't find five things to hate about his favorite tool, he doesn't know it well enough to either advocate it or pull in the big dollars using it. He hasn't used it in enough different situations to fully explore it. He's advocating it as a culture or religion, which means that if I don't choose his favorite technology, I'm wrong.

    I don't care that much which language you use. Don't want to use a particular language? Then don't. You go through due diligence to make an informed choice and still don't use it? Fine. Sometimes the right answer is "You have a strong programming team with good practices and a lot of experience in Bar. Changing to Foo would be stupid."

    This is a good question for code reviews too. People who really know a codebase will have all sorts of suggestions for it, and those who don't know it so well have non-specific complaints. I ask things like "If you could start over on this project, what would you do differently?" In this fantasy land, users and programmers get to complain about anything and everything they don't like. "I want a better interface", "I want to separate the model from the view", "I'd use this module instead of this other one", "I'd rename this set of methods", or whatever they really don't like about the current situation. That's how I get a handle on how much a particular developer knows about the codebase. It's also a clue about how much of the programmer's ego is tied up in what he's telling me.

    Hate isn't the only dimension of figuring out how much people know, but I've found it to be a pretty good one. The things that they hate also give me a clue how well they are thinking about the subject.

    Read the article

  • ImageMagick on Mac OSX Snow Leopard. Is there any way to get it to work?

    - by ?????
    It seems that I have more trouble getting standard Unix things to run on Snow Leopard than on any other platform -- including Windows with Cygwin. For the past couple of days, I've been trying to get ImageMagick to run on Snow Leopard. The most obvious way, MacPorts, fails:

        tppllc-Mac-Pro:ImageMagick-sl swirsky$ sudo port install imagemagick
        --->  Computing dependencies for p5-locale-gettext
        --->  Configuring p5-locale-gettext
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-locale-gettext/work/gettext-1.05" && /opt/local/bin/perl Makefile.PL INSTALLDIRS=vendor " returned error 2
        Command output: checking for gettext... no
        checking for gettext in -I/opt/local/include -arch i386 -L/opt/local/lib -lintl...gettext function not found. Please install libintl at Makefile.PL line 18.
        no
        Error: Unable to upgrade port: 1
        Error: Unable to execute port: upgrade xorg-libXt failed
        Before reporting a bug, first run the command again with the -d flag to get complete output.
        tppllc-Mac-Pro:ImageMagick-sl swirsky$

    Not wanting to spend another two days figuring out why my libintl doesn't have a "gettext" function, I tried a different route: the script mentioned here: http://github.com/masterkain/ImageMagick-sl . This script downloads and installs ImageMagick independently of MacPorts issues. It downloads everything and compiles fine, but fails when I try to run it, with this message:

        tppllc-Mac-Pro:ImageMagick-sl swirsky$ /usr/local/bin/convert
        dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /opt/local/lib/libfontconfig.1.dylib
          Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
        Trace/BPT trap

    So now I'm two steps away from ImageMagick, trying to get a newer libiconv onto my machine. I downloaded the latest libiconv, compiled and built it. I put the resulting library in /opt/local/lib, and I still get the same error message:

        tppllc-Mac-Pro:.libs swirsky$ sudo mv libiconv.2.dylib /opt/local/lib/libiconv.2.dylib
        tppllc-Mac-Pro:.libs swirsky$ convert
        dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /opt/local/lib/libfontconfig.1.dylib
          Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
        Trace/BPT trap

    So here's my question: is it possible to get ImageMagick to run on OS X Snow Leopard? Are there any binary distributions with static libraries baked in, so I don't have to worry about these issues?

    Read the article

  • How to sort my paws?

    - by Ivo Flipse
    In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws. I manually annotated the paws (RF=right front, RH=right hind, LF=left front, LH=left hind). As you can see, there's clearly a repeating pattern, and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated.

    My initial thought was to use heuristics to do the sorting, like:

    - There's a ~60-40% ratio in weight bearing between the front and hind paws;
    - The hind paws are generally smaller in surface;
    - The paws are (often) spatially divided in left and right.

    However, I'm a bit skeptical about my heuristics, as they would fail on me as soon as I encounter a variation I hadn't thought of. They also won't be able to cope with measurements from lame dogs, which probably have rules of their own. Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like.

    Based on the answers I received on my question about peak detection within the paw, I'm hoping there are more advanced solutions to sort the paws, especially because the pressure distribution and the progression thereof are different for each separate paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence.

    So I'm looking for a better way to sort the results with their corresponding paw. For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slice that describes their location (location on the plate and in time).

    To clarify: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1'), which represents the impacts that were extracted. Also note that 'false' impacts, such as where the paw is only partially measured (in space or time), can be ignored. They are only useful because they can help in recognizing a pattern, but won't be analyzed.

    And for anyone interested, I'm keeping a blog with all the updates regarding the project!
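    As a rough starting scaffold -- not a validated method -- here is a Python sketch that turns the three heuristics above into per-impact features (total load, contact area, lateral position) and clusters them into four groups with scipy's k-means. All names are mine, the input shape is assumed from the question's description of the pickled slices, and whether these three features actually separate the four paws (especially for lame dogs) is exactly the open question.

        import numpy as np
        from scipy.cluster.vq import kmeans2, whiten

        def impact_features(data, x_slice):
            """data: (rows, cols, frames) pressure array for one impact;
            x_slice: the impact's spatial slice along the plate's long axis."""
            total_pressure = float(data.sum())                 # front paws carry ~60%
            contact_area = int((data.max(axis=2) > 0).sum())   # hind paws are smaller
            x_center = (x_slice.start + x_slice.stop) / 2.0    # left/right separation
            return [total_pressure, contact_area, x_center]

        def label_paws(impacts):
            """impacts: list of (data, x_slice) pairs from one measurement."""
            feats = np.array([impact_features(d, s) for d, s in impacts])
            feats = whiten(feats)            # scale each feature to unit variance
            np.random.seed(0)                # make the clustering repeatable
            _, labels = kmeans2(feats, 4, minit='points')
            return labels                    # 0..3; map clusters to LF/RF/LH/RH
                                             # by inspecting each cluster's means

    Richer features -- the flattened pressure distribution itself, or its progression over the contact, which the question likens to a fingerprint -- could be swapped into impact_features() without changing the clustering step.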

    Read the article

  • FASM vs MASM translation problem in mov si, offset msg

    - by Ruben Trancoso
    Hi folks, I just did my first test with MASM and FASM using (almost) the same code, and ran into trouble. The only difference is that, to produce just the 104 bytes I need to write to the MBR, in FASM I put org 7c00h and in MASM org 0h. The problem is with mov si, offset msg, which in the first case translates to 44 7C (7c44h) and with MASM translates to 44 00 (0044h) -- but only when I change org 7c00h to org 0h in MASM. Otherwise MASM produces the entire segment from 0 to 7dff. How do I solve it? In short: how do I make MASM produce a binary whose first byte is at 7c00h, with subsequent jumps remaining relative to 7c00h?

        .model TINY
        .code
        org 7c00h                       ; Boot entry point. Address 07c0:0000 in memory
            xor ax, ax                  ; Zero out ax
            mov ds, ax                  ; Set data segment to base of RAM
            jmp start                   ; Jump to the first byte after DOS boot record data

        ; ----------------------------------------------------------------------
        ; DOS boot record data
        ; ----------------------------------------------------------------------
        brINT13Flag     db 90h          ; 0002h - 0EH for INT13 AH=42 READ
        brOEM           db 'MSDOS5.0'   ; 0003h - OEM name & DOS version (8 chars)
        brBPS           dw 512          ; 000Bh - Bytes/sector
        brSPC           db 1            ; 000Dh - Sectors/cluster
        brResCount      dw 1            ; 000Eh - Reserved (boot) sectors
        brFATs          db 2            ; 0010h - FAT copies
        brRootEntries   dw 0E0h         ; 0011h - Root directory entries
        brSectorCount   dw 2880         ; 0013h - Sectors in volume, < 32MB
        brMedia         db 240          ; 0015h - Media descriptor
        brSPF           dw 9            ; 0016h - Sectors per FAT
        brSPH           dw 18           ; 0018h - Sectors per track
        brHPC           dw 2            ; 001Ah - Number of Heads
        brHidden        dd 0            ; 001Ch - Hidden sectors
        brSectors       dd 0            ; 0020h - Total number of sectors
                        db 0            ; 0024h - Physical drive no.
                        db 0            ; 0025h - Reserved (FAT32)
                        db 29h          ; 0026h - Extended boot record sig
        brSerialNum     dd 404418EAh    ; 0027h - Volume serial number (random)
        brLabel         db 'OSAdventure' ; 002Bh - Volume label (11 chars)
        brFSID          db 'FAT12   '   ; 0036h - File System ID (8 chars)

        ; ----------------------------------------------------------------------
        ; Boot code
        ; ----------------------------------------------------------------------
        start:
            mov si, offset msg
            call showmsg
        hang:
            jmp hang

        msg db 'Loading...',0

        showmsg:
            lodsb
            cmp al, 0
            jz showmsgd
            push si
            mov bx, 0007
            mov ah, 0eh
            int 10h
            pop si
            jmp showmsg
        showmsgd:
            retn

        ; ----------------------------------------------------------------------
        ; Boot record signature
        ; ----------------------------------------------------------------------
        dw 0AA55h                       ; Boot record signature
        END

    Read the article

  • Postgres could not connect to server

    - by Gary Lai
    After I did brew update and brew upgrade, my Postgres ran into a problem. I tried to uninstall Postgres and install it again, but that didn't work either. This is the error message (I also get it when I try to run rake db:migrate):

        $ psql
        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

    How can I solve it?

    Mac version: Mountain Lion. Homebrew version: 0.9.3. Postgres version: psql (PostgreSQL) 9.2.1.

    And this is what I did:

        12:30 ~/D/works$ brew uninstall postgresql
        Uninstalling /usr/local/Cellar/postgresql/9.2.1...
        12:31 ~/D/works$ brew uninstall postgresql
        Uninstalling /usr/local/Cellar/postgresql/9.1.4...
        12:31 ~/D/works$ psql --version
        bash: /usr/local/bin/psql: No such file or directory
        12:33 ~/D/works$ brew install postgresql
        ==> Downloading http://ftp.postgresql.org/pub/source/v9.2.1/postgresql-9.2.1.tar.bz2
        Already downloaded: /Library/Caches/Homebrew/postgresql-9.2.1.tar.bz2
        ......
        ......
        ==> Summary
        /usr/local/Cellar/postgresql/9.2.1: 2814 files, 38M, built in 2.7 minutes
        12:37 ~/D/works$ initdb /usr/local/var/postgres -E utf8
        The files belonging to this database system will be owned by user "laigary".
        This user must also own the server process.
        The database cluster will be initialized with locale "en_US.UTF-8".
        The default text search configuration will be set to "english".
        initdb: directory "/usr/local/var/postgres" exists but is not empty
        If you want to create a new database system, either remove or empty
        the directory "/usr/local/var/postgres" or run initdb
        with an argument other than "/usr/local/var/postgres".
        12:39 ~/D/works$ mkdir -p ~/Library/LaunchAgents
        12:39 ~/D/works$ cp /usr/local/Cellar/postgresql/9.2.1/homebrew.mxcl.postgresql.plist ~/Library/LaunchAgents/
        12:39 ~/D/works$ launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
        homebrew.mxcl.postgresql: Already loaded
        12:39 ~/D/works$ pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
        server starting
        12:39 ~/D/works$ env ARCHFLAGS="-arch x86_64" gem install pg
        Building native extensions. This could take a while...
        Successfully installed pg-0.14.1
        1 gem installed
        12:42 ~/D/works$ psql --version
        psql (PostgreSQL) 9.2.1
        12:42 ~/D/works$ psql
        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
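    As a quick diagnostic -- a sketch assuming psycopg2 is installed; the socket directories checked are just common locations, not a definitive list -- the following distinguishes "the server isn't running at all" from "client and server disagree on the socket directory":

        import os
        import psycopg2

        # if no socket exists anywhere, the server never started: with a
        # 9.1-era data directory and a 9.2 binary, server.log will typically
        # show a data-directory version mismatch that initdb on a fresh
        # directory (or pg_upgrade) resolves
        for sockdir in ('/tmp', '/var/pgsql_socket', '/usr/local/var/run'):
            path = os.path.join(sockdir, '.s.PGSQL.5432')
            print(path, 'exists' if os.path.exists(path) else 'missing')

        # a host value starting with '/' makes psycopg2 use that socket
        # directory instead of the compiled-in default
        conn = psycopg2.connect(dbname='postgres', host='/tmp')
        print(conn.server_version)

    The key line in the transcript is initdb refusing to run because /usr/local/var/postgres still holds the old 9.1 cluster, so pg_ctl "server starting" is followed by an immediate exit; /usr/local/var/postgres/server.log should confirm why.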

    Read the article
