Search Results

Search found 5762 results on 231 pages for 'clients'.


  • Access denied error while mounting a shared folder?

    - by SSH
    I am a Linux newbie and I have a very basic question. I have three machines:

        machineA 10.108.24.132
        machineB 10.108.24.133
        machineC 10.108.24.134

    All of them have Ubuntu 12.04 installed and I have root access to all three. I am supposed to do the following on these machines: create the mount point /opt/exhibitor/conf and mount the directory on all servers:

        sudo mount <NFS-SERVER>:/opt/exhibitor/conf /opt/exhibitor/conf/

    I have already created the /opt/exhibitor/conf directory on all three machines as mentioned above. Now I am trying to create the mount point on all three machines, so I followed the process below.

    Install the NFS support files and the NFS kernel server on all three machines:

        $ sudo apt-get install nfs-common nfs-kernel-server

    Create the shared directory on all three machines:

        $ mkdir /opt/exhibitor/conf/

    Edit /etc/exports and add an entry like this on all three machines:

        # /etc/exports: the access control list for filesystems which may be exported
        #               to NFS clients. See exports(5).
        #
        # Example for NFSv2 and NFSv3:
        # /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
        #
        # Example for NFSv4:
        # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
        # /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
        #
        /opt/exhibitor/conf/ 10.108.24.*(rw)

    Run exportfs on all three machines:

        root@machineA:/# exportfs -rv
        exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "10.108.24.*:/opt/exhibitor/conf/".
          Assuming default behaviour ('no_subtree_check').
          NOTE: this default has changed since nfs-utils version 1.0.x
        exporting 10.108.24.*:/opt/exhibitor/conf

    Then I ran showmount on machineA:

        root@machineA:/# showmount -e 10.108.24.132
        Export list for 10.108.24.132:
        /opt/exhibitor/conf 10.108.24.*

    I have also started the NFS server like this on all three machines:

        sudo /etc/init.d/nfs-kernel-server start

    And now when I try to mount, I get an error:

        root@machineA:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/
        mount.nfs: access denied by server while mounting 10.108.24.132:/opt/exhibitor/conf

    I have also tried the same thing from machineB and machineC, and I still get the same error:

        root@machineB:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/
        root@machineC:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/

    Does my /etc/exports file look good? I have the same content on all three machines. Also, are there any logs related to NFS which I can check for clues? Any idea what I am doing wrong here?

    UPDATE: My /etc/exports file would then look like this on all three machines:

        # /etc/exports: the access control list for filesystems which may be exported
        #               to NFS clients. See exports(5).
        #
        # Example for NFSv2 and NFSv3:
        # /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
        #
        # Example for NFSv4:
        # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
        # /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
        #
        /opt/exhibitor/conf/ 10.108.24.132(rw)
        /opt/exhibitor/conf/ 10.108.24.133(rw)
        /opt/exhibitor/conf/ 10.108.24.134(rw)

    Just a quick check: the IP address that I am using for each machine comes from ifconfig, for example on machineB:

        root@machineB:/# ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:50:56:ad:5b:a7
                  inet addr:10.108.24.133  Bcast:10.108.27.255  Mask:255.255.252.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5696812 errors:0 dropped:12462 overruns:0 frame:0
                  TX packets:5083427 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7904369145 (7.9 GB)  TX bytes:601844910 (601.8 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:187144 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:187144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:24012302 (24.0 MB)  TX bytes:24012302 (24.0 MB)

    So the IP address I am using for machineB is 10.108.24.133.
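    One detail worth checking in the exports configuration above, offered as a sketch rather than a confirmed fix: exports(5) documents the * and ? wildcards as matching host names, and on many nfs-utils versions they do not match plain IP addresses, so an entry such as 10.108.24.*(rw) may be what makes the server deny the mounts. For an address range the man page uses network/netmask (CIDR) notation instead, so a hypothetical /etc/exports line could look like this:

        # network/netmask instead of an IP wildcard; /22 matches the 255.255.252.0 mask shown by ifconfig
        /opt/exhibitor/conf 10.108.24.0/22(rw,sync,no_subtree_check)

    After editing, re-export and restart the NFS server so the change is picked up:

        sudo exportfs -ra
        sudo /etc/init.d/nfs-kernel-server restart

    The /22 netmask above is taken from the ifconfig output quoted in the question; adjust it if the clients actually sit in a different subnet.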

    Read the article

  • Finding Leaders Breakfasts - Adelaide and Perth

    - by rdatson-Oracle
    HR Executives Breakfast Roundtables: Find the best leaders using science and social media! Perth, 22nd July & Adelaide, 24th July What is leadership in the 21st century? What does the latest research tell us about leadership? How do you recognise leadership qualities in individuals? How do you find individuals with these leadership qualities, hire and develop them? Join the Neuroleadership Institute, the Hay Group, and Oracle to hear: 1. the latest neuroscience research about human bias, and how it applies to finding and building better leaders; 2. the latest techniques to recognise leadership qualities in people; 3. and how you can harness your people and social media to find the best people for your company. Reflect on your hiring practices at this thought provoking breakfast, where you will be challenged to consider whether you are using best practices aimed at getting the right people into your company. Speakers Abigail Scott, Hay Group Abigail is a UK registered psychologist with 10 years international experience in the design and delivery of talent frameworks and assessments. She has delivered innovative assessment programmes across a range of organisations to identify and develop leaders. She is experienced in advising and supporting clients through new initiatives using evidence-based approach and has published a number of research papers on fairness and predictive validity in assessment. Karin Hawkins, NeuroLeadership Institute Karin is the Regional Director of NeuroLeadership Institute’s Asia-Pacific region. She brings over 20 years experience in the financial services sector delivering cultural and commercial results across a variety of organisations and functions. As a leadership risk specialist Karin understands the challenge of building deep bench strength in teams and she is able to bring evidence, insight, and experience to support executives in meeting today’s challenges. Robert Datson, Oracle Robert is a Human Capital Management specialist at Oracle, with several years as a practicing manager at IBM, learning and implementing latest management techniques for hiring, deploying and developing staff. At Oracle he works with clients to enable best practices for HR departments, and drawing the linkages between HR initiatives and bottom-line improvements. Agenda 07:30 a.m. Breakfast and Registrations 08:00 a.m. Welcome and Introductions 08:05 a.m. Breaking Bias in leadership decisions - Karin Hawkins 08:30 a.m. Identifying and developing leaders - Abigail Scott 08:55 a.m. Finding leaders, the social way - Robert Datson 09:20 a.m. Q&A and Closing Remarks 09:30 a.m. Event concludes If you are an employee or official of a government organisation, please click here for important ethics information regarding this event. 
To register for Perth, Tuesday 22nd July, please click HERE To register for Adelaide, Thursday 24th July, please click HERE Contact: To register or have questions on the event? Contact Aaron Tait on +61 2 9491 1404

    Read the article

  • Capgemini Global Business Process Management Report

    - by JuergenKress
    Welcome to the Capgemini Global Business Process Management (BPM) Report. This report is an exploration of key trends in BPM as seen by CXOs across a broad selection of sectors and geographies. BPM is perhaps at a tipping point - it’s certainly at an exciting stage in its evolution. As both an engineer and an Operational Research practitioner in my early career, and subsequently as a consultant, I have seen BPM through its development over the last 26 years. BPM has its roots in management practices such as Total Quality Management, Business Process Reengineering & Model Based Development; but the advent of the new generation of sophisticated modelling and process execution technologies has greatly enhanced BPM’s power to truly transform businesses. This has created one of the most rapidly growing and attractive market sectors for both services and technology. We see BPM as a critical management discipline that when executed against clear, cross organizational business objectives, can deliver exceptional value to that organization. However, we also see that the potential for BPM is not well understood. Our decision to conduct this global survey stemmed from discussions with our clients. We sought to gain a better impression of their understanding of BPM, how they measure its value, and how far it is prioritized within their Business and Technology Transformation efforts. This research confirms our belief that BPM needs to be a jointly owned Business and IT discipline. It also demonstrates that it is starting to gain significant traction in the market and investments are starting to pay dividends to the early adopters. At Capgemini we are being asked by our clients to help them simplify and improve their business models and the technology that supports them and we are already seeing BPM become an integral and key part of this proposition. Business Process Management is becoming ever more relevant to both large and small organizations in the current economic climate. At a time when many different market sectors are facing slow revenue growth, customer churn and increased pressures on costs, BPM becomes a critical weapon in the battle for efficiency and effectiveness in processes. Furthermore, in a challenging and changing business environment that is characterized by uncertainty, it allows organizations to adapt, be more agile and fleet of foot. Capgemini is seeing strong demand for BPM services in markets such as the USA, the UK, the Netherlands and France; and there are clear signs of increased interest in other geographies such as, Germany, Sweden, Spain, Italy and Australia. In sector terms, the financial services industry has led the way in BPM adoption over the recent past, driven by increased focus on customer- centricity and regulatory compliance. Other sectors, public sector, utilities, telco, retail and manufacturing are now not only catching up, but are starting o use BPM in new ways to create new business models to serve customers and outsmart the competition. The research findings also show however that this is a complex landscape, and we are not seeing adoption of BPM in a clear and consistent way. This report also looks at some of the barriers to adoption, with organizational silos being a major obstacle. Waters are further muddied by fragmented budgets, lack of clear governance and ownership and internal politics. 
The objective of our investment in this research project was to shed some light on these elements with a view to assisting organizations to create strategies that avoid or at least mitigate some of these barriers to success. Management of change in such endeavours is a key part in enabling the appropriate alignment of business and technology to support their transformation efforts. I hope that you find this report of benefit in the further adoption of Business Process Management. Get the full report here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Capgemini,bpm report,bpm market,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line of business applications to external partners and SAAS applications and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications. When we were looking at the operational monitoring side of the solution it was obvious that although most of the normal server monitoring capabilities would be required for the on premise components we would have to look at new approaches to validate that the operation of the service from outside of the organization was working as expected. A number of months ago one of my colleagues Elton Stoneman wrote about an approach I have introduced with a number of clients in the past where we implement a diagnostics service in each service component we build. This service would allow us to make a call which would flex some of the working parts of the system to prove it was working within any SLA. This approach is discussed on the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx In our solution we wanted to take the same approach but we had to consider that the service clients were external to the service. We also had to consider that by going through Windows Azure Service Bus it's not that easy to make most of your standard monitoring solutions just give you an easy way to do this. In a previous article I have described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager and I felt that we could use this approach to provide an excellent way to monitor our service bus endpoint. The previous article is available on the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx   The Monitoring Solution BizTalk 360 currently has an easy way to hook up the endpoint manager to a url which it will then call and if a successful response is returned it then considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.net web page which would be called by BizTalk 360 and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.net page could include logic to work out how to handle the response from the diagnostics service. For example if the overall result of the diagnostics service was successful but the call to the diagnostics service was longer than a certain amount of time then we could return an error and indicate the service is taking too long. The following diagram illustrates the monitoring pattern.   The diagnostics service which is hosted in the line of business application allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application and we they get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then it would return an HTTP response code which would depend on the error condition returned or a 200 if it was successful. 
One of the limitations of this approach is that the competing consumer pattern for listening to messages from service bus means that you cannot guarantee which server would process your diagnostics check message but with BizTalk 360 you could simply add multiple endpoint checks so that it could access the individual on-premise web servers directly to ensure that each server is working fine and then check that messages can also be processed through the cloud. Conclusion It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services which had been exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise class monitoring solution for our cloud enabled API.
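    To make the check page concrete, below is a minimal sketch of the kind of ASP.NET handler BizTalk 360's endpoint manager could poll; the handler name, the CallDiagnosticsService() stand-in and the five-second threshold are assumptions for illustration, not the implementation from the articles linked above.

        using System;
        using System.Diagnostics;
        using System.Web;

        // Hypothetical health-check handler: BizTalk 360 calls its URL and treats an
        // HTTP 200 as "healthy"; anything else flags the Service Bus endpoint.
        public class ServiceBusHealthCheckHandler : IHttpHandler
        {
            // Assumed SLA threshold; tune it to whatever the real SLA requires.
            private static readonly TimeSpan MaxAcceptableDuration = TimeSpan.FromSeconds(5);

            public void ProcessRequest(HttpContext context)
            {
                var stopwatch = Stopwatch.StartNew();
                try
                {
                    // Stand-in for the WCF proxy call that goes through the Azure
                    // Service Bus relay to the diagnostics service in the LOB application.
                    bool healthy = CallDiagnosticsService();
                    stopwatch.Stop();

                    if (!healthy)
                        context.Response.StatusCode = 500; // diagnostics reported a failure
                    else if (stopwatch.Elapsed > MaxAcceptableDuration)
                        context.Response.StatusCode = 500; // working, but slower than the SLA allows
                    else
                        context.Response.StatusCode = 200; // healthy and within the SLA

                    context.Response.Write(string.Format("Elapsed: {0} ms", stopwatch.ElapsedMilliseconds));
                }
                catch (Exception)
                {
                    context.Response.StatusCode = 500;     // relay unreachable, timeout, etc.
                }
            }

            public bool IsReusable { get { return true; } }

            private static bool CallDiagnosticsService()
            {
                // Placeholder only: the real page would call the generated WCF client
                // for the diagnostics contract exposed over the relay binding.
                return true;
            }
        }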

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Can someone explain the true landscape of Rails vs PHP deployment, particularly within the context of Reseller-based web hosting (e.g., Hostgator)?

    - by rcd
    Currently, I have a reseller account with the company HostGator. I design websites, which up until now have occasionally been wrapped in Wordpress CMSs and the like (PHP applications). I then sell hosting (of the site I've designed) to the client, which is pretty simple, in that I can simply click a button and add a new shared hosting account/site with whatever settings I want. Furthermore, I then utilize WHMCS to automate billing and account management. It's a nice package and pretty simple. I pay something like $25 a month, and can sell a hundred accounts under this (because my clients bandwidth requirements are low). Now I am finding the need to develop more customized applications, including a minimalist CMS and several proprietary things. I soon anticipate developing these apps for clients as well. Thus, I've spent the past few months learning Rails, and it's coming along well now. The thing that has nagged at me all along, though, is the deployment issue. I can't wrap my brain around it. It seems like all of the popular options (Heroku, etc) have nice automation with git and are set up in the "Rails Way". I get that (sort of). But it's terribly expensive... a single dyno, a helper, and the cheapest database (which they say is mainly suitable for testing) that isn't limited to 5MB runs $51. This is for ONE app!!! Throw in a "production" DB and you're over $200. This is like... the same prices as getting a server somewhere, right? Meanwhile, going back to what I guess is a "traditional" hosting environment with Hostgator, their server only has Ruby 1.8.7 and Rails 2.3.5... No Rails 3. AND, no Passenger (not that I really understand the difference in CGI or mod_rails or whatever, but they say Passenger is the simplest). So I'm to understand that if I build an app in Rails 3, it won't run at all on this host? But damn, I already have these accounts under my reseller account there, all running static html and/or PHP stuff, right? So what now? How do I get all of this under one simple (and affordable) roof? Forgive my ignorance, but I just don't get it. Managing a VPS is cool and all, but entails learning server admin stuff and security... And it's expensive. I get that a shared and/or reseller "server-based" (forgive the terminology) may be inadequate for large-scale apps that use a lot of bandwidth... But what about for those of us who are building real (but small and low bandwidth) apps (with Rails) and who want to deploy them simply, cheaply, using the same conceptual approach as PHP? Even after learning all of this Ruby and Rails stuff for months, I'm questioning whether it's worth it when it comes to deployment. I want to build a small app, upload it to my home directory on a shared server account, and just make it run. Why should that be so hard? Am I just choosing the wrong language/framework? Forgive my ignorance in the subject; these questions are not rhetorical; just trying to learn here. So: 1) I'd appreciate if someone could give me a good rundown of how to understand deployment in Rails vs. PHP. 2) I'd appreciate if someone could address my issue with running a hosting/web business around reseller hosting (Hostgator) while also being able to host Rails apps. Can it be done? And how can a company like Hostgator completely ignore what's current in Rails/Ruby? Thanks.

    Read the article

  • Building Enterprise Smartphone App &ndash; Part 1: Why Build Smart Phone Apps

    - by Tim Murphy
    This is part 1 in a series of post based on a talk I gave recently at the Chicago Information Technology Architects Group.  Feel free to leave feedback. Intro Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is if that is all they are good for. Many companies have aspects of their business that lend themselves to being performed by mobile devices. Some of them lean toward larger device such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging that can be used in the future. Why Build Enterprise Smartphone Applications So what are some of the ways that you can leverage smartphone technology to gain efficiency in your business or a clients business. There are a few major areas that I have seen mobile platforms being an advantage to. Your mobile sales force is a key candidate for leveraging smartphone apps.  They can visit clients in their retail location and place orders on site. It is a more personal approach which can gain you customer loyalty.  A sales person may also gather information about the way a client does business or who their target market is. This allows them you to focus marketing information or build customized support for your customer. You may also have need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices.  This can save costs on both hardware and telecommunication contracts. Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones.  This also reduces costs because it is also used for communicating with the driver and other operations.  Add to that the navigation capability of smartphones and you can see how the return on investment increases. Executives are always on the go. They spend most of their time in meetings and yet they need access to decision making information at their finger tips. With a smartphone app they can get alerts when major sales are closed or critical accounting process are completed that may need their attention. They can also answer questions by instantly pulling up BI reports. I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests they can be react to situations without needing to be tied to their desks. These are all valid reasons to need smartphone applications.  In the next installment I will discuss platforms and features. del.icio.us Tags: Smartphones,Enterprise Smartphone Apps,Architecture

    Read the article

  • Is changing my job now a wise decision? [closed]

    - by FlaminPhoenix
    First a little background about myself. I am a javascript programmer with 3.8 years of experience. I joined my current company a year and 3 months ago, and I was recruited as a javascript programmer. I was under the impression I was a programmer in a programming team but this was not the case. No one else except me and my manager knows anything about programming in my team. The other two teammates, copy paste stuff from websites into excel sheets. I was told I was being recruited for a new project, and it was true. The only problem was that the server side language they were using was PHP. They were using a popular library with PHP, and I had never worked with PHP before. Nevertheless, I learnt it well enough to get things working, and received high praise from my boss's boss on whichever project I worked on. Words like "wow" , "This looks great, the clients gonna be impressed with this." were sprinkled every now and then on reviewing my work. They even managed to sell my work to a couple of clients and as I understand, both of my projects are going to fetch them a pretty buck. The problem: I was asked to move into a project which my manager was handling. I asked them for training on the project which never came, and sure enough I couldnt complete my first task on the new project without shortcomings. I told my manager there were things I didnt know how to get done in the new project due to lack of training. His project had 0 documentation. I was told he would "take care" of everything relating to those shortcomings. In the meantime, I was asked to switch to another project. My manager made the necessary changes and later told me that the build had "broken" on the production server and that I needed to "test" my changes before saying things were done. I never deployed it on the production server. He did. I never saw / had the opportunity to see the final build before it went to production. He called me for a separate meeting and started pointing fingers at me, but I took full responsibility even if I didnt have to. He later on got on a call with his boss, in my presence, and gave him the impression that it was all my fault. I did not confront him about this so far. I have worked late / done overtime without them asking a lot, but last week, I just got home from work, and I got calls asking me to solve an issue which till then they had kept quiet about even though they were informed about it. I asked my manager why I hadnt been tasked with this when I was in office. He started telling me which statements to put where, as if to mock me, and that this "is hardly an overtime issue" and this pissed me off. Also, during the previous meeting, he was constantly talking highly about his work, at the same time trying to demean mine. In the meantime, I have attended an interview with another MNC, and the interviewers there were fully respectful of my decision to leave my current company. Its a software company, so I can expect my colleagues to know a lot more than me. Im told I can expect their offer anytime this week. My questions: Is my anger towards my manager justified? While leaving, do I tell him that its because of his actions that Im leaving? Do I erupt in anger and tell him that he shouldnt have put the blame on me since he was the one doing the deployment? This is going to be my second resignation to this company. The first time I wanted to resign, I was asked to stay back and my manager promised a lot of changes, a couple of which were made. 
How do I keep myself from getting into such situations with my employers in the future?

    Read the article

  • How do I deploy building blocks (quick parts) for Microsoft Outlook 2007?

    - by now
    I want to deploy some building blocks for Microsoft Outlook 2007. Microsoft has put up a poor solution at http://office.microsoft.com/en-us/outlook/HA102086531033.aspx#4 that asks you to save a template. That solution would require you to distribute that template to all the clients. An optimal solution would allow you to put the template containing the building blocks somewhere on the network and simply use the ”Workgroup building blocks path” group policy setting for shared paths in Microsoft Office 2007. Sadly, Outlook doesn’t respect that policy. Also, the described solution mentioned in the article above doesn’t work. Step 4 requests you to save the template as a Word Template after first asking you to save it as an Outlook Template. It seems that they copy&pasted the steps from the Word article and forgot to check whether it worked (and adjust the steps accordingly). Anyway, does anyone have any suggestions for how to distribute the building blocks without distributing NormalEmail.dotm (which will overwrite the clients’ own building blocks each time it is updated). Thanks!

    Read the article

  • Remoting server forcibly closing client connections in the middle of remote calls

    - by Carsten Hess
    Hello everyone. I have a system consisting of a server accepting remoting calls from clients with TCP as the underlying transport layer. It normally works like a charm, but if I increase the number of clients, the server randomly starts closing the TCP connections in the middle of calls. Not all calls are interrupted this way. That is really unexpected behaviour... I get no exceptions on the server side, just the client-side exception:

        System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host

        Server stack trace:
        at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
        at System.Runtime.Remoting.Channels.SocketStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Runtime.Remoting.Channels.SocketHandler.ReadFromSocket(Byte[] buffer, Int32 offset, Int32 count)
        at System.Runtime.Remoting.Channels.SocketHandler.Read(Byte[] buffer, Int32 offset, Int32 count)
        at System.Runtime.Remoting.Channels.SocketHandler.ReadAndMatchFourBytes(Byte[] buffer)
        at System.Runtime.Remoting.Channels.Tcp.TcpSocketHandler.ReadAndMatchPreamble()
        at System.Runtime.Remoting.Channels.Tcp.TcpSocketHandler.ReadVersionAndOperation(UInt16& operation)
        at System.Runtime.Remoting.Channels.Tcp.TcpClientSocketHandler.ReadHeaders()
        at System.Runtime.Remoting.Channels.Tcp.TcpClientTransportSink.ProcessMessage(IMessage msg, ITransportHeaders requestHeaders, Stream requestStream, ITransportHeaders& responseHeaders, Stream& responseStream)
        at System.Runtime.Remoting.Channels.BinaryClientFormatterSink.SyncProcessMessage(IMessage msg)

        Exception rethrown at [0]:
        at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
        at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
        at EBH.GuG.AgentKit.Transports.RemotingAgentHostEndPoint.SyncInvoke(Agent a, Int32 port)

    Read the article

  • WCF push to client through firewall?

    - by Sire
    See also: How does a WCF server inform a WCF client about changes? (Better solution than simple polling, e.g. Comet or long polling.) I need to use push technology with WCF through client firewalls. This must be a common problem, and I know for a fact it works in theory (see links below), but I have failed to get it working, and I haven't been able to find a code sample that demonstrates it. Requirements:
    - WCF clients connect to the server through TCP port 80 (netTcpBinding).
    - Server pushes back information at irregular intervals (1 min to several hours).
    - Users should not have to configure their firewalls; server pushes must pass through firewalls that have all inbound ports closed. TCP duplex on the same connection is needed for this; a dual binding does not work since a port has to be opened on the client firewall.
    - Clients send heartbeats to the server at regular intervals (perhaps every 15 mins) so the server knows the client is still alive.
    - Server is IIS7 with WAS.
    The solution seems to be duplex netTcpBinding, based on this information: WCF through firewalls and NATs; Keeping connections open in IIS. But I have yet to find a code sample that works. I've tried combining the "Duplex" and "TcpActivation" samples from Microsoft's WCF Samples without any luck. Please can someone point me to example code that works, or build a small sample app. Thanks a lot!
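    For reference, here is a minimal sketch of the duplex contract pair such a solution is usually built around; the contract, callback and method names are invented for illustration, and this is not the Microsoft "Duplex" sample mentioned above.

        using System;
        using System.ServiceModel;

        // Callback contract: the server calls back over the SAME client-initiated TCP
        // connection, which is what lets pushes pass firewalls with all inbound ports closed.
        public interface INotificationCallback
        {
            [OperationContract(IsOneWay = true)]
            void OnServerPush(string payload);
        }

        [ServiceContract(CallbackContract = typeof(INotificationCallback))]
        public interface INotificationService
        {
            // Called by the client when it connects, and again as the periodic heartbeat.
            [OperationContract]
            void Subscribe(string clientId);
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class NotificationService : INotificationService
        {
            public void Subscribe(string clientId)
            {
                // Capture the callback channel for this client so the server can push later.
                INotificationCallback callback =
                    OperationContext.Current.GetCallbackChannel<INotificationCallback>();
                callback.OnServerPush("subscribed: " + clientId);
            }
        }

    With netTcpBinding the client endpoint would then point at an address such as net.tcp://yourserver:80/Notification (the path here is arbitrary), and hosting under IIS7/WAS additionally needs a net.tcp binding on the site and the non-HTTP (TCP) activation features enabled.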

    Read the article

  • JSON-RPC and Json-rpc service discovery specifications

    - by Artyom
    Hello, I'm going to implement a JSON-RPC web service and I need specifications for this. So far I have found only one resource that can be called a real specification: JSON-RPC 1.0, http://json-rpc.org/wiki/specification. There is also the proposal for JSON-RPC 2.0: http://groups.google.com/group/json-rpc/web/json-rpc-2-0 (why is it on Google Groups?). However, I've seen that JavaScript frameworks like Dojo actively use the JSON-RPC SMD (Service Mapping Description) proposal, but it requires the JSON Schema specification, and it redirects to an incorrect URL as a reference. So far I have found the following: http://tools.ietf.org/html/draft-zyp-json-schema-02 - and it is still a draft... Can anybody point me to some actual specifications, at least something official and updated? It looks like implementing JSON-RPC 1.0 as-is may not be enough, at least for frameworks like Dojo. Or am I wrong? Questions:
    - Would an implementation of the JSON-RPC 1.0 specification be enough to provide a JSON-RPC service for most modern clients, and how many clients are there (if any) that actually support capabilities beyond JSON-RPC 1.0 (SMD, Schema, 2.0)? It looks like JSON-RPC 1.0 is the only one that has an official specification (and not a draft).
    - If I should implement SMD, or it is recommended, can somebody point to the official, most recent specifications of JSON Schema and Service Mapping Description, or are the links I found really "the specifications"?
    - Are the JSON-RPC 2.0, SMD and JSON Schema drafts stable enough to implement?
    Note: do not suggest existing JSON-RPC service implementations. Anybody?
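    For reference, a minimal JSON-RPC 1.0 exchange looks like the following (the echo method and values are made up): params is a positional array, id is mandatory, and whichever of result/error is unused is null in the response. The 2.0 proposal adds the "jsonrpc": "2.0" member and allows named params.

        --> {"method": "echo", "params": ["Hello JSON-RPC"], "id": 1}
        <-- {"result": "Hello JSON-RPC", "error": null, "id": 1}

        --> {"jsonrpc": "2.0", "method": "echo", "params": {"message": "Hello"}, "id": 2}
        <-- {"jsonrpc": "2.0", "result": "Hello", "id": 2}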

    Read the article

  • Report Model; problem regarding many-to-many relations

    - by Koen
    I'm having trouble setting up a report model to create reports with report builder. I guess I'm doing something wrong when configuring the report model, but it might also due to change of primary entity in report builder. I have 3 tables: Client, Address and Product. The Client has PK ClientNumber. The Address and Product both have a FK relation on ClientNumber. The relation between Client and Address is 1-to-many and also between Client and Product: Product-(many:1)-Client-(1:many)-Address. I've created a report model (mostly auto generate) with these 3 tables, for each table I've made an Entity. Now on the Client Entity , I've got 2 roles, Address and Product. They both have a cardinality of 'OptionalMany', because Client can have multiple Addresses or Products. On both Address and Product I have a Client Role with cardinality 'One', because for each Address or Product, there has to be a Client (tried OptionalOne as well...). Now I'm trying to create a report in Report Builder (2.0) where I select fields from these three entities. I'd like an overview of Clients with their main address and their products, but I don't seem to be able to create a report with fields from both Address and Products in it. I start by selecting attributes from Client, and as soon as I add Product for example the Primary entity changes as if I'm selecting Products (instead of Clients). This is a basic example of a problem I'm facing in a much more complex model. I've tried lots of different things for 2 days, but I can't get it to work. Does anyone have an idea how to cope with this? (Using SSRS 2008)

    Read the article

  • Comet and Simultaneous Ajax request

    - by Amitd
    Hi, I am trying to use a Comet solution with ASP.NET. The trouble is I want to implement the sending and the notification parts in the same page. On IE7, whenever I try to send a request, it just gets queued up. After reading on the internet and on Stack Overflow I found that I can only do 2 simultaneous async Ajax requests per page. So until I close my Comet Ajax request, my 2nd request doesn't get completed - it doesn't even go out from the browser. And when I checked with Firefox I see just one Ajax Comet request running all the time... so doesn't that leave me one more Ajax request? Also, the solution uses IRequiresSessionState for the asynchronous HTTP handler, which I had removed, but it still creates problems with multiple instances of IE7. I have one workaround, which is stated here: http://support.microsoft.com/kb/282402 - it means we can increase the request limit in the registry (the default is 2). By changing the "MaxConnectionsPer1_0Server" key in the hive "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" we can increase the number of requests. Basically I want to broadcast information to multiple clients connected to a server using Comet, and the clients can also send messages to the server. Broadcasting works, but the send request back to the server doesn't work. I'm using IIS 6 and ASP.NET. Are there any more workarounds or ways to send more requests? References: http://stackoverflow.com/questions/561046/how-many-concurrent-ajax-xmlhttprequest-requests-are-allowed-in-popular-browser http://stackoverflow.com/questions/349381/ajax-php-sessions-and-simultaneous-requests http://stackoverflow.com/questions/2412807/jquery-ajax-request-blocked-by-long-running-ajax-request http://stackoverflow.com/questions/898190/jquery-making-simultaneous-ajax-requests-is-it-possible

    Read the article

  • C# Application Calling Powershell Script Issues

    - by Ben
    Hi, I have a C# Winforms application which is calling a simple powershell script using the following method: Process process = new Process(); process.StartInfo.FileName = @"powershell.exe"; process.StartInfo.Arguments = String.Format("-noexit \"C:\Develop\{1}\"", scriptName); process.Start(); The powershell script simply reads a registry key and outputs the subkeys. $items = get-childitem -literalPath hklm:\software foreach($item in $items) { Write-Host $item } The problem I have is that when I run the script from the C# application I get one set of results, but when I run the script standalone (from the powershell command line) I get a different set of results entirely. The results from running from the c# app are: HKEY_LOCAL_MACHINE\software\Adobe HKEY_LOCAL_MACHINE\software\Business Objects HKEY_LOCAL_MACHINE\software\Helios HKEY_LOCAL_MACHINE\software\InstallShield HKEY_LOCAL_MACHINE\software\Macrovision HKEY_LOCAL_MACHINE\software\Microsoft HKEY_LOCAL_MACHINE\software\MozillaPlugins HKEY_LOCAL_MACHINE\software\ODBC HKEY_LOCAL_MACHINE\software\Classes HKEY_LOCAL_MACHINE\software\Clients HKEY_LOCAL_MACHINE\software\Policies HKEY_LOCAL_MACHINE\software\RegisteredApplications PS C:\Develop\RnD\SiriusPatcher\Sirius.Patcher.UI\bin\Debug When run from the powershell command line I get: PS M: C:\Develop\RegistryAccess.ps1 HKEY_LOCAL_MACHINE\software\ATI Technologies HKEY_LOCAL_MACHINE\software\Classes HKEY_LOCAL_MACHINE\software\Clients HKEY_LOCAL_MACHINE\software\Equiniti HKEY_LOCAL_MACHINE\software\Microsoft HKEY_LOCAL_MACHINE\software\ODBC HKEY_LOCAL_MACHINE\software\Policies HKEY_LOCAL_MACHINE\software\RegisteredApplications HKEY_LOCAL_MACHINE\software\Wow6432Node PS M: The second set of results match what I have in the registry, but the first set of results (which came from the c# app) don't. Any help or pointers would be greatly apreciated :) Ben
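    As a side note on the snippet above, offered as a hedged tidy-up rather than the cause of the different listings: String.Format is given the placeholder {1} but only one argument, which throws at runtime, and \D and \{ are not valid escape sequences in a non-verbatim C# string literal. A version that compiles could look like this (the -noexit switch and the path are kept from the question):

        using System.Diagnostics;

        class ScriptLauncher
        {
            // Hypothetical tidy-up: a verbatim string for the path and {0} to match
            // the single argument handed to string.Format.
            static void RunScript(string scriptName)
            {
                var process = new Process();
                process.StartInfo.FileName = "powershell.exe";
                process.StartInfo.Arguments = string.Format(@"-noexit ""C:\Develop\{0}""", scriptName);
                process.Start();
            }
        }

    Separately, the fact that Wow6432Node shows up only in the standalone run is the kind of 32-bit versus 64-bit registry-view difference worth ruling out: a 32-bit WinForms process launches the 32-bit powershell.exe, which sees the redirected HKLM\Software view.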

    Read the article

  • UML class diagram vs ER database diagram

    - by salva84
    Hi, I'm a little confused. I'm developing a program that consists of two parts, the server and the clients. Groups, places, messages and so on are stored on the server, and the clients have to connect to it. I've designed the use case diagram, the activity diagrams, and the class diagram too. The thing is that I want to implement the server with MySQL tables for storing the users, groups, places... users in groups... so I've designed an E-R diagram consisting of 6 tables. The problem is that my class diagram and my ER diagram look too similar. I think I'm not doing things right, because I have a class for practically each table, and when I have to extract all the users in my system, do I have to convert all the rows into objects first, and write to the database for each object modified? The easy choice for me would be to base my whole application only on the database and make a class to extract and insert data into it, but I have to follow the UML specification and I'm a little confused about what to do with the class diagram, because the books I have read say that I have to create a class for each "entity" of my program. Sorry for my bad English. Thank you.

    Read the article

  • SCM for Xcode?

    - by Gregor
    I am developing an application for the Mac as a small team (me + another person) effort. We are located in different cities, and have started to see the need for solid source control management. None of us have any experience with this, and both of us are relatively new to Cocoa/Obj-C/Xcode (but do have C knowledge). Does anyone have any recommendations as to which SCM system to choose? I understand that a lot of people are using Subversion, which is also supported in Xcode 3.1. Does anyone have experience with using Subversion through Xcode? Or is it a better option to chose a stand alone GUI alternative, such as Versions? Grateful for any input on this. Gregor Tomasevic, Sweden Update/personal experiences: Since this post, we have tried Versions and Cornerstone (both of which are SVN GUI-clients), as well as Xcodes built-in support for SVN. We were not particularly pleased with Versions, which seemed to have some problems with committing unversioned files/build files. The built-in SVN support in Xcode works quite well, although it probably has limitations that we have still not run into. Cornerstone is both simple to use and powerful, and does not seem to suffer from the problems we encountered with Versions. So far, we have just tried committing, updating repo, checking out latest/previous versions of our files and worked some with file comparison. It might be a whole different ball game once you start working extensively with branching, an area which we have been told both these GUI clients might have some weaknesses in. For what it's worth (and with only days of evaluation) Cornerstone seems to be a somewhat better alternative, although for simpler SCM, Xcode works well too. Thanks for all the comments.

    Read the article

  • WCF for a shared data access

    - by Audrius
    Hi all, I have a little experience with WCF and would like to get your opinion/suggestion on how the following problem can be solved: a web service needs to be accessible from multiple clients simultaneously, and the service needs to return a result from a shared data set. The concrete project I'm working on has to store a list of IP addresses/ranges. This list will be queried by a bunch of web servers for validation purposes, and we are talking about a couple of thousand or more queries per minute. My initial draft approach was to use a Windows service as a WCF host, with the service contract implementation class decorated with ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple) and holding a list object with custom locking for accessing it. So basically I have a WCF service singleton with a list = shared data - multiple clients. What I do not like about it is that the data and communication layers are merged into one, and performance-wise this doesn't feel "right". What I really, really want is a Windows service running an instance of an IP-list-holding container class, a second service running the WCF service contract implementation, and a way for the latter to query the former nicely with minimal blocking. Using another WCF channel would not really take me far from the initial draft implementation - or would it? What approach would you take? The project is still at a very early stage, so a complete design re-do is not out of the question. All ideas are appreciated. Thanks! UPDATE: The data set will be changed dynamically. The web service will have a separate method to add an IP or IP range, and on top of that there will be a scheduled task that triggers data cleanup every 10-15 minutes according to some rules. UPDATE 2: A separate benchmark project will be kicked off that should use MSSQL as a data backend (instead of the in-memory list).
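    To make the initial draft concrete, here is a minimal sketch of the singleton service described above, with a ReaderWriterLockSlim guarding an in-memory set; the contract name, its two operations and the string representation of addresses are assumptions for illustration, not a recommendation over the split-service design being asked about.

        using System.Collections.Generic;
        using System.ServiceModel;
        using System.Threading;

        [ServiceContract]
        public interface IIpListService
        {
            [OperationContract]
            bool IsListed(string ipAddress);

            [OperationContract]
            void Add(string ipAddress);
        }

        // One service instance shared by all clients; concurrent calls are allowed,
        // so access to the shared set is guarded by a reader/writer lock.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class IpListService : IIpListService
        {
            private readonly HashSet<string> addresses = new HashSet<string>();
            private readonly ReaderWriterLockSlim gate = new ReaderWriterLockSlim();

            public bool IsListed(string ipAddress)
            {
                gate.EnterReadLock();              // many validation queries can read in parallel
                try { return addresses.Contains(ipAddress); }
                finally { gate.ExitReadLock(); }
            }

            public void Add(string ipAddress)
            {
                gate.EnterWriteLock();             // writes (and the periodic cleanup) are exclusive
                try { addresses.Add(ipAddress); }
                finally { gate.ExitWriteLock(); }
            }
        }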

    Read the article

  • Indy IdSMTP and attachments in Thunderbird

    - by Lobuno
    Hello! Using the latest snapshot of Indy tiburon on D2010. A very simple project like: var stream: TFileStream; (s is TidSMTP and m is TidMessage) begin s.Connect; Stream := TFileStream.Create('c:\Test.zip', fmOpenRead or fmShareExclusive); try with TIdAttachmentMemory.Create(m.MessageParts, Stream) do begin ContentType := 'application/x-zip-compressed'; Name := ExtractFilePath('C:\'); //' FileName := 'Test.zip'; end; finally FreeAndNil(Stream); end; s.Send(m); s.Disconnect(); end; Everything works Ok in Outlook, The bat!, OE, yahoo, etc... but in Thunderbird the attachment is not shown. Looking at the source of the message in Thunderbird, the attachment is there. The only difference I can find between messages send by indy and other clients is that Indy messages have this order: Content-Type: multipart/mixed; boundary="Z\=_7oeC98yIhktvxiwiDTVyhv9R9gwkwT1" MIME-Version: 1.0 while any other clients have the order: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="Z\=_7oeC98yIhktvxiwiDTVyhv9R9gwkwT1" Don't know if THAT is the source of the problem, but if so: is this a bug on Thunderbird or is this a problem with indy which "malforms" the headers of the messages? Is this order a problem? Does that matter anyway?

    Read the article

  • How do I use a .NET class in VBA? Syntax help!

    - by Jordan S
    ok I have couple of .NET classes that I want to use in VBA. So I must register them through COM and all that. I think I have the COM registration figured out (finally) but now I need help with the syntax of how to create the objects. Here is some pseudo code showing what I am trying to do. EDIT: Changed Attached Objects to return an ArrayList instead of a List The .NET classes look like this... public class ResourceManagment { public ResourceManagment() { // Default Constructor } public static List<RandomObject> AttachedObjects() { ArrayList list = new ArrayList(); return list; } } public class RandomObject { // public RandomObject(int someParam) { } } OK, so this is what I would like to do in VBA (demonstrated in C#) but I don't know how... public class VBAClass { public void main() { ArrayList myList = ResourceManagment.AttachedObjects(); foreach(RandomObject x in myList) { // Do something with RandomObject x like list them in a Combobox } } } One thing to note is that RandomObject does not have a public default constructor. So I can not create an instance of it like Dim x As New RandomObject. MSDN says that you can not instantiate an object that doesn't have a default constructor through COM but you can still use the object type if it is returned by another method... Types must have a public default constructor to be instantiated through COM. Managed, public types are visible to COM. However, without a public default constructor (a constructor without arguments), COM clients cannot create an instance of the type. COM clients can still use the type if the type is instantiated in another way and the instance is returned to the COM client. You may include overloaded constructors that accept varying arguments for these types. However, constructors that accept arguments may only be called from managed (.NET) code. Added: Here is my attempt in VB: Dim count As Integer count = 0 Dim myObj As New ResourceManagment For Each RandomObject In myObj.AttachedObjects count = count + 1 Next RandomObject
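    For what it is worth, a hedged sketch of how the C# side could be shaped so that the VB attempt at the end has a chance of working: as posted, AttachedObjects is declared as returning List<RandomObject> while its body returns an ArrayList (which will not compile), and static members are not exposed through COM, so the method needs to be an instance member. The attributes below are one illustrative way to expose the class, not the only one.

        using System.Collections;
        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [ClassInterface(ClassInterfaceType.AutoDual)]   // exposes public instance members to VBA
        public class ResourceManagment
        {
            public ResourceManagment()
            {
                // Public default constructor: required so VBA can do "New ResourceManagment".
            }

            // Instance method (not static) returning a non-generic ArrayList:
            // COM clients cannot call static members, and generic List<T> is not COM-friendly.
            public ArrayList AttachedObjects()
            {
                ArrayList list = new ArrayList();
                // ...populate with RandomObject instances created on the managed side...
                return list;
            }
        }

    On the VBA side the loop would then call the method on the instance, e.g. For Each x In myObj.AttachedObjects with x declared As Object (or As RandomObject once a reference to the exported type library is added); RandomObject itself still cannot be created with New from VBA because it has no public default constructor.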

    Read the article

  • Trying to save file from Flash to PHP using $GLOBALS["HTTP_RAW_POST_DATA"]

    - by jolyonruss
    Let me start by saying PHP isn't my forte, I'm usually reluctant to try working with it because of problems exactly like this. The code works fine on my local machine under MAMP and on my server, but doesn't on the clients server :'( So what am I trying to do, well - save an image from Flash onto the server, simple right?! I'm using the method described on this site here: http://designreviver.com/tutorials/actionscript-3-jpeg-encoder-revealed-saving-images-from-flash/ but have made a small alteration so that instead of echoing the jpg causing the browser to download it locally, I do an fwrite and an fclose to save it to the server. Here is my PHP: $imageFile = '../images/' . $_GET['name']; $imageHandle = fopen($imageFile, "w"); fwrite($imageHandle, $jpg); fclose($imageHandle); } ? I've dona a phpinfo() on my clients server and it's running 5.2.2 my host is running 5.2.11 I don't know if much can have changed in those 9 minor revisions? I've also read another question on here which suggests making suer always_populate_raw_post_data is set to ON, but it's set to OFF on all of the server environments I've been testing in. I'm doing some XML saving using file_get_contents('php://input') which I've tried but failed to get working with images. Any help would be gratefully received, I'm happy to post the AS3 as well but it's EXACTLY the same as example I've linked above and works locally. As far as I can tell the problem lies with the PHP. Cheers.

    Read the article

  • SVN Question regarding branching and third party vendor branching

    - by fritzone
    Hi, We are developing an application which consists of: a source code base given to us by a partner infrequently. This is a somewhat working code, "final" version of something. They have their own release cycle and version tracking. on the code base above we make our changes. These can be either bugfixes or development of new features. Till now, we managed to create some code mayhem, as a result we would like to put all this in a SVN repository. I would like to ask you what you think is the best practice for this to happen with the less pain. The followings are our things that we consider important: We would like to track our bugfixes/changes since we cannot send back bugfixes to our software vendor, but we can report a bug (and they might or might not fix it). All we develop on their code remains "in-house" they are not interested in our changes. As long as we don't get a new codebase from the vendor, we consider their latest version to be the stable one we are working on. This might be branched down further, but the result is always a stable trunk, the build is done based on this "stable" trunk. When the vendor releases a new version we would like to merge our "stable" trunk (which contains a lot of changes) with their changes, thus creating a new "stable" trunk. For each version we deploy (to clients) we should be able later to fix bugs only on that version, for clients who have installed our system using that specific version There are more developers working on the codebase... (as usual :) Thanks a lot for the tips.

    Read the article

  • Create ABPerson Records in a Shared CardDAV (10.6/Server Hosted) AddressBook

    - by Woodster
    Hello, My app presently reads and writes to the local Mac OS X 10.6 client addressbook using the AddressBook.framework. It works fine. 10.6 Server introduced AddressBook Server, which 10.6 clients can connect to by setting up a CardDAV Account. User and Group records can be stored in that account, which is synchronized to the 10.6 server and made available to other clients who access the same CardDAV account. Mail.app is able to autocomplete the email addresses from accounts that are in the local datastore as well as the remote CardDAV datastore. ABPeoplePicker can see both. But, programatically, I'm not getting any CardDAV-based data returned from my queries against the shared AddressBook. I'm not sure if I need to ask it for a different AddressBook, or if I need to modify my fetch-request to indicate that I want it to be able to use the shared data too. My goal is to adapt the current code so that it can read/write to the CardDAV account too, instead of just the local addressbook. Thoughts?

    Read the article

  • Is there a difference between plain text emails, and multipart emails with only plain text?

    - by Brian Armstrong
    I'm using Rails to send emails and I just want to send a plain text email (there is no corresponding HTML part). I've noticed that if I just have one file named email.text.plain.erb it actually generates a multipart email with one part (the plain text part) like this: Content-Type: multipart/alternative; boundary=mimepart_4c04a2d34c4bb_690a4e56b0362 --mimepart_4c04a2d34c4bb_690a4e56b0362 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: Quoted-printable Content-Disposition: inline text of the email here... --mimepart_4c04a2d34c4bb_690a4e56b0362-- But if I take out the text.plain part and name it email.erb ActionMailer generates a regular plain text email without multipart like this: Content-Type: text/plain; charset=utf-8 text of the email here... Both work fine most of the time (so this is kind of nitpicky), but I guess my question is whether the second one is more correct. My goal here is just to make sure deliverability is as high as possible across a wide variety of devices and email clients. I've read that plain text emails can have slightly better deliverability rates than html and was just curious if throwing in this multipart (even if it only contained a plain text part) might throw off some of the dumber email clients. Thanks for your help!

    Read the article

  • Web Sockets: Browser won't receive the message, complains about it not starting with 0x00 (byte)

    - by giggsey
    Here is my code:

    import java.net.*;
    import java.io.*;
    import java.util.*;
    import org.jibble.pircbot.*;

    public class WebSocket {
        public static int port = 12345;
        public static ArrayList<WebSocketClient> clients = new ArrayList<WebSocketClient>();
        public static ArrayList<Boolean> handshakes = new ArrayList<Boolean>();
        public static ArrayList<String> nicknames = new ArrayList<String>();
        public static ArrayList<String> channels = new ArrayList<String>();
        public static int indexNum;

        public static void main(String args[]) {
            try {
                ServerSocket ss = new ServerSocket(WebSocket.port);
                WebSocket.console("Created socket on port " + WebSocket.port);
                while (true) {
                    Socket s = ss.accept();
                    WebSocket.console("New Client connecting...");
                    WebSocket.handshakes.add(WebSocket.indexNum, false);
                    WebSocket.nicknames.add(WebSocket.indexNum, "");
                    WebSocket.channels.add(WebSocket.indexNum, "");
                    WebSocketClient p = new WebSocketClient(s, WebSocket.indexNum);
                    Thread t = new Thread(p);
                    WebSocket.clients.add(WebSocket.indexNum, p);
                    indexNum++;
                    t.start();
                }
            } catch (Exception e) {
                WebSocket.console("ERROR - " + e.toString());
            }
        }

        public static void console(String msg) {
            Date date = new Date();
            System.out.println("[" + date.toString() + "] " + msg);
        }
    }

    class WebSocketClient implements Runnable {
        private Socket s;
        private int iAm;
        private String socket_res = "";
        private String socket_host = "";
        private String socket_origin = "";
        protected String nick = "";
        protected String ircChan = "";

        WebSocketClient(Socket socket, int mynum) {
            s = socket;
            iAm = mynum;
        }

        public void run() {
            String client = s.getInetAddress().toString();
            WebSocket.console("Connection from " + client);
            IRCclient irc = new IRCclient(iAm);
            Thread t = new Thread(irc);
            try {
                Scanner in = new Scanner(s.getInputStream());
                PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                while (true) {
                    if (!in.hasNextLine()) continue;
                    String input = in.nextLine().trim();
                    if (input.isEmpty()) continue;
                    // Lets work out what's wrong with our input
                    if (input.length() > 3 && input.charAt(0) == 65533) {
                        input = input.substring(2);
                    }
                    WebSocket.console("< " + input);
                    // Lets work out if they authenticate...
                    if (WebSocket.handshakes.get(iAm) == false) {
                        checkForHandShake(input);
                        continue;
                    }
                    // Lets check for NICK:
                    if (input.length() > 6 && input.substring(0, 6).equals("NICK: ")) {
                        nick = input.substring(6);
                        Random generator = new Random();
                        int rand = generator.nextInt();
                        WebSocket.console("I am known as " + nick);
                        WebSocket.nicknames.set(iAm, "bo-" + nick + rand);
                    }
                    if (input.length() > 9 && input.substring(0, 9).equals("CHANNEL: ")) {
                        ircChan = "bo-" + input.substring(9);
                        WebSocket.console("We will be joining " + ircChan);
                        WebSocket.channels.set(iAm, ircChan);
                    }
                    if (!ircChan.isEmpty() && !nick.isEmpty() && irc.started == false) {
                        irc.chan = ircChan;
                        irc.nick = WebSocket.nicknames.get(iAm);
                        t.start();
                        continue;
                    } else {
                        irc.msg(input);
                    }
                }
            } catch (Exception e) {
                WebSocket.console(e.toString());
                e.printStackTrace();
            }
            t.stop();
            WebSocket.channels.remove(iAm);
            WebSocket.clients.remove(iAm);
            WebSocket.handshakes.remove(iAm);
            WebSocket.nicknames.remove(iAm);
            WebSocket.console("Closing connection from " + client);
        }

        private void checkForHandShake(String input) {
            // Check for HTML5 Socket
            getHeaders(input);
            if (!socket_res.isEmpty() && !socket_host.isEmpty() && !socket_origin.isEmpty()) {
                send("HTTP/1.1 101 Web Socket Protocol Handshake\r\n" +
                     "Upgrade: WebSocket\r\n" +
                     "Connection: Upgrade\r\n" +
                     "WebSocket-Origin: " + socket_origin + "\r\n" +
                     "WebSocket-Location: ws://" + socket_host + "/\r\n\r\n", false);
                WebSocket.handshakes.set(iAm, true);
            }
            return;
        }

        private void getHeaders(String input) {
            if (input.length() >= 8 && input.substring(0, 8).equals("Origin: ")) {
                socket_origin = input.substring(8);
                return;
            }
            if (input.length() >= 6 && input.substring(0, 6).equals("Host: ")) {
                socket_host = input.substring(6);
                return;
            }
            if (input.length() >= 7 && input.substring(0, 7).equals("Cookie:")) {
                socket_res = ".";
            }
            /*input = input.substring(4);
            socket_res = input.substring(0, input.indexOf(" HTTP"));
            input = input.substring(input.indexOf("Host:") + 6);
            socket_host = input.substring(0, input.indexOf("\r\n"));
            input = input.substring(input.indexOf("Origin:") + 8);
            socket_origin = input.substring(0, input.indexOf("\r\n"));*/
            return;
        }

        protected void send(String msg, boolean newline) {
            byte c0 = 0x00;
            byte c255 = (byte) 0xff;
            try {
                PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                WebSocket.console("> " + msg);
                if (newline == true) msg = msg + "\n";
                out.print(msg + c255);
                out.flush();
            } catch (Exception e) {
                WebSocket.console(e.toString());
            }
        }

        protected void send(String msg) {
            try {
                WebSocket.console(">> " + msg);
                byte[] message = msg.getBytes();
                byte[] newmsg = new byte[message.length + 2];
                newmsg[0] = (byte) 0x00;
                for (int i = 1; i <= message.length; i++) {
                    newmsg[i] = message[i - 1];
                }
                newmsg[message.length + 1] = (byte) 0xff;
                // This prints correctly..., apparently...
                System.out.println(Arrays.toString(newmsg));
                OutputStream socketOutputStream = s.getOutputStream();
                socketOutputStream.write(newmsg);
            } catch (Exception e) {
                WebSocket.console(e.toString());
            }
        }

        protected void send(String msg, boolean one, boolean two) {
            try {
                WebSocket.console(">> " + msg);
                byte[] message = msg.getBytes();
                byte[] newmsg = new byte[message.length + 1];
                for (int i = 0; i < message.length; i++) {
                    newmsg[i] = message[i];
                }
                newmsg[message.length] = (byte) 0xff;
                // This prints correctly..., apparently...
                System.out.println(Arrays.toString(newmsg));
                OutputStream socketOutputStream = s.getOutputStream();
                socketOutputStream.write(newmsg);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    class IRCclient implements Runnable {
        protected String nick;
        protected String chan;
        protected int iAm;
        boolean started = false;
        IRCUser irc;

        IRCclient(int me) {
            iAm = me;
            irc = new IRCUser(iAm);
        }

        public void run() {
            WebSocket.console("Connecting to IRC...");
            started = true;
            irc.setNick(nick);
            irc.setVerbose(false);
            irc.connectToIRC(chan);
        }

        void msg(String input) {
            irc.sendMessage("#" + chan, input);
        }
    }

    class IRCUser extends PircBot {
        int iAm;

        IRCUser(int me) {
            iAm = me;
        }

        public void setNick(String nick) {
            this.setName(nick);
        }

        public void connectToIRC(String chan) {
            try {
                this.connect("irc.appliedirc.com");
                this.joinChannel("#" + chan);
            } catch (Exception e) {
                WebSocket.console(e.toString());
            }
        }

        public void onMessage(String channel, String sender, String login, String hostname, String message) {
            // Lets send this message to me
            WebSocket.clients.get(iAm).send(message);
        }
    }

    Whenever I try to send the message to the browser (via Web Sockets), it complains that it doesn't start with 0x00 (which is a byte). Any ideas? Edit 19/02 - Added the entire code. I know it's real messy and not neat, but I want to get it functioning first. Spent the last two days trying to fix it.
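
    One detail worth checking, offered as a sketch rather than a definitive fix: the pre-RFC 6455 draft protocols this server targets expect every text message to be framed as a single 0x00 byte, the UTF-8 payload, then a single 0xFF byte, written as raw bytes, while the handshake goes out unframed. The two-argument send() above builds its output by string concatenation (msg + c255), which Java renders as the text "-1" rather than a raw 0xFF byte, and it writes through its own PrintWriter instead of the OutputStream used by the other send() methods, so framing bytes can be lost or interleaved with buffered text. A minimal sketch of a sender that keeps everything on one stream follows; the class and method names are illustrative and not part of the original code.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;

        class FramedSender {
            private final OutputStream out;

            FramedSender(Socket socket) throws IOException {
                // One stream for both the handshake and the frames, so nothing is buffered separately.
                this.out = socket.getOutputStream();
            }

            // Handshake headers are plain bytes, with no 0x00/0xFF framing.
            void sendHandshake(String headers) throws IOException {
                out.write(headers.getBytes(StandardCharsets.US_ASCII));
                out.flush();
            }

            // Draft-style text frame: 0x00, then the UTF-8 payload, then 0xFF.
            void sendFrame(String msg) throws IOException {
                out.write(0x00);
                out.write(msg.getBytes(StandardCharsets.UTF_8));
                out.write(0xFF);
                out.flush();
            }
        }

    Note that if the browser speaks draft-76 (hixie-76), the handshake must also include the 16-byte challenge response after the headers; that part is not shown in this sketch.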

    Read the article
