Search Results

Search found 20883 results on 836 pages for 'wont say'.


  • No Customer Left Behind

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development

    What does customer experience mean to you? Is it a strategy for your executives? A new buzzword and marketing term? A bunch of CRM technology with social software added on? For me, customer experience is a customer-centric worldview that produces a deeper understanding of your business and what it takes to achieve sustainable, differentiated success. It requires you to prioritize and examine the journey your customers are on with your brand, so you can answer the question, "How can we drive greater value for our business by delivering a better customer experience?"

    Businesses that embrace a customer-centric worldview understand their business at a much deeper level than most. They know who their customers are, what their value is, what they do, what they say, what they want, and ultimately what that means to their business.

    "Why Isn't Everyone Doing It?"

    We're all consumers who have our own experiences with many brands. Good or bad, some of those experiences stay with us, so viscerally we understand the concept of customer experience from the stories we share. One that stands out in my mind happened as I was preparing to leave for a 12-month job assignment in Europe. I wanted to put my cable television subscription on hold. I wasn't leaving for another vendor. I wasn't upset. I just had a situation where it made sense to put my $180-per-month account on pause until I returned. Unfortunately, there was no way for this cable company to acknowledge that I was a loyal customer with a logical request, and to respond accordingly. So, ultimately, they lost my business. Research shows us that it costs six to seven times more to acquire a new customer than to retain an existing one. Heavily funding the effort of getting new customers while underfunding the effort of serving the needs of your existing customers (who are your greatest advocates) is a vicious and costly cycle.

    "Hey, These Guys Suck!"

    I love my Apple iPad because it's so easy to use. The explosion of these types of technologies, combined with new media channels, has raised our expectations and made us hyperaware of what's going on and what's available. In addition, social media has given us a megaphone to share experiences, both positive and negative, with greater impact. We are now an always-on culture that thrives on our ability to access, connect, and share anywhere, anytime. If we don't get the service, product, or value we expect, it is easy to tell many people about it. We also can quickly learn where else to get what we want. Consumers have the power of influence and choice at a global scale. The businesses that understand this principle are able to leverage that power to their advantage. The ones that don't suffer from it. Which camp are you in?

    Note: This is Part 1 in a three-part series. Stop back for Part 2 on November 19.

    Read the article

  • No access to Samba shares

    - by koanhead
    I have three shared folders in my local home directory, that is to say, on my Ubuntu desktop's /home/me/. All were set up using "Sharing Options" in Nautilus' right-click menu. The standard "Music" and "Videos" folders are configured identically: the "Guest Access" box is checked, but "Allow others to create and delete" is not. The third folder, called "shared", is configured to not allow Guest access but to allow others to modify files. I have not altered /etc/samba/smb.conf by hand; I have only used Sharing Options to create and modify these so-called "shares".

    My roommates have two Windows 7 computers and one Ubuntu Netbook Remix netbook. I have the aforementioned desktop machine and a laptop running 10.04. None of these machines can access any of the shares. Attempts to access the Guest shares result in the message "\\machine\directory is not accessible. The network name could not be found." This is the error message generated by a VM running Windows 2000; the other Windows machines generate a similar error. The Ubuntu laptop gives the error "Unable to mount location: Failed to mount Windows share." Hurrah, once again, for informative error messages. That really helps a lot.

    When attempting to browse the folder called "shared" from the laptop, I'm confronted with a password dialog. This behavior is the same with all machines I've tried in this situation. On entering my username and password for the account to which the shares belong, the password dialog briefly disappears and is replaced with an identical dialog. No error message, useful or not, appears. When attempting to browse this folder with the VM, the outcome is the same except that the password dialog helpfully states "incorrect username or password". My assumption is that the username and password in question are those of the user who owns the shares. I have tried all other username and password combinations available in this context and the outcome is the same.

    I would like to be able to share files. Sharing them with Windows machines is a nice feature, or would be if it were available. Really, I consider sharing files between two machines with the same version of the same operating system a minimum condition for network usability. Samba last functioned reliably for me more than ten years ago; I have attempted to use it on and off since then with only intermittent success.

    Oh, and "Personal File Sharing" from the Preferences menu does not result in an entry in Places → Network → my-server. In fact, the old entry "MY-SERVER" goes away and is replaced by "koanhead's public files on my-server", which, when I attempt to open it from the laptop, gives a "DBus.Error.NoReply: Message did not receive a reply."

    I know I come here and gripe about Ubuntu a lot, but on the other hand I spend literally hours every day trying to fix things in Ubuntu. It's a good system which aspires to greatness, which is why things like this either need to work or need to be adequately documented. Ideally both would be the case. Anyway, rant over. Hopefully someone will have some insight on this issue. Thanks, all who bother to read this wall o' text, for your time.
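    Since Nautilus' "Sharing Options" creates Samba usershares rather than editing smb.conf, a few standard Samba commands run on the desktop may show where the share definitions went wrong. A hedged diagnostic sketch, not a known fix:

        # Nautilus stores these shares as Samba "usershares"; inspect them directly
        net usershare list            # should list Music, Videos and shared
        net usershare info            # per-share ACLs and the guest_ok flag
        testparm -s                   # validate smb.conf and print the effective config
        smbclient -L localhost -N     # anonymous share listing, tests guest access locally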

    Read the article

  • EPPM Is a Must-Have Capability as Global Energy and Power Industries Eye US$38 Trillion in New Investments

    - by Melissa Centurio Lopes
    "The process manufacturing industry is facing an unprecedented challenge: from now until 2035, cumulative worldwide investments of US$38 trillion will be required for drilling, power generation, and other energy projects," Iain Graham, director of energy and process manufacturing for Oracle's Primavera, said in a recent webcast. He adds that process manufacturing organizations such as oil and gas, utilities, and chemicals must manage this level of investment in an environment of constrained capital markets, erratic supply and demand, aging infrastructure, heightened regulations, and declining global skills. In the following interview, Graham explains how the right enterprise project portfolio management (EPPM) technology can help the industry meet these imperatives.

    Q: Why is EPPM so important for today's process manufacturers?

    A: If the industry invests US$38 trillion without proper cost controls in place, a huge amount of resources will be put at risk, especially when it comes to cost overruns that may occur in large capital projects. Process manufacturing companies must not only control costs, but also monitor all the various contractors that will be involved in each project. If you're not managing your own workers and all the interdependencies among the different contractors, then you've got problems.

    Q: What else should process manufacturers look for?

    A: It's also important that an EPPM solution has the ability to manage more than just capital projects. For example, it's best to manage maintenance and capital projects in the same system. Say you're due to install a new transformer in a power station as part of a capital project, but routine maintenance in that area of the facility is scheduled for that morning. The lack of coordination could lead to unforeseen delays. There are also IT considerations that impact capital projects, such as adding servers and network cable for a control system in a power station. What organizations need is a true EPPM system that's not just for capital projects, maintenance, or IT activities, but instead an enterprisewide solution that provides visibility into all types of projects.

    Read the complete Q&A here and discover the practical framework for successfully managing this massive capital spending.

    Read the article

  • Javascript Inheritance Part 2

    - by PhubarBaz
    A while back I wrote about JavaScript inheritance, trying to figure out the best and easiest way to do it (http://geekswithblogs.net/PhubarBaz/archive/2010/07/08/javascript-inheritance.aspx). That was two years ago and I've learned a lot since then. But only recently have I decided to just leave classical inheritance behind and embrace prototypal inheritance.

    Most of us were trained in classical inheritance, using class hierarchies in a typed language. Unfortunately, JavaScript doesn't follow that model. It is both classless and typeless, which is hard to fathom for someone who has been using classes for the last 20 years. For the last two or three years, since I got into JavaScript, I've been trying to find the best way to force it into the class model, without much success. It's clunky and verbose and hard to understand. I think my biggest problem was that it felt so wrong to add or change object members at run time. Every time I did it I felt like I needed a shower. That's the 20 years of classical inheritance in me.

    Finally I decided to embrace change and do something different. I decided to use the factory pattern to build objects instead of trying to use inheritance. JavaScript was made for the factory pattern because of the way you can construct objects at runtime. In the factory pattern you have a factory function that you call and tell it to give you a certain type of object back. The factory function takes care of constructing the object to your specification.

    Here's an example. Say we want to have some shape objects, and they have common attributes like id and area that we want to depend on in other parts of the application. So the first thing to do is create a factory object and give it a factory method to create an abstract shape object. The factory method builds the object then returns it.

        var shapeFactory = {
            getShape: function(id) {
                var shape = {
                    id: id,
                    area: function() { throw "Not implemented"; }
                };
                return shape;
            }
        };

    Now we can add another factory method to get a rectangle. It calls the getShape() method first and then adds an implementation to it.

        getRectangle: function(id, width, height) {
            var rect = this.getShape(id);
            rect.width = width;
            rect.height = height;
            rect.area = function() {
                return this.width * this.height;
            };
            return rect;
        }

    That's pretty simple, right? No worrying about hooking up prototypes and calling base constructors or any of that crap I used to do. Now let's create a factory method to get a cuboid (rectangular cube). The cuboid object will extend the rectangle object. To get the area we will call into the base object's area method and then multiply that by the depth.

        getCuboid: function(id, width, height, depth) {
            var cuboid = this.getRectangle(id, width, height);
            cuboid.depth = depth;
            var baseArea = cuboid.area;
            cuboid.area = function() {
                var a = baseArea.call(this);
                return a * this.depth;
            };
            return cuboid;
        }

    See how we called the area method in the base object? First we save it off in a variable, then we implement our own area method and use call() to call the base function. For me this is a lot cleaner and easier than trying to emulate class hierarchies in JavaScript.
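    A quick usage sketch, assuming getRectangle and getCuboid have been added to shapeFactory as shown above:

        // build a 2x3 rectangle and a 2x3x4 cuboid from the same factory
        var rect = shapeFactory.getRectangle(1, 2, 3);
        var box = shapeFactory.getCuboid(2, 2, 3, 4);
        console.log(rect.area()); // 6
        console.log(box.area());  // 24: the base 2*3 area times depth 4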

    Read the article

  • Building Enterprise Smartphone App – Part 1: Why Build Smart Phone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Intro

    Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed by mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available, and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging and can be used in the future.

    Why Build Enterprise Smartphone Applications

    So what are some of the ways that you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms provide an advantage.

    Your mobile sales force is a key candidate for leveraging smartphone apps. They can visit clients in their retail location and place orders on site. It is a more personal approach which can gain you customer loyalty. A salesperson may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer.

    You may also need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices. This can save costs on both hardware and telecommunication contracts.

    Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones. This also reduces costs because the same device is used for communicating with the driver and other operations. Add to that the navigation capability of smartphones and you can see how the return on investment increases.

    Executives are always on the go. They spend most of their time in meetings and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports.

    I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks.

    These are all valid reasons to build smartphone applications. In the next installment I will discuss platforms and features.

    Read the article

  • Is my class structure good enough?

    - by Rivten
    So I wanted to try out this challenge on reddit, which is mostly about how well you structure your data. I decided to challenge my C++ skills. Here's how I planned this:

    - First, there's the Game class. It deals with time and is the only class main has access to. A Game has a Forest.
    - For now, the Forest class does not have a lot of things, only a size and a Factory. It will be put to better use when it comes to the SDL stuff, I guess.
    - A Factory is the thing that deals with the game objects (a.k.a. Trees, Lumberjacks and Bears). It has a vector of all GameObjects and a queue of Events which will be managed at the end of one month.
    - A GameObject is an abstract class which can be updated and which can notify the EventListener.
    - The EventListener is a class which handles all the Events of a simulation. It can receive events from a GameObject and notify the Factory if needed; the latter will manage the event correctly.
    - So the Tree, Lumberjack and Bear classes all inherit from GameObject, and Sapling and Elder Tree inherit from Tree.
    - Finally, an Event is defined by an event_type enumeration (LUMBERJACK_MAWED, SAPPLING_EVOLUTION, ...) and an event_protagonists union (a GameObject, or a pair of GameObjects: who killed whom?). See the sketch after this list.

    I was quite happy at first with this, because it seems quite logical and flexible. But I ended up questioning this structure. Here's why:

    - I dislike the fact that a GameObject needs to know about the Factory. Indeed, when a Bear moves somewhere, it needs to know if there's a Lumberjack there! Or rather, it is the Factory which handles places and objects. It would be great if a GameObject could only interact with the EventListener... or maybe it's not that much of a big deal.
    - Wouldn't it be better if I separated the Factory into three vectors, one for each kind of GameObject? The idea would be to optimize searching. If I'm looking to delete a dead Lumberjack, I would only have to look in one shorter vector rather than one very long vector. Another problem arises when I want to know whether there is any particular object at a given cell, because I have to look through all the GameObjects and see if any of them is at that cell. I would tend to think the alternative is to use a matrix, but then the issue would be that I would have empty cells (and therefore unused space).
    - I don't really know if Sapling and Elder Tree should inherit from Tree. Indeed, a Sapling is a Tree, but what about its evolution? Should I just delete the Sapling and tell the Factory to create a new Tree at the exact same place? It doesn't seem natural to me to do so. How could I improve this?
    - Is the design of an Event good enough? I've never used unions before in C++, but I didn't have any other ideas about what to use.

    Well, I hope I have been clear enough. Thank you for taking the time to help me!
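    For reference, here is a minimal C++ sketch of the Event type as described. The names are the poster's; the tagged union is one plausible reading of the design, and GameObject is only forward-declared:

        // hypothetical sketch of the Event described above, C++03-style
        enum event_type { LUMBERJACK_MAWED, SAPPLING_EVOLUTION /*, ... */ };

        class GameObject; // defined elsewhere in the poster's design

        struct Event {
            event_type type;
            union {
                GameObject* subject;                                      // e.g. the sapling that evolved
                struct { GameObject* killer; GameObject* victim; } pair;  // who killed whom
            } protagonists;
        };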

    Read the article

  • How to display strings in activity 3 from activity 1, 2? [migrated]

    - by user107160
    I need to get string values from two different activities, say activity1 and activity2; each activity has at most four EditText fields, so eight fields in total should be displayed, in order, in activity3. I have tried the following code, but the values are not displayed in activity3. Look at the code:

    Activity1

        String namef = fname.getText().toString();
        Intent first = new Intent(AssessmentActivity.this, Second.class);
        first.putExtra("list1", namef);
        startActivity(first);

        String namel = lname.getText().toString();
        Intent second = new Intent(AssessmentActivity.this, Second.class);
        second.putExtra("list2", namel);
        startActivity(second);

        String phone = mob.getText().toString();
        Intent third = new Intent(AssessmentActivity.this, Second.class);
        third.putExtra("list3", phone);
        startActivity(third);

        String mailid = email.getText().toString();
        Intent fourth = new Intent(AssessmentActivity.this, Second.class);
        fourth.putExtra("list4", mailid);
        startActivity(fourth);

    Activity2

        String cont = addr.getText().toString();
        Intent fifth = new Intent(Second.this, Third.class);
        fifth.putExtra("list5", cont);
        startActivity(fifth);

        String db = dob.getText().toString();
        Intent sixth = new Intent(Second.this, Third.class);
        sixth.putExtra("list6", db);
        startActivity(sixth);

        String nation = citizen.getText().toString();
        Intent Seventh = new Intent(Second.this, Third.class);
        Seventh.putExtra("list7", nation);
        startActivity(Seventh);

        String subject = course.getText().toString();
        Intent Eight = new Intent(Second.this, Third.class);
        Eight.putExtra("list8", subject);
        startActivity(Eight);

    Activity3

        TextView first = (TextView) findViewById(R.id.textView2);
        String fieldone = getIntent().getStringExtra("list1");
        first.setText(fieldone);

        TextView second = (TextView) findViewById(R.id.textView3);
        String fieldtwo = getIntent().getStringExtra("list2");
        second.setText(fieldtwo);

        TextView third = (TextView) findViewById(R.id.textView4);
        String fieldthree = getIntent().getStringExtra("list3");
        third.setText(fieldthree);

        TextView fourth = (TextView) findViewById(R.id.textView5);
        String fieldfour = getIntent().getStringExtra("list4");
        fourth.setText(fieldfour);

        TextView fifth = (TextView) findViewById(R.id.textView6);
        String fieldfive = getIntent().getStringExtra("list5");
        fifth.setText(fieldfive);

        TextView sixth = (TextView) findViewById(R.id.textView7);
        String fieldsix = getIntent().getStringExtra("list6");
        sixth.setText(fieldsix);

        TextView seventh = (TextView) findViewById(R.id.textView8);
        String fieldseven = getIntent().getStringExtra("list7");
        seventh.setText(fieldseven);

        TextView eight = (TextView) findViewById(R.id.textView3);
        String fieldeight = getIntent().getStringExtra("list8");
        eight.setText(fieldeight);
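    A hedged sketch of the usual fix, not the poster's code: each startActivity() call launches a separate screen carrying only its own extra, so put all values into one Intent per hop and forward the accumulated extras. The keys and class names below match the post; the rest is illustrative. (Note also that Activity3 binds both fieldtwo and fieldeight to R.id.textView3, so the eighth value would overwrite the second even once the extras arrive.)

        // Activity1: send all four values in a single Intent
        Intent next = new Intent(AssessmentActivity.this, Second.class);
        next.putExtra("list1", fname.getText().toString());
        next.putExtra("list2", lname.getText().toString());
        next.putExtra("list3", mob.getText().toString());
        next.putExtra("list4", email.getText().toString());
        startActivity(next);

        // Activity2: forward what arrived, then add this screen's own fields
        Intent toThird = new Intent(Second.this, Third.class);
        toThird.putExtras(getIntent().getExtras()); // carries list1..list4 along
        toThird.putExtra("list5", addr.getText().toString());
        toThird.putExtra("list6", dob.getText().toString());
        toThird.putExtra("list7", citizen.getText().toString());
        toThird.putExtra("list8", course.getText().toString());
        startActivity(toThird);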

    Read the article

  • Query total page count via SNMP HP Laserjet

    - by Tim
    I was asked to get hold of the total page counts for the 100+ printers we have at work. All of them are HP Laser or Business Jets of some description, and the vast majority are connected via some form of HP JetDirect network card/switch. After many hours of typing in IP addresses and copying and pasting the relevant figure into Excel, I have now been asked to do this on a weekly basis. This led me to think there must be an easier way; as an IT professional I can surely work out some time-saving method to solve this issue. Suffice it to say I do not feel very professional now, after a day or so of trying to make SNMP work for me!

    From what I understand, the first thing is to enable SNMP on the printer. Done. Next I would need something to query the SNMP bit. I decided to go open source and free, and someone here recommended net-snmp as a decent tool (I would like to have just added the printers as nodes in SolarWinds, but we are somewhat tight on licences apparently). Next I need the name of the MIB. For this I believe HP-LASERJET-COMMON-MIB has the correct information in it. I downloaded this and added it to net-snmp. Now I need the OID, which I believe after much scouring is printed-media-simplex-count (we have no duplex printers, that we are interested in at least). Running the following command yields the following demoralising output:

        snmpget -v 2c -c public 10.168.5.1 HP-LASERJET-COMMON-MIB:.1.3.6.1.2.1.1.16.1.1.1
        Unlinked OID in HP-LASERJET-COMMON-MIB: hp ::= { enterprises 11 }
        Undefined identifier: enterprises near line 3 of C:/usr/share/snmp/mibs/HP-LASERJET-COMMON-MIB..txt
        HP-LASERJET-COMMON-MIB:.1.3.6.1.2.1.1.16.1.1.1:

    (The OID was derived from running:

        snmptranslate -IR -On printed-media-simplex-count
        Unlinked OID in HP-LASERJET-COMMON-MIB: hp ::= { enterprises 11 }
        Undefined identifier: enterprises near line 3 of C:/usr/share/snmp/mibs/HP-LASERJET-COMMON-MIB..txt
        .1.3.6.1.2.1.1.16.1.1.1

    )

    Am I barking up the wrong tree completely with this? My aim was to script it all to output to a file for all the IP addresses of the printers and then plonk that in Excel for my lords and masters to digest at their leisure. I have a feeling I am using either the wrong MIB or the wrong OID from said MIB (or both). Does anyone have any pointers on this for me? Or should I give up and go back to navigating each printer's web page individually (hoping not)?
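    One hedged possibility, worth verifying on a single printer first: most networked printers expose a lifetime page counter in the standard Printer-MIB (prtMarkerLifeCount), which avoids the HP-specific MIB entirely. A sketch of the weekly script, assuming that OID answers on these JetDirect cards:

        #!/bin/sh
        # printers.txt holds one IP per line; output is a CSV for Excel.
        # .1.3.6.1.2.1.43.10.2.1.4.1.1 is prtMarkerLifeCount from the standard
        # Printer-MIB; confirm it with snmpwalk on one printer before trusting it.
        for ip in $(cat printers.txt); do
            count=$(snmpget -v 2c -c public -Ovq "$ip" .1.3.6.1.2.1.43.10.2.1.4.1.1)
            echo "$ip,$count"
        done > pagecounts.csv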

    Read the article

  • Exchange 2010 setup /prepareAD fails to run

    - by MadBoy
    I've tried installing Exchange 2010 on Windows Server 2008 R2 (only domain controller and all-in-one system). I ran setup.exe /prepareAD and setup /prepareSchema, and it worked fine the first time I did it. Unfortunately, I then hit a problem with the Hub Transport installation related to (at least from what I've read) IPv6 being disabled (some say disabling it helped them, while others say enabling it did). I did it the proper way, using the registry entry to disable IPv6, but it still errored out. So I managed to uninstall everything (renamed some old entries in the registry of the failed Hub Transport role) and tried to reinstall Exchange after rebooting the server. Unfortunately, running setup /prepareAD now gives an error:

        D:\>setup /PrepareAd

        Welcome to Microsoft Exchange Server 2010 Unattended Setup

        By continuing the installation process, you agree to the license terms of
        Microsoft Exchange Server 2010. If you don't accept these license terms,
        please cancel the installation. To review these license terms, please go to
        http://go.microsoft.com/fwlink/?LinkId=150127&clcid=0x409/

        Press any key to cancel setup................
        No key presses were detected. Setup will continue.

        Preparing Exchange Setup

            Copying Setup Files            ......................... COMPLETED

        No server roles will be installed

        Performing Microsoft Exchange Server Prerequisite Check

            Organization Checks            ......................... COMPLETED

        Setup is going to prepare the organization for Exchange 2010 by using
        'Setup /PrepareAD'. No Exchange 2007 server roles have been detected in
        this topology. After this operation, you will not be able to install any
        Exchange 2007 server roles.

        Configuring Microsoft Exchange Server

            Organization Preparation       ......................... FAILED
            The following error was generated when "$error.Clear();
            buildToBuildUpgrade -ExsetDataAtom -AtomName OrgLevelCt
            -DomainController $RoleDomainController" was run: "An error occurred
            with error code '2147504140' and message 'The data type can't be
            converted to or from a native Active Directory data type.'.".

        The Exchange Server setup operation did not complete. Visit
        http://support.microsoft.com and enter the Error ID to find more
        information.

        Exchange Server setup encountered an error.

    Unfortunately, if I rerun the setup it complains that setup /prepareAD needs to be run first. Basically, all that works now is setup /PrepareSchema, and setup /PrepareDomain complains that prepareAD wasn't done.

    For full information, I'm also attaching the error I had before I uninstalled everything and tried again:

        Hub Transport Role
        Failed

        Error:
        The following error was generated when "$error.Clear();
        install-ExsetdataAtom -AtomName SharedMachineSettings
        -DomainController $RoleDomainController" was run: "An error occurred
        with error code '2147950640' and message 'There is no such object on
        the server.'.".

        An error occurred with error code '2147950640' and message 'There is
        no such object on the server.'.

    Read the article

  • Windows Server 2003 Terminal Server does not give out all available licenses

    - by Erwin Blonk
    I installed the Terminal Server role on Windows Server 2003 Standard 64-bit. Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I rebooted the server, to no effect.

    Before this, there was another server (same Windows, except that it is 32-bit) active as a licensing server. I removed the role from it first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, with licenses and all, acts as if it has only the 2-license RDP. The servers are all stand-alone; there is no Active Directory set up. Both servers are in different workgroups.

    Update (4/12/10): The server has changed the entries in Terminal Server Licensing a few times. After installing the licenses it added an entry whose exact phrasing I forget, but it was about temporary Windows 2003 device licenses. Later it added "Windows Server 2003 - TS Per Device CAL". The temporary pool held 2 licenses (standard RDP licenses, I think) and the other held 10. At some point, seemingly unrelated to the testing we did, it used a license from the new pool. This morning, 2 licenses were used from the pool of 10 and only 1 from the temporary/RDP pool (I wish I had screenshots to show; it changed every few hours, or so it seems). Although I had already activated the server over the internet, and re-activated it, I decided to go through the whole procedure by phone. Long story short, here is what it says now:

        Existing Windows 2000 Server, type: built-in [no licenses used; I add it for the sake of being complete]
        Windows Server 2003 - Terminal Server Per Device CAL Token, type: open [none of 10 used]
        Windows Server 2003 - TS Per Device CAL, type: open [3 of 10 used]

    As I tried to explain, this is the end result after a few changes, most of which I can't directly connect to any action on my part. Only going through the activation procedure by phone seemed to directly affect the TS, resulting in the above configuration. Still, it is impossible to connect with more than 3 people, which is 1 up from the 2 that could connect yesterday. TS does say 7 licenses are available, yet it won't give them out.

    Read the article

  • ESXI Crash need help to understand log and support about nexentastor on virtual machine

    - by Bgnt44
    If I understand right, the following core dump means that CPU 4 crashed the host, and if I read the next lines, it seems that at the time CPU 4 was assigned to the NexentaStor VM... so if I'm right, I can say that the NexentaStor VM crashed my ESXi. Am I right? Can this core dump provide me any more information?

        2012-11-14T03:48:01.046Z cpu4:6089)0x41221f25ba08:[0x41803007abff]PanicvPanicInt@vmkernel#nover+0x56 stack: 0x3000000008, 0x41221f25ba
        2012-11-14T03:48:01.046Z cpu4:6089)0x41221f25bae8:[0x41803007b4a7]Panic@vmkernel#nover+0xae stack: 0x2e067c00000010, 0x0, 0x1f25bb38,
        2012-11-14T03:48:01.047Z cpu4:6089)0x41221f25bc18:[0x4180300a7823]TLBDoInvalidate@vmkernel#nover+0x45a stack: 0xca, 0x0, 0x0, 0x0, 0x0
        2012-11-14T03:48:01.047Z cpu4:6089)0x41221f25bc68:[0x418030489e17]UserMem_CartelFlush@<None>#<None>+0xce stack: 0xcaa0b, 0x0, 0x0, 0x4
        2012-11-14T03:48:01.047Z cpu4:6089)0x41221f25bd78:[0x41803048ab91]UserMemUnmapStateCleanup@<None>#<None>+0x58 stack: 0x0, 0x41221f25bd
        2012-11-14T03:48:01.047Z cpu4:6089)0x41221f25be58:[0x41803048b97d]UserMemUnmap@<None>#<None>+0x104 stack: 0x41221f267000, 0x41221f25bf
        2012-11-14T03:48:01.048Z cpu4:6089)0x41221f25be98:[0x41803048bf20]UserMem_Unmap@<None>#<None>+0xe3 stack: 0x426, 0x0, 0x41221f25bef8,
        2012-11-14T03:48:01.048Z cpu4:6089)0x41221f25beb8:[0x4180304a5985]UW64VMKSyscallUnpackReleasePhysMemMap@<None>#<None>+0x18 stack: 0x10
        2012-11-14T03:48:01.048Z cpu4:6089)0x41221f25bef8:[0x418030476791]User_LinuxSyscallHandler@<None>#<None>+0x17c stack: 0x41803004cc70,
        2012-11-14T03:48:01.048Z cpu4:6089)0x41221f25bf18:[0x4180300a82be]User_LinuxSyscallHandler@vmkernel#nover+0x19 stack: 0x3ffe63bed80, 0
        2012-11-14T03:48:01.049Z cpu4:6089)0x41221f25bf28:[0x418030110064]gate_entry@vmkernel#nover+0x63 stack: 0x10b, 0x0, 0x0, 0x426, 0xcf76
        2012-11-14T03:48:01.049Z cpu4:6089)VMware ESXi 5.1.0 [Releasebuild-799733 x86_64] PCPU 1 locked up. Failed to ack TLB invalidate (total of 1 locked up, PCPU(s): 1).
        2012-11-14T03:48:01.050Z cpu4:6089)cr0=0x80010031 cr2=0xcaa0b750 cr3=0x197d7b000 cr4=0x42768
        2012-11-14T03:48:01.050Z cpu4:6089)pcpu:0 world:6111 name:"vmm0:Windows_2012_-_SQL" (V)
        2012-11-14T03:48:01.050Z cpu4:6089)pcpu:1 world:6032 name:"vmm0:Windows_2012_-_AD" (V)
        2012-11-14T03:48:01.050Z cpu4:6089)pcpu:2 world:6098 name:"vmm0:Windows_2012_-_App" (V)
        2012-11-14T03:48:01.050Z cpu4:6089)pcpu:3 world:4099 name:"idle3" (IS)
        2012-11-14T03:48:01.050Z cpu4:6089)pcpu:4 world:6089 name:"vmx-vcpu-0:NexentaStor" (U)
        2012-11-14T03:48:01.050Z cpu4:6089)pcpu:5 world:6134 name:"vmm0:Ubuntu_-_NGINX" (V)
        2012-11-14T03:48:01.050Z cpu4:6089)pcpu:6 world:4102 name:"idle6" (IS)
        2012-11-14T03:48:01.050Z cpu4:6089)pcpu:7 world:4103 name:"idle7" (IS)
        2012-11-14T03:48:01.050Z cpu4:6089)@BlueScreen: PCPU 1 locked up. Failed to ack TLB invalidate (total of 1 locked up, PCPU(s): 1).

    Read the article

  • Macbook Pro - Randomly sleeps and won't wake up

    - by James
    All, I have a MacBook Pro 13" (mid 2009) that has had a long-standing issue which seems to be getting worse. Occasionally, I will go to wake the computer with the keyboard and can't wake it. The HDD spins up, the light on the front of the computer stops blinking, but as soon as it seems like the display should light up, the HDD stops and the light begins blinking again. More rarely, the computer will suddenly sleep while I am using it and then enter the same sleep loop. The only way to resume working on the computer is to wait. Doing a hard restart just puts it right back into the 'sleep loop'. Here is an excerpt from kernel.log showing the laptop's apparent narcolepsy:

        Jun 5 22:20:40 james-hales-macbook-pro kernel[0]: Wake reason: OHC1
        Jun 5 22:20:40 james-hales-macbook-pro kernel[0]: Previous Sleep Cause: 5
        Jun 5 22:20:40 james-hales-macbook-pro kernel[0]: The USB device Apple Internal Keyboard / Trackpad (Port 6 of Hub at 0x4000000) may have caused a wake by issuing a remote wakeup (2)
        Jun 5 22:20:40 james-hales-macbook-pro kernel[0]: HID tickle 31 ms
        Jun 5 22:20:41 james-hales-macbook-pro kernel[0]: 00000000 00000020 NVEthernet::setLinkStatus - not Active
        Jun 5 22:20:45 james-hales-macbook-pro kernel[0]: MacAuthEvent en1 Auth result for: 20:4e:7f:48:c0:ef MAC AUTH succeeded
        Jun 5 22:20:45 james-hales-macbook-pro kernel[0]: wlEvent: en1 en1 Link UP
        Jun 5 22:20:45 james-hales-macbook-pro kernel[0]: AirPort: Link Up on en1
        Jun 5 22:20:45 james-hales-macbook-pro kernel[0]: en1: BSSID changed to 20:4e:7f:48:c0:ef
        Jun 5 22:20:46 james-hales-macbook-pro kernel[0]: AirPort: RSN handshake complete on en1
        Jun 5 22:20:48 james-hales-macbook-pro kernel[0]: 00000000 00000020 NVEthernet::setLinkStatus - not Active
        Jun 5 22:20:54 james-hales-macbook-pro kernel[0]:
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: Wake reason: OHC1
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: Previous Sleep Cause: 5
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: The USB device Apple Internal Keyboard / Trackpad (Port 6 of Hub at 0x4000000) may have caused a wake by issuing a remote wakeup (2)
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: wlEvent: en1 en1 Link DOWN
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: AirPort: Link Down on en1. Reason 4 (Disassociated due to inactivity).
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: HID tickle 26 ms
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: 00000000 00000020 NVEthernet::setLinkStatus - not Active
        Jun 5 22:20:58 james-hales-macbook-pro kernel[0]: MacAuthEvent en1 Auth result for: 20:4e:7f:48:c0:ef MAC AUTH succeeded
        Jun 5 22:20:58 james-hales-macbook-pro kernel[0]: wlEvent: en1 en1 Link UP
        Jun 5 22:20:58 james-hales-macbook-pro kernel[0]: AirPort: Link Up on en1
        Jun 5 22:20:58 james-hales-macbook-pro kernel[0]: en1: BSSID changed to 20:4e:7f:48:c0:ef
        Jun 5 22:20:58 james-hales-macbook-pro kernel[0]: AirPort: RSN handshake complete on en1
        Jun 5 22:21:02 james-hales-macbook-pro kernel[0]: 00000000 00000020 NVEthernet::setLinkStatus - not Active
        Jun 5 22:21:08 james-hales-macbook-pro kernel[0]:

    I have tried resetting the SMC and reinstalling Lion (short of erasing and installing), to no avail. The Genius Bar has insisted that the problem would be resolved by reinstalling Lion (which they did, but it didn't fix anything; still insisting...). Please don't say "logic board." Thoughts?

    Read the article

  • VMware Player loses internet connectivity

    - by Martha
    Periodically, the internet simply stops working in my virtual machine, and the only way I can get it working again is to restart the host computer. Since I use the virtual machine specifically for testing web pages, this is, shall we say, a bother.

    Details: I have Windows XP Pro running in VMware Player (v. 3.0.0 build-203739) on a Windows 7 host. It's set to NAT (shared IP address) because the firewall won't allow a bridged connection. Every couple of days or so, first the internet slows down to a crawl, then eventually it stops working altogether. Both VMware and the virtual OS report that they are connected, everything looks just peachy, and I can reach the internet from the host, but on the VM all web pages time out and/or report that the server could not be found. (Browser-independent; tried with IE, FF, Chrome, Safari, and Opera.)

    When this happens, the only way I've found to restore the internet connectivity is to restart the host machine. Restarting the VM doesn't help, nor does refreshing network connections on either the host or the guest. (Although I'm not entirely sure I've found the proper way to refresh a network connection in Windows 7...)

    I have not noticed any predictability about when the problem occurs, i.e. it's not immediately after I do anything special. It seems to occur mostly after putting the host to sleep once or twice, but it has happened even when the host has been in continuous use. It also seems independent of when I start using the VM: sometimes I wake up the VM and the internet is really slow in it, then eventually stops working altogether; other times I wake up the VM, use it perfectly happily for a while, then suddenly the internet is gone.

    Does anyone know why this is occurring? Failing that, is there a workaround that's less drastic than restarting the host? (Windows 7 startup times are blazingly fast compared to previous versions of Windows, but it's still a hassle to close all my programs and reopen them again.)

    Edit: while badges overall are nice, the Tumbleweed badge isn't helping me to solve my problem. Hasn't anyone encountered anything even remotely similar?

    Read the article

  • Too many Tunnel Adapter Interfaces

    - by Tomas Lycken
    If I open a command prompt on my machine and type ipconfig /all, I see lots of:

        Tunnel adapter Local Area Connection* 9:

           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Microsoft 6to4 Adapter #5
           Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes

    In fact, there are so many that my "real" adapters are pushed out of the stack and can't be seen anymore. Is there any flag I can use on ipconfig to hide all virtual interfaces? Or is there some other way around this problem? Since they always say "Media disconnected" I suppose disabling could be an option, but if possible I'd rather not turn any functionality off. I just want to control what output I get from ipconfig. Also, I know these are related to IPv6 stuff. However, most of what I find on Google merely states what these are and that they're harmless; nothing about hiding/removing them.
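    For reference, if disabling does turn out to be acceptable after all, the usual approach is to turn off the IPv6 transition technologies that create these tunnel interfaces. A hedged sketch (this removes the adapters rather than merely hiding them; run from an elevated prompt):

        rem disable the 6to4, Teredo and ISATAP transition interfaces
        netsh interface 6to4 set state disabled
        netsh interface teredo set state disabled
        netsh interface isatap set state disabled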

    Read the article

  • /server-status shows over 240 requests like "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    - by Stefan Lasiewski
    Some details:

        Webserver: Apache/2.2.13 (FreeBSD) mod_ssl/2.2.13 OpenSSL/0.9.8e
        OS: FreeBSD 7.2-RELEASE

    This is a FreeBSD jail. I believe I use the Apache 'prefork' MPM (I run the default for FreeBSD), and I use the default value for MaxClients (256). I have enabled mod_status, with "ExtendedStatus On". When I view /server-status, I see a handful of regular requests. I also see over 240 requests from the localhost, like these:

        37-0 - 0/0/1 . 0.00 1510 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        38-0 - 0/0/1 . 0.00 1509 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        39-0 - 0/0/3 . 0.00 1482 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        40-0 - 0/0/6 . 0.00 1445 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0

    I also saw about 2417 requests yesterday from the localhost, like these:

        Apr 14 11:16:40 192.168.16.127 httpd[431]: www.example.gov 127.0.0.2 - - [15/Apr/2010:11:16:40 -0700] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    The page at http://wiki.apache.org/httpd/InternalDummyConnection says "These requests are perfectly normal and you do not, in general, need to worry about them", but I'm not so sure. Why are there over 240 of these? Are these active connections? If I have "MaxClients 256" and over 240 of these connections, it seems that my webserver is dangerously close to running out of available connections. It also seems like Apache should only need a handful of these "internal dummy connections". We actually had two unexplained outages last night, and I am wondering if these "internal dummy connections" caused us to run out of available connections.

    UPDATE 2010/04/16: It is 8 hours later. The /server-status page still shows 243 lines which say "www.example.gov OPTIONS *". I believe these connections are not active. The server is mostly idle (1 request currently being processed, 9 idle workers). There are only 18 active httpd processes on the Unix host. If these connections are not active, why do they show up under /server-status? I would have expected them to expire a few minutes after they were initialized.
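    In the meantime, a common recipe (hedged: it only keeps the dummy connections out of the access log and does not explain the 240 scoreboard entries) is to tag them by source address and exclude them from logging:

        # dummy connections here arrive from 127.0.0.2, per the log excerpt above;
        # the log path and format name are illustrative
        SetEnvIf Remote_Addr "127\.0\.0\.2" internal_dummy
        CustomLog /var/log/httpd-access.log combined env=!internal_dummy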

    Read the article

  • Windows 7 extremely slow login, exchange performance, printer enumeration, etc...

    - by Jeff
    Background: I have a fresh copy of Windows 7 Professional x64 on a Dell Latitude E6500. The laptop has 8GB RAM, a 250GB drive, and all Intel peripherals (net/wifi/graphics). All available Windows updates, as well as hardware drivers, are installed. The IT folks where I work joined the computer to our Windows 2003-based Active Directory domain. There are no errors in any logs that we've looked at, and Group Policy templates appear to have applied properly.

    Problem: Every time I turn on or reboot the computer, it takes between 2 and 10 minutes (all times are actual) after successfully typing my username/password to get to my desktop. My login script does not always run. Sometimes I get a black screen, and a couple of minutes later the login script will pop up and take up to 10 minutes to complete. I can get around this by hitting Ctrl-Shift-Esc and running explorer.exe from the Task Manager. The login script continues to hang, but I can minimize it and go on about my business. Either way, it generally throws errors prior to completing.

    I often get slow or failed connectivity to Exchange via Outlook. When I bring up printer dialogs, they take several minutes to populate, and block the calling app while doing so. Copies to SMB shares are very slow. On my home network, everything works fine. On both the work network and home network, I can use remote internet resources just fine. Web pages pull up, remote VPNs are fine, and I can max out bandwidth on the SpeakEasy Speed Test. I can get almost max bandwidth transferring FTP/HTTP over a LAN. Another symptom of the problem is that when I first log in, the work network shows as "Identifying" for a long time in the Network and Sharing Center, and will often then change to the name of the work domain but say "Unauthenticated Network". Note that this computer previously ran Windows Vista with none of these problems.

    Attempts to fix:

    - Installed the Win7 admin pack
    - Uninstalled/reinstalled all hardware drivers
    - Verified Active Directory DNS settings (Vista works relatively well on the same network)
    - Reset all TCP/IP settings on all adapters using the netsh commands to do so
    - Disabled IPv6 on all adapters
    - Disabled the wifi adapter while on the work network
    - Locked the network card to 100/Full and 1000/Full; also tried Auto
    - Added various important addresses to the hosts file (exchange, dns, ad); removed them when that didn't help
    - Changed my background to a JPEG (sounds unrelated, but there is apparently a Win7 login bug related to solid-color backgrounds)
    - More I have forgotten

    The IT staff at my company indicated they believe this is due to having Windows 2003 AD servers and not having any Windows 2008 R2 AD servers. Other than that, they have no advice or assistance to offer other than a rebuild (already tried that once, with similar symptoms) or a downgrade to Vista. Any thoughts out there?

    Read the article

  • IIS 7.5 FTPS external access - 534 Policy requires SSL

    - by markmnl
    I have set up an FTP site that requires SSL, but when I try to connect to it externally I get the error:

        220 Microsoft FTP Service
        534 Policy requires SSL.

    I know; I set it so! Why doesn't it fetch the SSL cert from the site and allow me to log on?! (Incidentally, beware of all the tutorials that Allow but do not Require SSL; while that will "solve" the problem, it will be because SSL is not being used!) I suspect it may be that I need a client that supports FTPS (FTP over SSL), and Windows Explorer just uses IE, which does not. But trying FileZilla and WinSCP I get a little further, and then it hangs on TLS/SSL negotiation, expecting a response from the server...

    UPDATE: I have tried (from http://learn.iis.net/page.aspx/309/configuring-ftp-firewall-settings/):

    - Configuring the passive port range for the FTP service.
    - Configuring the external IPv4 address for the specific FTP site.
    - Configuring the firewall to allow the FTP service to listen on all ports that it opens.
    - Disabling stateful FTP filtering so that Windows Firewall will not block FTP traffic.

    And still I get (in FileZilla, trying both Active and Passive):

        Status: Connecting to 203.x.x.x:21...
        Status: Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command: AUTH TLS
        Response: 234 AUTH command ok. Expecting TLS Negotiation.
        Status: Initializing TLS...
        Error: Connection timed out
        Error: Could not connect to server

    The Windows Firewall logs unhelpfully have nothing to say.

    UPDATE 2: Turning the firewall off does not resolve the problem. I cannot believe how difficult it is to get something so simple to work, and even after following the documentation it does not work.

    UPDATE 3: Running FileZilla locally, connecting through the loopback, works in Active mode. In Passive mode I get up to:

        Command: LIST
        Response: 150 Opening BINARY mode data connection.
        Error: GnuTLS error -53: Error in the push function.

    Turning the firewall off at both ends, I can still not connect the client and get the same error as above.
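    For reference, a hedged sketch of the server-side settings that usually matter for FTPS behind a firewall (the 50000-50100 range is illustrative, not from the post): because the control channel is encrypted, stateful FTP inspection can no longer follow the PASV reply, so the data channel needs a fixed passive range that is also opened in the firewall:

        rem set a fixed passive data-channel port range for the FTP service
        appcmd set config /section:system.ftpServer/firewallSupport /lowDataChannelPort:50000 /highDataChannelPort:50100

        rem open that range in Windows Firewall
        netsh advfirewall firewall add rule name="FTPS passive data" dir=in action=allow protocol=TCP localport=50000-50100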

    Read the article

  • Installing OpenLDAP on Fedora 12: ldap_bind: Invalid credentials (49)

    - by Arcturus
    Hello. I've been trying to set up the OpenLDAP installed by default on Fedora 12, very unsuccessfully. My ultimate goal is to use LDAP authentication for user login and Apache, using the OpenLDAP server running on the same machine. The server is running, but the error I always get when I try to use ldapsearch or ldapadd is:

        ldap_bind: Invalid credentials (49)

    I've been following these tutorials, but none of them helped me:

    - http://www.howtoforge.com/openldap_fedora7
    - http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-ldap-quickstart.html
    - http://www.howtoforge.com/linux_ldap_authentication
    - http://docs.fedoraproject.org/deployment-guide/f12/en-US/html/s1-ldap-pam.html
    - http://www.openldap.org/doc/admin24/quickstart.html

    First, some components were already installed, and I installed these with yum:

        yum install openldap-servers openldap-devel

    Then I created a basic slapd.conf file in /etc/openldap:

        database bdb
        suffix "dc=sniejana-sandbox,dc=com"
        rootdn "cn=root,dc=sniejana-sandbox,dc=com"
        rootpw {SSHA}cxdz55ygPu4T3ykg7dgu+L0VRvsFSeom
        directory /var/lib/ldap/sniejana-sandbox.com

    I obtained the rootpw with this command:

        slappasswd -s changeme

    I also created the /var/lib/ldap/sniejana-sandbox.com directory and made sure the entire contents of /var/lib/ldap were owned by the ldap user.

    I found two ldap.conf files, one in /etc and one in /etc/openldap. I don't know which is the right one. If I understood correctly, this file is used to configure the client. I put this in both:

        HOST localhost
        BASE dc=sniejana-sandbox,dc=com

    I then ran the server with:

        service slapd start

    It said OK. Most of the tutorials above say to use the command ldapsearch -D "cn=Manager,dc=my-domain,dc=com" -W to ensure that everything's working. When I execute this command, a password prompt appears, and after entering the password, I get the error:

        ldapsearch -D "cn=root,dc=sniejana-sandbox,dc=com" -W
        Enter LDAP password:
        ldap_bind: Invalid credentials (49)

    The same thing happens when trying to use ldapadd. I tried with an encrypted and an unencrypted password in slapd.conf; it doesn't change anything. Adding -x for simple authentication doesn't change anything either.

    netstat -ap confirms the server is listening:

        tcp        0      0 *:ldap       *:*       LISTEN      4148/slapd
        tcp        0      0 *:ldap       *:*       LISTEN      4148/slapd

    ps -ef | grep slapd confirms the process is running:

        ldap      4148     1  0 15:22 ?        00:00:00 /usr/sbin/slapd -h ldap:/// -u ldap

    Running slaptest produces "config file testing succeeded".

    I read somewhere that the command ldapsearch -x -b '' -s base '(objectclass=*)' namingContext can confirm the server is running. It appears to work:

        # extended LDIF
        #
        # LDAPv3
        # base <> with scope baseObject
        # filter: (objectclass=*)
        # requesting: namingContext
        #

        # dn:

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    I'm running out of ideas. Am I missing something obvious?
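    One hedged sanity check worth running (an assumption about the cause, not a known fix): regenerate the hash, paste it into slapd.conf verbatim, restart slapd, and then bind with -x and the password on the command line, so that prompt handling and SASL are both ruled out:

        slappasswd -s changeme          # paste the output into rootpw, exactly
        service slapd restart           # slapd only reads slapd.conf at startup
        ldapsearch -x -D "cn=root,dc=sniejana-sandbox,dc=com" -w changeme \
                   -b "dc=sniejana-sandbox,dc=com" '(objectclass=*)'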

    Read the article

  • How to calculate CPU % based on raw CPU ticks in SNMP

    - by bjeanes
    According to http://net-snmp.sourceforge.net/docs/mibs/ucdavis.html#scalar_notcurrent, ssCpuUser, ssCpuSystem, ssCpuIdle, etc. are deprecated in favor of the raw variants (ssCpuRawUser, etc.). The former values (which don't cover things like nice, wait, kernel, interrupt, etc.) returned a percentage value:

        The percentage of CPU time spent processing user-level code, calculated over the last minute. This object has been deprecated in favour of 'ssCpuRawUser(50)', which can be used to calculate the same metric, but over any desired time period.

    The raw values return the "raw" number of ticks the CPU spent:

        The number of 'ticks' (typically 1/100s) spent processing user-level code. On a multi-processor system, the 'ssCpuRaw*' counters are cumulative over all CPUs, so their sum will typically be N*100 (for N processors).

    My question is: how do you turn the number of ticks into a percentage? That is, how do you know how many ticks there are per second (it's "typically", which implies not always, 1/100s, which either means one tick every 100 seconds or that a tick represents 1/100th of a second)? I imagine you also need to know how many CPUs there are, or you need to fetch all the CPU values and add them together. I can't seem to find a MIB that gives you an integer value for the number of CPUs, which makes the former route awkward. The latter route seems unreliable because some of the numbers overlap (sometimes). For example, ssCpuRawWait has the following warning:

        This object will not be implemented on hosts where the underlying operating system does not measure this particular CPU metric. This time may also be included within the 'ssCpuRawSystem(52)' counter.

    Some help would be appreciated. Everywhere seems to just say that the percentage objects are deprecated because the metric can be derived, but I haven't found anywhere that shows the official standard way to perform this derivation. The second component is that these "ticks" seem to be cumulative rather than measured over some time period. How do I sample values over some time period?

    The ultimate information I want is: % of user, system, idle, nice (and ideally steal, though there doesn't seem to be a standard MIB for this) "currently" (over the last 1-60s would probably be sufficient, with a preference for smaller time spans).
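    The usual derivation (hedged: this is common practice, not an official standard I can point to) is to sample every raw counter twice, take the per-counter deltas over the interval, and divide each delta by the sum of all the deltas; both the tick rate and the CPU count then cancel out of the ratio. A bash sketch against the UCD MIB (host, community string and the 10-second window are all illustrative):

        #!/bin/bash
        # sample user/system/nice/idle twice, 10s apart, and print percentages
        get() {
            snmpget -v2c -c public -Ovq localhost \
                UCD-SNMP-MIB::ssCpuRawUser.0 UCD-SNMP-MIB::ssCpuRawSystem.0 \
                UCD-SNMP-MIB::ssCpuRawNice.0 UCD-SNMP-MIB::ssCpuRawIdle.0
        }
        a=($(get)); sleep 10; b=($(get))
        total=0
        for i in 0 1 2 3; do d[i]=$(( b[i] - a[i] )); total=$(( total + d[i] )); done
        echo "user=$((100*d[0]/total))% system=$((100*d[1]/total))%" \
             "nice=$((100*d[2]/total))% idle=$((100*d[3]/total))%"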

    Read the article

  • Reality behind wireless security - the weakness of encrypting

    - by Cawas
    I welcome better key-wording here, both on tags and title, and I'll add more links as soon as possible.

    For some years I've been trying to conceive of a wireless environment that I'd set up anywhere and advise for everyone, from big enterprises to small home networks of one machine. I've always had the feeling that using any of the so-called "wireless security" methods is actually bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering.

    I understand it's a natural way of thinking. With cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical way. In a way, encrypting is for wireless what walling (building walls) is for cables, and giving pass-phrases is adding a door with a key. But cabling without encryption is also insecure: someone just needs to plug in and get your data! And while I can see the use for encrypting data, I don't think it's a security measure in wireless networks. As I said elsewhere, I believe we should encrypt only sensitive data, regardless of wires. And passwords should be added to the users, always, not to the wifi. For securing files, truly, the best solution is backup.

    Sure, all that doesn't happen that often, but I won't consider the many situations where people just don't care. I think there are enough situations where people actually do care about using passwords on their OS users, so let's go with that in mind. To be able to break the walls or the door, someone will need proper equipment, such as a hammer or a master key of some kind. The same is true for breaking the wireless walls in the analogy. But I'd say true data security lies elsewhere.

    I keep promoting the Fonera concept as an instance. It opens up a free wifi port, if you choose so, and anyone can connect to the internet through that without having any access to your LAN. It also uses QoS, which will never let your bandwidth drop because of that public usage. That's security, and it's open. And who doesn't want to be able to use the internet freely anywhere you can find wifi spots? I have 3G myself, but that's beside the point here. If I have wifi at home, I want to let people use it freely for internet, so as not to be a hypocrite, and even guests can easily access my files, just with read access, so I don't need to keep setting up encryption and pass-phrases that are not wholly compatible.

    I'll probably be bashed for promoting the non-usage of WPA2 with AES or whatever, but I wanted to know from more experienced (super)users out there: what do you think? Is there really a need for encryption to have true wireless security?

    Read the article

  • Setting up a transparent SSL proxy

    - by badunk
    I've got a Linux box set up with two network cards to inspect traffic going through port 80. One card is used to go out to the internet; the other one is hooked up to a network switch. The point is to be able to inspect all HTTP and HTTPS traffic on devices hooked up to that switch for debugging purposes. I've written the following rules for iptables:

        nat
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
        -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On 192.168.2.1:1337, I've got a transparent HTTP proxy using Charles (http://www.charlesproxy.com/) for recording. Everything's fine for port 80, but when I add similar rules for port 443 (SSL) pointing to port 1337, I get an error about an invalid message through Charles. I've used SSL proxying on the same computer before with Charles (http://www.charlesproxy.com/documentation/proxying/ssl-proxying/), but have been unsuccessful doing it transparently for some reason. Some resources I've googled say it's not possible; I'm willing to accept that as an answer if someone can explain why. As a note, I have full access to the described setup, including all the clients hooked up to the subnet, so I can accept self-signed certs by Charles. The solution doesn't have to be Charles-specific, since in theory any transparent proxy will do. Thanks!

    Edit: After playing with it a little, I was able to get it working for a specific host. When I modify my iptables to the following (and open 1338 in Charles for reverse proxying):

        nat
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.2.1:1338
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 1338
        -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    I am able to get a response, but with no destination host. In the reverse proxy, if I just specify that everything from 1338 goes to a specific host that I want to hit, it performs the handshake properly and I can turn on SSL proxying to inspect the communication. The setup is less than ideal, because I don't want to assume everything from 1338 goes to that host. Any idea why the destination host is being stripped? Thanks again.

    Read the article

  • Linux HA cluster w/Xen, Heartbeat, Pacemaker. domU does not failover to secondary node

    - by Kendall
    I am having the following problem with an OpenSuSE + Heartbeat + Pacemaker + Xen HA cluster: when the node a Xen domU is running on goes "dead", the domU is not restarted on the second node. The cluster is set up with two nodes, each running OpenSuSE-11.3, Heartbeat 3.0, and Pacemaker 1.0 in CRM mode. For storage I am using a LUN on an iSCSI SAN device; the LUN is formatted with OCFS2 and managed with LVM. The Xen domU has two logical volumes; one for root and the other for swap. I am using IPMI cards for STONITH devices, and a dedicated ethernet link for heartbeat communications. The ha.cf file is as follows: keepalive 1 deadtime 10 warntime 5 udpport 694 ucast eth1 auto_failback off node dhcp-166 node stage use_logd yes crm yes My resources look as follows: crm(live)configure# show node $id="5c1aa924-bba4-4f95-a367-6c9a58ac4a38" dhcp-166 node $id="cebc92eb-af24-4833-aaf0-672adf80b58e" stage primitive Xen-Util ocf:heartbeat:Xen \ meta target-role="Started" \ operations $id="Xen-Util-operations" \ op start interval="0" timeout="60" start-delay="0" \ op stop interval="0" timeout="120" \ params xmfile="/etc/xen/vm/xen-util" primitive my-stonith stonith:external/ipmi \ params hostname="dhcp-166" ipaddr="192.168.3.106" userid="ADMIN" passwd="xxx" \ op monitor interval="2m" timeout="60s" primitive my-stonith2 stonith:external/ipmi \ params hostname="stage" ipaddr="192.168.3.105" userid="ADMIN" passwd="xxx" \ op monitor interval="2m" timeout="60s" property $id="cib-bootstrap-options" \ dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \ cluster-infrastructure="Heartbeat" The Xen domU config file is as follows: name = "xen-util" bootloader = "/usr/lib/xen/boot/domUloader.py" #bootargs = "xvda1:/vmlinuz-xen,/initrd-xen" bootargs = "--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen" memory = 4096 disk = [ 'phy:vg_xen/xen-util-root,xvda1,w', 'phy:vg_xen/xen-util-swap,xvda2,w', ] root = "/dev/xvda1" vif = [ 'mac=00:16:3e:42:42:06' ] #vfb = [ 'type=vnc,vncunused=0,vnclisten=192.168.3.172' ] extra = "" Say domU "Xen-Util" is running on node "stage"; if "stage" goes down, "Xen-Util" does not restart on node "dhcp-166". It does seem to try: "xm list" will show it for a few seconds, and "xm console xen-util" will give a message like "copying /boot/kernel.gz from xvda1 to /var/lib/xen/tmp/kernel.a53gs for booting". However, it never gets past that, eventually gives up, and no longer appears in "xm list". Now, when node "stage" comes back online after being power cycled, it detects that "Xen-Util" isn't running, and starts it (on stage). I've tried starting "Xen-Util" on node "dhcp-166" without the cluster running, and it works fine. No problems. So I know it works in that respect. Any ideas? Thanks!
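
    Two things stand out in the configuration above and are offered here only as a sketch, not a verified fix: the Xen-Util primitive has start and stop operations but no monitor operation, so Pacemaker may never notice the domU has died, and a two-node cluster loses quorum the moment one node goes down, which by default stops it from starting resources anywhere. Something along these lines would address both (the interval and timeout values are assumptions), and it is also worth confirming on the surviving node that the shared volume group is active:

        # let a 2-node cluster keep acting after losing its peer, with fencing on
        crm configure property no-quorum-policy="ignore"
        crm configure property stonith-enabled="true"
        # add a monitor op to the domU resource so failures are actually detected,
        # e.g. via: crm configure edit Xen-Util
        #   op monitor interval="30s" timeout="60s"
        # and check that the shared LVM volumes are visible where failover lands
        vgchange -ay vg_xen && lvs vg_xen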

    Read the article

  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (e.g. 30 MB) are silently discarded. On the server side, everything looks as if the attached file was received with a size of 0 bytes. On the client side, everything looks as if it had been uploaded successfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged into the error log. An entry is logged into the access log as if everything was OK (a POST request and a 200 OK response). These uploads are being posted to a PHP script. In the PHP script, if I print_r($_FILES), I see the following information for the relevant file: [file5] => Array ( [name] => MOV023.3gp [type] => video/3gpp [tmp_name] => /tmp/phpgOdvYQ [error] => 0 [size] => 0 ) Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files. I've already changed the PHP directives upload_max_filesize to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since this is the time taken to parse the input data, not the time taken to upload it. Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP execution? Some limit in size or in upload time? I've read about a default 300-second timeout limit, but this should apply to the time the connection is idle, not the time it takes while actually transferring data, right? Needless to say, uploads under exactly identical conditions (including file format, client and everything) except smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
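
    Since the request is answered 200 OK with the file truncated to zero bytes before PHP reports any error, the usual suspects sit in front of PHP. A quick checklist, sketched with typical config paths that may differ on this system:

        # Apache's own request-body cap (0 means unlimited; small values bite big POSTs)
        grep -ri "LimitRequestBody" /etc/apache2 /etc/httpd 2>/dev/null
        # mod_security, if loaded, enforces a separate request-body limit of its own
        grep -ri "SecRequestBodyLimit" /etc/apache2 /etc/httpd /etc/modsecurity* 2>/dev/null
        # confirm the limits PHP actually runs with, not just what one php.ini says
        php -i | grep -E "upload_max_filesize|post_max_size|max_input_time"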

    Read the article

  • cPanel Virtfs won't umount

    - by JPerkSter
    Anyone have any experience with virtfs on cPanel servers? I can't seem to get these mounts to unmount, as umount claims they are not mounted: [root@Server ~]# cat /proc/mounts | grep user /dev/root /home/virtfs/user/lib ext3 rw,errors=continue,data=ordered 0 0 /dev/root /home/virtfs/user/opt ext3 rw,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/lib ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/sbin ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/share ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/bin ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/man ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/X11R6 ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/kerberos ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/libexec ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/local/bin ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/local/share ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/local/Zend ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/local/IonCube ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/include ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda3 /home/virtfs/user/usr/local/lib ext3 rw,nodev,errors=continue,data=ordered 0 0 /dev/sda2 /home/virtfs/user/var/spool ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0 /dev/sda2 /home/virtfs/user/var/lib ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0 /dev/sda2 /home/virtfs/user/var/cpanel ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0 /dev/sda2 /home/virtfs/user/var/run ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0 /dev/sda2 /home/virtfs/user/var/log ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0 /dev/sda6 /home/virtfs/user/tmp ext3 rw,nosuid,nodev,noexec,noatime,errors=continue,data=ordered 0 0 /dev/root /home/virtfs/user/bin ext3 rw,errors=continue,data=ordered 0 0 [root@Server ~]# for i in $(cat /proc/mounts | grep virtfs | grep user | awk '{print $2}'); do umount $i; done umount: /home/virtfs/user/lib: not mounted umount: /home/virtfs/user/opt: not mounted umount: /home/virtfs/user/usr/lib: not mounted umount: /home/virtfs/user/usr/sbin: not mounted umount: /home/virtfs/user/usr/share: not mounted umount: /home/virtfs/user/usr/bin: not mounted umount: /home/virtfs/user/usr/man: not mounted umount: /home/virtfs/user/usr/X11R6: not mounted umount: /home/virtfs/user/usr/kerberos: not mounted umount: /home/virtfs/user/usr/libexec: not mounted umount: /home/virtfs/user/usr/local/bin: not mounted umount: /home/virtfs/user/usr/local/share: not mounted umount: /home/virtfs/user/usr/local/Zend: not mounted umount: /home/virtfs/user/usr/local/IonCube: not mounted umount: /home/virtfs/user/usr/include: not mounted umount: /home/virtfs/user/usr/local/lib: not mounted umount: /home/virtfs/user/var/spool: not mounted umount: /home/virtfs/user/var/lib: not mounted umount: /home/virtfs/user/var/cpanel: not mounted umount: /home/virtfs/user/var/run: not mounted umount: /home/virtfs/user/var/log: not mounted umount: /home/virtfs/user/tmp: not mounted umount: /home/virtfs/user/bin: not mounted umount: /home/virtfs/user/dev: not mounted umount: /home/virtfs/user/proc: not mounted
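
    When /proc/mounts and umount disagree like this, a common cause is a stale /etc/mtab: umount consults mtab, while /proc/mounts reflects the kernel's actual state. Under that assumption, a workaround sketch; the lazy unmount asks the kernel directly, bypassing the stale bookkeeping:

        # compare what umount believes (mtab) with what the kernel reports
        diff <(grep virtfs /etc/mtab) <(grep virtfs /proc/mounts)
        # resync mtab from the kernel, then detach each entry lazily
        cat /proc/mounts > /etc/mtab
        for i in $(awk '/virtfs\/user/ {print $2}' /proc/mounts); do umount -l "$i"; done
        # cPanel also ships a purpose-built helper on versions that include it:
        # /scripts/clear_orphaned_virtfs_mounts --clearall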

    Read the article

  • Indirect Postfix bounces create new user directories

    - by hheimbuerger
    I'm running Postfix on my personal server in a data centre. I am not a professional mail host and not a Postfix expert; the server just handles a few domains. IIRC, I mostly followed this howto when setting up Postfix. Mail addressed to one of the domains the server manages is delivered locally (/srv/mail) to be fetched with Dovecot. Mail to other domains requires SMTPS. The mailbox configuration is stored in MySQL. The problem I have is that I suddenly found new mailboxes being created on the disk. Let's say I have the domain 'example.com'. Then I would find lots of new directories, e.g. /srv/mail/example.com/abenaackart /srv/mail/example.com/abenaacton etc. There are no entries for these addresses in my database, neither as a mailbox nor as an alias. It's clearly spam from auto-generated names. Most of them start with 'a', a few with 'b' and a couple of random ones with other letters. At first I was afraid of an attack, but all security restrictions seem to work. If I try to send mail to these addresses, I get a "Recipient address rejected: User unknown in virtual mailbox table" during the 'RCPT TO' stage. So I looked at the messages stored in these mailboxes. It turns out that all of them are bounces. It seems all of them were sent from a randomly generated name to an alias that really exists on my system, but pointed to an invalid destination address on another host. So Postfix accepted it, then tried to redirect it to another mail server, which rejected it. This bounced back to my Postfix server, which then accepted the bounce and stored it locally, because it appeared to originate from one of the addresses it manages. Example: My Postfix server handles the example.com domain. [email protected] is configured to redirect to [email protected]. [email protected] has since been deleted from the Hotmail servers. Spammer sends mail with FROM:[email protected] and TO:[email protected]. My Postfix server accepts the mail and tries to hand it off to hotmail.com. hotmail.com sends a bounce back. My Postfix server accepts the bounce and delivers it to /srv/mail/example.com/bob. The last step is what I don't want. I'm not quite sure what it should do instead, but creating hundreds of new mailboxes on my disk is not what I want... Any ideas on how to get rid of this behaviour? I'll happily post parts of my configuration, but I'm not really sure where to start debugging the problem at this point.
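
    If the goal is for such bounces never to be accepted in the first place, one commonly suggested approach is recipient address verification: Postfix probes the forwarding destination at RCPT time and rejects mail to aliases whose remote target is gone, so no bounce is generated and nothing gets stored locally. A sketch using real Postfix parameter names, untested against this particular setup:

        # probe forwarded recipients before accepting; dead targets are rejected up front
        postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, reject_unverified_recipient'
        # reject permanently (550) instead of the cautious default 450 "try again later"
        postconf -e 'unverified_recipient_reject_code = 550'
        postfix reload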

    Read the article
