Search Results

Search found 12222 results on 489 pages for 'initial context'.


  • Understanding When Social Interactions Should Be Resolved in Another Channel

    - by Christina McKeon
    Guest Blogger: Aphrodite Brinsmead, Senior Analyst at Ovum

    Agents need to respond to customers' social comments and questions quickly and in the right tone. But more importantly, they need to offer resolutions. Customers care most about how long it takes to find information rather than which channel they are using. They choose to use social media because they are comfortable with the channel and it offers a convenient way to communicate. Ideally agents will resolve questions within social media, but they need guidance as to how and when to escalate interactions to a more private channel.

    First, businesses should assess the way in which customers are using social media to communicate with them and categorize posts into groups: complaints, feedback, technical queries or more general support questions. They should then consider the types of interactions that can easily be handled within social media and those that need to be followed up in another channel. This will be very dependent on the industry. Examples of queries that can be resolved in social media include:

    - Shipping pricing and timeframes
    - Outage updates and resolution plans
    - Flight status information
    - Product stock check
    - Technical support videos or forum posts
    - Availability of facilities

    Both customers and agents need to be educated about the types of questions they can expect to resolve within social media. As the channel matures as a customer service tool, it needs to have value other than just as a forum for complaints.

    Social customer service agents need the power to start a web chat or phone call

    Any questions where customers need to divulge personal details in order to get a resolution will need to be addressed in a private channel: a private social message, web chat, email or phone call. Customers should never disclose their date of birth, social security number, credit card number, or healthcare records in a public forum. Flight issues, changes to a booking, billing queries or account updates will all need to be completed via a private interaction.

    Agents responding to questions on social media need the ability to start a web chat or phone call with the customer. The customer doesn't want to have to repeat their question, and the agent should be empowered to connect customer records and access account or billing information. These agents will need to be trained across different channels and should be able to view all customer communications in one application. They also need to follow up questions that began on a public forum in the initial channel, to make it clear that the issue was addressed. In order to make this possible, social media needs to be integrated as part of a broader customer service strategy.

    Irrespective of how many channels are used to complete an interaction, businesses should prioritize customer satisfaction and issue resolution. They need a clear strategy and trained agents who can handle and respond to social interactions.

    Follow me on Twitter @diteb.

    Read the article

  • Cannot boot from K/Ubuntu install disk on my UEFI system

    - by user93241
    I just got a new system and have been trying to get it set up w/ Win7 & Kubuntu dual-boot, but I've got a major problem. The BIOS of my motherboard (an Asus Crosshair 990FX) is strictly UEFI -- there is no legacy support mode available. I've been reading up on how to get Kubuntu installed in UEFI mode, but no matter what I try I cannot seem to even boot into my install CD/USB key properly.

    I can get as far as the selection screen ("Try Kubuntu", "Install Kubuntu"...) but this screen starts off not appearing correctly. If I try moving the cursor around it sometimes seems to correct itself and show me my choices. But once I select "Try Kubuntu" it starts loading, the screen goes black and then proceeds to flicker -- about once every 5-10 seconds or so. This continues indefinitely. I've tried this with both Kubuntu & Ubuntu installation media, even the AMD64+Mac Ubuntu variety that is supposed to be a lot more flexible w.r.t. UEFI.

    The only hint I've had that the system might have booted correctly is a little drum sound that plays when booting from the Ubuntu install disk. Well, that and the fact that when I hit my system's power button it seems to shut down correctly, even ejecting the CD at the end.

    This might be a video driver issue; my system has two nVidia 550's, one of which is attached to my primary monitor. (The secondary isn't hooked up yet.) I'll keep looking over similar questions but any advice would be greatly appreciated.

    UPDATE: I've tried booting into my 12.04 install CD twice now, each time using two different options supplied by my BIOS. One seemed to offer the ability to boot into my CD under UEFI mode -- this didn't even produce the initial boot menu. The other method offers the ability to boot into my CD NOT under UEFI mode. This DOES produce the boot menu, but after this point it seems I still cannot get to a proper video mode to see what's going on.
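
    Given the black-screen-and-flicker symptom, a hedged thing to try is booting the live session with kernel mode setting disabled, which often works around exactly this kind of video driver problem on nVidia cards. A minimal sketch, assuming a standard 12.04 install image (the menu keys and kernel line vary between the BIOS and UEFI boot paths):

        # At the install disk's menu, highlight "Try Kubuntu", press F6 (or 'e'
        # on a GRUB-style UEFI menu) and append nomodeset to the kernel line, e.g.:
        ... quiet splash nomodeset --

    If the live session then comes up, the same option can be kept for the installed system until a proper nVidia driver is in place.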

    Read the article

  • How can I improve the battery life under 12.04 on my Inspiron 14z? [duplicate]

    - by cfogelberg
    This question already has an answer here: Tips to extend battery life for laptops and notebooks (24 answers)

    How do I improve the battery life of my Inspiron 14z under Ubuntu 12.04? This laptop gets 4-5 hours of battery life using Windows (e.g. here). I've removed Windows, installed Ubuntu 12.04, and the initial battery life was only 2 hours. With some tweaks (described below) it's still only ~2.5 hours. For reference, the laptop is the latest model of the 14z:

    - i5-3337U processor
    - 32GB mSATA SSD, 500GB HDD (5400rpm)
    - AMD Radeon HD7570M graphics card

    I have put ext4 partitions on both the SSD and the HDD, and have mounted / on the SSD and /home on the HDD. I also put a 24GB Linux swap partition at the start of the HDD, though I figure this won't be used all that much (the laptop has 8GB of RAM).

    After googling around and reading Ask Ubuntu and other sites extensively, I have done the following steps, and they have improved the battery life by about 30 minutes (the exact improvement is not clear, but battery life is still nowhere near 4-5 hours):

    - Installed Jupiter (and set Performance to "Power Saving")
    - Installed laptop-mode-tools; cat /proc/sys/vm/laptop_mode now outputs 5 (previously it output 0), but it's not clear that this will help: AskUbuntu question
    - Turned down the brightness of my screen from full to 1/3

    Other things I have heard about but have not tried for fear of frying the laptop or my Linux install (see the sketch below for the first one):

    - Add "pcie_aspm=force" at the end of the line with "quiet splash" in /boot/grub/grub.cfg
    - Enable ALPM, but it may already be enabled in 12.04?
    - Enable i915 framebuffer compression
    - Use a proprietary driver for the graphics card?
    - Turn off the graphics card? (what would happen if I relied on the internal Intel bridge?)
    - Use TLP?
    - Spin down the HDD more aggressively (howto, but I think laptop-mode-tools does this already)

    The only other thing I've noticed is that the plastic just above the F5, F6 and F7 keys gets really hot. According to Jupiter my CPU temperature is only 69 Celsius, and the System Monitor shows CPU load at 7%, so I don't think it's the CPU. Maybe it's the graphics card?

    Also, I've set up MongoDB and LAMP on the machine as well. When I run powertop, MongoDB is high in the list, but I'm not sure if that's relevant to battery life because I'm not actually doing anything with MongoDB most of the time.

    Edit - Additional info as requested:

        $ lspci -nnk | grep -iEA3 "(graphics|vga)"
        00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
                Subsystem: Dell Device [1028:057f]
                Kernel driver in use: i915
                Kernel modules: i915
        --
        02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series] [1002:6841]
                Subsystem: Dell Device [1028:057f]
                Kernel driver in use: radeon
                Kernel modules: radeon
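
    On the first untried item above: rather than editing /boot/grub/grub.cfg directly (it is regenerated on every kernel update), the usual Ubuntu approach is to set the option in /etc/default/grub and rebuild. A minimal sketch, assuming stock GRUB 2 on 12.04:

        # /etc/default/grub -- add the option to the default kernel command line
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=force"

        # then regenerate grub.cfg and reboot
        sudo update-grub

    Forcing ASPM on hardware that does not advertise it can hang some machines, which is why caution is warranted; backing it out is just removing the option and re-running update-grub.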

    Read the article

  • [AJAX Numeric Updown Control] Microsoft JScript runtime error: The number of fractional digits is out of range

    - by Jenson
    If you have been using the AJAX Control Toolkit a lot (I will skip the parts on where to download it and how to configure it in Visual Studio 2010), you might have encountered some bugs or limitations of the controls, or rather, some weird behaviours. I would call them weird behaviours, though. Recently, I've been working with the AJAX NumericUpDown control, which I remember clearly was working fine without problems. In fact, I use 2 NumericUpDown controls this time. So I went on to configure them to be as simple as possible, and I will just use the default up and down buttons provided (so that I won't need to design my own).

    I have two textbox controls to display the values controlled by the updown controls. One for month, and another for year.

        <asp:TextBox ID="txtMonth" runat="server" CssClass="txtNumeric" ReadOnly="True" Width="150px" />
        <asp:TextBox ID="txtYear" runat="server" CssClass="txtNumeric" ReadOnly="True" Width="150px" />

    So I will now drop 1 NumericUpDown control for each of the textboxes.

        <asp:NumericUpDownExtender ID="txtMonth_NumericUpDownExtender"
            runat="server" TargetControlID="txtMonth" Maximum="12" Minimum="1" Width="152">
        </asp:NumericUpDownExtender>
        <asp:NumericUpDownExtender ID="txtYear_NumericUpDownExtender"
            runat="server" TargetControlID="txtYear" Width="152">
        </asp:NumericUpDownExtender>

    You noticed that I configured the Maximum and Minimum values for the first NumericUpDown control, but I never did the same for the second one (for txtYear). That's because it won't work, well, at least for me. So I removed the Minimum="2000" and Maximum="2099" from there. Then I configure the initial value to the current year, and let the year flow up and down freely. If you want, you can write code to restrict it (a sketch follows below). Here is the code I used on Page_Load:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If Not Page.IsPostBack Then
                If Trim(txtMonth.Text) = "" Then
                    Me.txtMonth.Text = System.DateTime.Today.Month
                End If
                If Trim(txtYear.Text) = "" Then
                    Me.txtYear.Text = System.DateTime.Today.Year
                End If
            End If
        End Sub

    Enjoy!
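
    On the "write code to restrict it" point, here is a hedged sketch of one way to clamp the year server-side; the 2000-2099 bounds and the ClampYear name are illustrative, not from the original post. Call it from Page_Load outside the IsPostBack check so it runs on every postback:

        ' Minimal sketch: keep txtYear inside an assumed 2000-2099 range.
        Private Sub ClampYear()
            Dim yearValue As Integer
            If Integer.TryParse(txtYear.Text, yearValue) Then
                If yearValue < 2000 Then yearValue = 2000
                If yearValue > 2099 Then yearValue = 2099
                Me.txtYear.Text = yearValue.ToString()
            End If
        End Sub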

    Read the article

  • Physics Engine [Collision Response, 2-dimensional] experts, help!! My stack is unstable!

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with the Projected Gauss-Seidel (PGS) iterative solver as described by Erin Catto (mentioned in the references of the paper as [Catt05]). So here's how it currently is:

    - The simulation handles 2-dimensional rotating convex polygons.
    - Detection uses the separating-axis test, with a SKIN, meaning the closest points between two polygons are detected and a contact is registered if their distance is less than SKIN.
    - To resolve collisions, the simultaneous impulse-based method is used. It is solved using an iterative solver (PGS solver) as in Erin Catto's paper.
    - Error correction is implemented using Baumgarte stabilization (you can refer to either paper for this), using J V = beta/dt * overlap, where J is the Jacobian for the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that had better be < 1, dt the time step taken by the engine, and overlap the overlap between the bodies (true overlap, so SKIN is ignored). A sketch of where this bias sits in the solver loop follows below.

    However, it is still less stable than I expected :s I tried to stack hexagons (or squares, doesn't really matter), and even with only 4 to 5 of them, they hardly stand still! Also note that I am not looking for a sleeping scheme. But I would settle if you have any explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state).

    Ideas I have: I would try adding a damping term (proportional to velocity) to the Baumgarte. Is this a good idea in general? If not, I would not want to waste my time trying to tune the parameter hoping it magically works.

    Ideas I have tried: using simultaneous position-based error correction as described in the paper in section 5.3.2; it turned out to be worse than the current scheme.

    If you want to know the parameters I used:

    - Hexagons, side 50 (pixels)
    - gravity 2400 (pixels/sec^2)
    - time step 1/60 (sec)
    - beta 0.1
    - restitution 0 to 0.2
    - coeff. of friction 0.2
    - PGS iterations 10
    - initial separation 10 (pixels)
    - mass 1 (unit is irrelevant for now, I modified velocity directly <- impulse method)
    - inertia 1/1000

    Thanks in advance! I really appreciate any help from you guys!! :)
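
    For reference, a minimal sketch of where the Baumgarte bias sits inside a PGS velocity iteration, in Python-style pseudocode; the names (relative_normal_velocity, normal_mass, accumulated_impulse, apply_normal_impulse) are illustrative, not from the engine above. Two details that often decide whether stacks stand still: clamp the accumulated impulse per contact rather than each iteration's increment, and only correct overlap beyond a small slop (part of the SKIN), so resting contacts are not constantly pushed apart:

        # Hedged sketch of one PGS pass over the normal constraints.
        def solve_normal_constraints(contacts, beta, dt, slop, iterations=10):
            for _ in range(iterations):
                for c in contacts:
                    vn = c.relative_normal_velocity()   # along normal; + = separating
                    # Baumgarte bias: push out only the overlap beyond the slop
                    bias = (beta / dt) * max(c.overlap - slop, 0.0)
                    raw = -c.normal_mass * (vn - bias)
                    # accumulate, then clamp the *total* impulse to non-attracting
                    old = c.accumulated_impulse
                    c.accumulated_impulse = max(old + raw, 0.0)
                    c.apply_normal_impulse(c.accumulated_impulse - old)

    Box2D-style solvers also clamp friction impulses against mu times the accumulated normal impulse, which matters for stacking as much as the normal clamping does.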

    Read the article

  • From 20,663 issues to 1 issue – style-copping C5.Tests

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/28/from-20663-issues-to-1-issuendashstyle-copping-c5.tests.aspx

    I recently became interested in the potential of the C5 Collections solution from http://www.itu.dk/research/c5/; however, I was dismayed at the state of the code in the unit test project, so I set about fixing the 20,663 issues detected by StyleCop.

    The tools I used were the latest versions of:

    - My 64-bit development PC running Windows 8 Update with 8GB RAM
    - Visual Studio 2013 Ultimate with SP2
    - ReSharper
    - GhostDoc Pro

    My first attempt had to be abandoned due to a collision of class names which broke one of the unit tests. So, being aware of this duplication of class names, I started again and planned to prepend the class names with the namespace name. In some cases I additionally prepended the item of the C5 collection that was being tested.

    So what was the condition of the code at the start? Besides the sprawl of C# code not written to StyleCop standard, there was:

    1) Placing of many classes within one physical file.
    2) Namespaces within namespaces that did not follow the project structure.
    3) As already mentioned, duplication of class names across namespaces.
    4) A copyright notice that sprawled but had to be preserved.
    5) Project sub-folders that were all lower case instead of initial-letter capitalised.

    The first step was to add a StyleCop heading, plus the original heading contained within a region, to every file. The next step was to run GhostDoc Pro using its "Document File" option on every file, but not letting it replace the headers I had added. This brought the number of issues down to 18,192. I then went through each file, collapsing each class and prepending names as appropriate. At each step, I saved the changes to my local Git.

    The next step was to move each class to its own file and to style-cop each file. ReSharper provides a very useful feature for doing this, which also fixes missing "this." and moves using statements inside the namespace. Some classes required minimal work whereas others required extensive work to reach the StyleCop standard. The unit tests were run at each split and when each class was completed.

    When all was done, one issue remained, which I will need to submit to the StyleCop team for their advice (and possibly a fix to StyleCop). The updated solution has been made available at https://c5stylecopped.codeplex.com/releases/view/122785.

    Read the article

  • how did Google Analytics kill my site?

    - by user1813359
    Yesterday I created a Google Analytics profile for one of my sites and included the JS block in the layout template. What happened next was very strange. Within about 2 minutes, the site had become unreachable.

    I had been checking the AWStats page for the site when I thought to set up GA. After that had been done, I clicked on the link for 404 stats, which opens in a new tab. It churned for a long while and then showed a nearly blank page, similar to the one Firefox shows when it chokes on a badly-formatted XML page, except there was no error msg. But I was logged into the server and could see that the page has an HTML 4.01 Transitional DTD. Strange! I tried viewing source but it just churned endlessly. I then tried "inspect element" and was able to see an error msg having to do with some internal Firefox lib. Unfortunately, I neglected to copy that. :-(

    All further attempts to load anything on the site would time out. Firebug's Net panel showed no request being made. Chrome would time out. So, I deleted the GA profile, removed the JS block, and cleared the server cache. No joy. I then removed all Google cookies and disabled JS. Still nothing. No luck in any other browser. And now my client couldn't access the site. Terrific.

    I was able to use wget while logged into another server. The retrieved page was fine, and did not contain the GA JS block. However, the two servers are on the same network. (Perhaps a clue.) The server itself was fine. Ping and traceroute looked great. I could SSH in. I tailed the access log and tried a browser request. Nothing. But I forgot to quit, and a minute or so later I saw a request from someone else being logged. Later, I could see that requests had been served all day to some people.

    Now, 24 hours later, the site works once again, but is still unreachable by the client (who is in another city). So, does anyone have some insight into what's going on? Does this have something to do with Google's CDN? I don't know very much about how GA works, but what I'm seeing reminds me of DNS propagation issues. And why the initial XML error? And why the heck was the site just plain unreachable? What did Google do to my site?!

    Sorry for the length but I wanted to cover everything.

    Read the article

  • How are software projects 'typically' managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is that we have what we call a pipeline, which consists of particular stops:

        [Source] - [QC] - [Production]

    At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. You can think of this as analogous to moving some web pages from the 'test' server to the 'live' server, where QC personnel can bang on the system and complain that the developer has it all wrong ;-) Once QC signs off that the changes are working, the system again moves the code along to the next stage, where additional testing is performed, etc.

    I have been searching the internet for a few days trying to find how this process is accomplished anywhere else -- I have read a bit about builds, automated testing, various ALM products, etc., but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc.

    Can anyone point me to any resources or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative.

    To keep things consistent, let's say that we have a project called Calculator, to which we want to add support for the basic trigonometric functions: sine, cosine and tangent. I'm open to reorganizing the company however we need to in order to accomplish due diligence testing, and we can suppose that any tools are available for use (if that helps to illustrate the process).

    To start things off, I think I understand this much:

    - we document the requirements, e.g.: support sine, cosine and tangent functions
    - we create some type of change request/work order to assign to programming
    - coding takes place, commits are made to version control
    - peer review commences
    - programmer marks the work order as completed?

    ... now what? How does QC do their thing? Would they perform testing before closing the 'work order'?

    Read the article

  • Should we persist with an employee still writing bad code after many years?

    - by user94986
    I've been assigned the task of managing developers for a well-established company. They have a single developer who specialises in all their C++ coding (since forever), but the quality of the work is abysmal. Code reviews and testing have revealed many problems, one of the worst being memory leaks. The developer has never tested his code for leaks, and I discovered that the applications could leak many MBs with only a minute of use. Users were reporting huge slowdowns, and his take was, "it's nothing to do with me - if they quit and restart, it's all good again."

    I've given him tools to detect and trace the leaks, and sat down with him for many hours to demonstrate how the tools are used, where the problems occur, and what to do to fix them. We're 6 months down the track, and I assigned him to write a new module. I reviewed it before it was integrated into our larger code base, and was dismayed to discover the same bad coding as before. The part that I find incomprehensible is that some of the coding is worse than amateurish. For example, he wanted a class (Foo) that could populate an object of another class (Bar). He decided that Foo would hold a reference to Bar, e.g.:

        class Foo {
        public:
            Foo(Bar& bar) : m_bar(bar) {}

        private:
            Bar& m_bar;
        };

    But (for other reasons) he also needed a default constructor for Foo and, rather than question his initial design, he wrote this gem:

        Foo::Foo() : m_bar(*(new Bar)) {}

    So every time the default constructor is called, a Bar is leaked. To make matters worse, Foo allocates memory from the heap for 2 other objects, but he didn't write a destructor or copy constructor. So every allocation of Foo actually leaks 3 different objects, and you can imagine what happened when a Foo was copied. And - it only gets better - he repeated the same pattern on three other classes, so it isn't a one-off slip. The whole concept is wrong on so many levels.

    I would feel more understanding if this came from a total novice. But this guy has been doing this for many years and has had very focussed training and advice over the past few months. I realise he has been working without mentoring or peer reviews most of that time, but I'm beginning to feel he can't change. So my question is, would you persist with someone who is writing such obviously bad code?
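
    For readers wondering what a non-leaky version of that design looks like, a hedged sketch (one approach of several; the Bar stub is illustrative): make the ownership explicit, free only what Foo itself allocated, and forbid the copies that were multiplying the leaks.

        // Minimal sketch: Foo either borrows a caller's Bar or owns one it made.
        class Bar { /* stub for illustration */ };

        class Foo {
        public:
            explicit Foo(Bar& bar) : m_owned(0), m_bar(&bar) {}  // borrow: no ownership
            Foo() : m_owned(new Bar), m_bar(m_owned) {}          // own the default Bar
            ~Foo() { delete m_owned; }                           // free only what we own

        private:
            Foo(const Foo&);             // copying disabled (pre-C++11 style)
            Foo& operator=(const Foo&);  // so copies cannot leak or double-delete

            Bar* m_owned;  // non-null only when default-constructed
            Bar* m_bar;    // the Bar this Foo populates
        };

    With C++11, holding a std::unique_ptr<Bar> expresses the same intent and removes the hand-written destructor.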

    Read the article

  • laptop will not boot after attempting upgrade to 13.10

    - by naerwenya
    I wanted to update my OS to the new 14.04 LTS. I was running 12.04 LTS before and followed the advice that upgrades should be done in steps. I first upgraded to 12.10, which seemed to work fine, and later to 13.10. This was all done through the Software Manager.

    After the last upgrade, I am no longer able to boot up my computer. The GNU GRUB menu opens up, but after selecting Ubuntu, it just stalls with a blank purple screen. If I select one of the other kernels, it also stalls after "Loading initial ramdisk...". I can't get into the Recovery Menu, either.

    I'm still rather new to Linux and may have possibly made the situation worse. Unfortunately, nothing has worked yet. I tried reinstalling from a flash drive and on my first attempt, the wizard recognised a previous installation. Unfortunately, the wizard also didn't like how my partitions were set up (I didn't change anything) and gave an error before closing. Unfortunately, I didn't write the error down, but it was about the boot partition. On the next attempt and ever since, the installation wizard has stated that "This computer currently has no detected operating systems." This is strange, because I could see the disk and even access my files when booting up from the USB.

    At this point, I decided to back up my important files using Dropbox. Before losing all my files, I wanted to try the Boot-Repair tool, which also produced no results, and the files are no longer visible when booting from USB. The link to the Boot-Repair log is at http://paste.ubuntu.com/7457249/.

    If I then proceed through to the "Something else" installation option, I can see that the partitions still exist. This is what they look like:

        /dev/sda   free space (size indicated 1MB)
        /dev/sda1  efi   (33MB of 98MB used)
        /dev/sda2  efi   (352634MB of 746330MB used)
        /dev/sda3  swap  (3725MB, none used)
                   free space (0MB)

    Is there any way I might be able to get my computer to work and preserve my files as well?

    Read the article

  • Fortigate Remote VPN : no matching gateway for new request

    - by Kedare
    I am trying to configure a Fortigate 60C to act as an IPsec endpoint for remote VPN. I configured it like this:

        SCR-F0-FGT100C-1 # diagnose vpn ike config
        vd: root/0
        name: SCR-REMOTEVPN
        serial: 7
        version: 1
        type: dynamic
        mode: aggressive
        dpd: enable retry-count 3 interval 5000ms
        auth: psk
        dhgrp: 2
        xauth: server-auto
        xauth-group: VPN-group
        interface: wan1
        distance: 1
        priority: 0
        phase2s: SCR-REMOTEVPN-PH2 proto 0 src 0.0.0.0/0.0.0.0:0 dst 0.0.0.0/0.0.0.0:0 dhgrp 5 replay keep-alive dhcp
        policies: none

    Here is the configuration:

        config vpn ipsec phase1-interface
            edit "SCR-REMOTEVPN"
                set type dynamic
                set interface "wan1"
                set dhgrp 2
                set xauthtype auto
                set mode aggressive
                set proposal aes256-sha1 aes256-md5
                set authusrgrp "VPN-group"
                set psksecret ENC xxx
            next
        config vpn ipsec phase2-interface
            edit "SCR-REMOTEVPN-PH2"
                set keepalive enable
                set phase1name "SCR-REMOTEVPN"
                set proposal aes256-sha1 aes256-md5
                set dhcp-ipsec enable
            next
        end

    But when I try to connect from a remote device (I tested with an Android phone), the phone fails to connect and the Fortigate returns this error:

        2012-07-20 13:08:51 log_id=0101037124 type=event subtype=ipsec pri=error vd="root"
        msg="IPsec phase 1 error" action="negotiate" rem_ip=xxx loc_ip=xxx rem_port=1049
        loc_port=500 out_intf="wan1" cookies="xxx" user="N/A" group="N/A" xauth_user="N/A"
        xauth_group="N/A" vpn_tunnel="N/A" status=negotiate_error
        error_reason=no matching gateway for new request peer_notif=INITIAL-CONTACT

    I tried searching on the web, but I did not find anything relevant to this. Do you have any idea what the problem could be? I have tried many combinations of settings on the Fortigate without success.

    Read the article

  • Unable to Install SQL Server on Server 2012

    - by Jeff
    The problem

    I have been trying to install SQL Server 2012 on Windows Server 2012. I continually get the same error:

        Managed SQL Server Installer has stopped working

        Problem signature:
        Problem Event Name:       CLR20r3
        Problem Signature 01:     scenarioengine.exe
        Problem Signature 02:     11.0.3000.0
        Problem Signature 03:     5081b97a
        Problem Signature 04:     Microsoft.SqlServer.Chainer.Setup
        Problem Signature 05:     11.0.3000.0
        Problem Signature 06:     5081b97a
        Problem Signature 07:     18
        Problem Signature 08:     0
        Problem Signature 09:     System.IO.FileLoadException
        OS Version:               6.2.9200.2.0.0.272.79
        Locale ID:                1033
        Additional Information 1: c319
        Additional Information 2: c3196e5863e32e0baf269d62f56cbc70
        Additional Information 3: 422d
        Additional Information 4: 422d950c58f4efd1ef1d8394fee5d263

    What I've tried

    After initial googling, I've tried the following things:

    - Go through the list of hardware and software pre-reqs. All the software seems to be there by default on Server 2012 and my hardware meets the reqs.
    - Copy the installation media to the local drive and try to install from that (rather than a DVD). This produced the same error.
    - Based on another error message, I installed .NET 4.0 (which apparently is not on Server 2012 out of the box). Same error.
    - Install from the command line. This didn't work either, but it gave me a different error:

        Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly
        'Microsoft.SqlServer.Configuration.Sco, Version=11.0.0.0, Culture=neutral,
        PublicKeyToken=89845dcd8080cc91' or one of its dependencies. Strong name validation
        failed. (Exception from HRESULT: 0x8013141A) ---> System.Security.SecurityException:
        Strong name validation failed. (Exception from HRESULT: 0x8013141A)
           --- End of inner exception stack trace ---
           at Microsoft.SqlServer.Chainer.Infrastructure.InputSettingService.CheckForBooleanInputSettingExistenceFromCommandLine(ServiceContainer context, String settingName)
           at Microsoft.SqlServer.Chainer.Setup.Setup.DebugBreak(ServiceContainer context)
           at Microsoft.SqlServer.Chainer.Setup.Setup.Main()

    Any ideas what I am missing?

    Read the article

  • Windows Server 2003 R2 Terminal Server : Internet Explorer Enhanced Security won't disable for Users

    - by Tubs
    Internet Explorer Enhanced Security (IEES) won't disable using the normal method of turning it off from Add/Remove Programs / Windows Components. This came to light immediately after testing. IEES was disabled after Terminal Services were installed, for both admins and users, and after IE8 was installed.

    My initial thought was that there was some clash between IE8 and IE6 (which is the default on 2003 R2), so I uninstalled IE8 and reverted back to IE6. The same symptoms were displayed: when a normal user logged on, Internet Explorer Enhanced Security was enforced.

    I then thought it could be a problem with Terminal Server not recognising the removal, as IEES was on when it was initially installed. I uninstalled the Terminal Server components using the server roles, and then reactivated and deactivated IEES. Windows Server 2003 R2 allows a limited number of users to connect to RDP by default, so I logged on as a normal user, and IEES was disabled. I then reinstalled Terminal Server and logged on as a normal user. IEES was back enabled. Why is this?
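
    One hedged avenue to check (an assumption on my part, not something tried above): IE ESC is applied per user at logon through Active Setup, so stale component registrations can keep re-enforcing it for non-admin users even after the Windows Component is removed. The commonly documented locations are below; verify these GUIDs on your own server before touching anything, and note that profiles created while ESC was on may keep their HKCU copy until it is cleared.

        Windows Registry Editor Version 5.00

        ; IE hardening Active Setup components: the first GUID is commonly
        ; documented as the administrators' component, the second as the users'.
        ; IsInstalled = 0 means the enhanced security component is off.
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}]
        "IsInstalled"=dword:00000000

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}]
        "IsInstalled"=dword:00000000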

    Read the article

  • Outlook unable to synchronize SharePoint library - error 0x80004005

    - by DLux
    We have one large library (~10 GB) on SharePoint that cannot be synchronized with Outlook, even if you only attempt to sync one of the smaller sub-folders in the library. Other libraries (or other library sub-folders) work fine with Outlook. This is with MOSS 2007 SP1 and Outlook 2007 SP2. The error is:

        Task 'SharePoint' reported error (0x80004005): 'An error occurred either in
        Outlook or SharePoint. Contact the SharePoint site administrator.'

    Reproducing the error:

    1. Open up the large SharePoint document library in Internet Explorer
    2. From the Actions menu, select Connect to Outlook
    3. Select Allow on the stssync: security warning that pops up
    4. Outlook automatically tries an initial sync, and the sync status immediately shows the above error.

    Update 1: I verified the same issue occurs on Windows XP SP3 with IE6 using Outlook 2007 SP2 and the same SharePoint library (it was originally tested on Windows 7). The issue is definitely related to the library or Outlook.

    Update 2: Using stsadm I exported the site with this large document library (8.6 GB, 15,000 items) and imported it onto a development system. The result is the same on the development system - multiple clients are unable to connect Outlook to the library and get the 0x80004005 error above.

    Read the article

  • LDAP authentication issue with Kerio Connect

    - by djk
    We have Kerio Connect (mail server) running on a Windows Server 2003 server on a domain. In the webmail client, users are able to change their domain password. This functionality used to work fine until a user tried to change their password a few days ago, when every password they'd try would result in the webmail client claiming their password was "invalid". I spoke to Kerio about this and they claim that this error is returned by the domain controller, which supports my initial investigations. The error that the DC is logging when an attempt is made to change the password is this:

        80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece

    The "data 52e" part indicates that this is an "invalid credentials" error. I don't see how this can be, as I've tried (in the Kerio Connect configuration) various accounts that have privileges to modify accounts, including my own, as I am a domain admin. I have run 'dcdiag' (all tests) on the DC and it came back passing every single one of them. I've searched high and low for an answer to this and came up empty. Does anyone have any idea why this may have suddenly started happening? Thanks!

    Edit: I should mention that the passwords we are changing to do comply with the complexity policy.
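
    A hedged way to narrow this down (the hostnames and DNs below are placeholders, not from the setup above): test the exact bind credentials Kerio is configured with, independently of Kerio, from any machine with the OpenLDAP client tools; a clean simple bind rules the service account out. It is also worth remembering that Active Directory only accepts password modifications over an SSL/TLS-protected LDAP connection, so if the Kerio-to-DC link recently lost LDAPS (an expired DC certificate, say), failures confined to password changes would fit.

        # Verify the service account's bind by hand (prompts for the password)
        ldapsearch -x -H ldap://dc01.example.local \
            -D "CN=Kerio Service,CN=Users,DC=example,DC=local" -W \
            -b "DC=example,DC=local" "(sAMAccountName=testuser)" dn

        # Repeat over LDAPS if password changes are involved
        ldapsearch -x -H ldaps://dc01.example.local:636 \
            -D "CN=Kerio Service,CN=Users,DC=example,DC=local" -W \
            -b "DC=example,DC=local" "(sAMAccountName=testuser)" dn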

    Read the article

  • Apache is responding with a blank white page

    - by Bruno Araujo
    I have the following situation: a site hosted on Apache 2.4, with SSL, that has worked like a charm for a while now. But out of nowhere, without modifications to the site, Apache started serving random blank pages. The workaround is to delete the browser's cookies or restart the browser. I've switched the virtualhost to log in debug mode but it didn't get me anywhere. Here is the debug log of a failed page load:

        [Wed Oct 24 10:57:35.762547 2012] [ssl:info] [pid 27854:tid 140617706374912] [client 192.168.10.150:58917] AH01964: Connection to child 147 established (server xxx.com.br:443)
        [Wed Oct 24 10:57:35.762739 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(1966): [client 192.168.10.150:58917] AH02043: SSL virtual host for servername xxx.com.br found
        [Wed Oct 24 10:57:35.777479 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(1899): [client 192.168.10.150:58917] AH02041: Protocol: TLSv1, Cipher: DHE-RSA-AES256-SHA (256/256 bits)
        [Wed Oct 24 10:57:35.779912 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(243): [client 192.168.10.150:58917] AH02034: Initial (No.1) HTTPS request received for child 147 (server xxx.com.br:443)
        [Wed Oct 24 10:57:35.780044 2012] [authz_core:debug] [pid 27854:tid 140617706374912] mod_authz_core.c(809): [client 192.168.10.150:58917] AH01628: authorization result: granted (no directives)
        [Wed Oct 24 10:57:40.783950 2012] [ssl:info] [pid 27854:tid 140617706374912] (70007)The timeout specified has expired: [client 192.168.10.150:58917] AH01991: SSL input filter read failed.
        [Wed Oct 24 10:57:40.784077 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_io.c(988): [remote 192.168.10.150:58917] AH02001: Connection closed to child 147 with standard shutdown (server xxx.com.br:443)

    Read the article

  • Couldn't upload files to Sharepoint site while passing through Squid Proxy

    - by Ecio
    Hi all, we have this issue: one of our employees is collaborating with a supplier and he needs to upload documents to a SharePoint site hosted on the supplier's main site. In our environment we use a Squid proxy to allow people to navigate the net (we have NTLM authentication and users transparently authenticate while using IE and FF). It seems that this specific SharePoint site is using Integrated Windows Authentication only, and according to some research on the net this can have trouble with proxies. More specifically, we have tried two Squid versions:

    - With Squid 3.0 we are unable to log in to the site (the browser loads an empty page).
    - With Squid 2.7 (which supports "connection pinning") we are able to log in to the site and move around the different sections, BUT when we try to upload a file that is bigger than a couple of kilobytes (e.g. 10KB) the browser loads an error page (I think it's a 401 Unauthorized, but I must verify it).

    We've tried changing a couple of Squid options (in 2.7); what we got is that when you try to upload the file you get an authentication box (just like the initial login) and it refuses to go on even if you enter the same authentication credentials. What's really strange is that when you try to upload a small file (e.g. a 1KB text or binary file) the upload succeeds.

    I initially thought that maybe there was something misconfigured on their SharePoint site, but I've also tried this site: www.xsolive.com (it's a SharePoint 2007 demo site), and I've experienced the same problem. Has any of you experienced such behaviour? Thanks!

    Of course we've suggested that the supplier also activate Basic+SSL, and we're waiting for their reply.

    Read the article

  • Problem booting virtual machine after converting VMDK to VHD

    - by vg1890
    I used the VMWare VCenter Converter Standalone Client to convert a physical drive on my old PC to a virtual drive. The conversion worked fine and I ended up with a valid VMDK file. Next, I wanted to convert the VMDK to a VHD for use with Microsoft Virtual PC, since that's what I use on my new box. I used WinImage for the conversion and that worked fine, too. I can access the files from the virtual drive through WinImage.

    However, when I create a new virtual machine using Virtual PC and add the existing VHD file, the machine doesn't boot. The initial boot screen flashes with the amount of RAM and then the screen goes black. If I turn off the VM and reboot in safe mode I can see the drivers being loaded until eventually it gets to crcdisk.sys and hangs indefinitely.

    Any ideas how to fix this? I'm not opposed to starting over from scratch if there's another method to turn my physical machine into a Virtual PC VM. Thanks!

    EDIT - I should add that the virtual drive is a system boot drive and not a secondary drive.

    EDIT - I tried booting from the install CD and doing a repair. The result was that the system could not be repaired due to a "driver error."

    Read the article

  • URL Rewriting on GoDaddy Virtual Server

    - by Aristotle
    I migrated a Kohana2 application from a shared-hosting environment over to a virtual dedicated server. After this migration, I can't seem to get my .htaccess file working again. I apologize up front, but over the years I have never experienced so much frustration with anything else as I do with the dreaded .htaccess file.

    Presently I have my project installed immediately within a directory in my public folder:

        /var/html/www/info.php (general information about server)
        /var/html/www/logo.jpg (some flat file)
        /var/html/www/somesite.com/[kohana site exists here]

    So my .htaccess file is within that directory, and has the following contents:

        # Turn on URL rewriting
        RewriteEngine On

        # Installation directory
        RewriteBase /somesite.com/

        # Protect application and system files from being viewed
        # This is only necessary when these files are inside the webserver document root
        RewriteRule ^(application|modules|system) - [R=404,L]

        # Allow any files or directories that exist to be displayed directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d

        # Rewrite all other URLs to index.php/URL
        RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

        # Alternativly, if the rewrite rule above does not work try this instead:
        #RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

    This doesn't work. The initial controller is loaded, since index.php is called up implicitly when nothing else is in the URL. But if I try to load some other non-default controller, the site fails. If I place index.php back in the URL, the call to other controllers works just fine. I'm really at my wits' end, and would appreciate some direction here.
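
    A hedged first check on a fresh virtual dedicated server (an assumption, given the same rules worked on shared hosting): Apache's default config often ships with AllowOverride None, which makes it silently ignore .htaccess, and per-directory rewrites also need FollowSymLinks; the symptom of only the implicit index.php route working matches .htaccess not being read at all. A minimal sketch for the vhost or main config, using the document root above (and confirm mod_rewrite is loaded):

        # In the <VirtualHost> or main server config, then restart Apache
        <Directory "/var/html/www">
            Options +FollowSymLinks
            AllowOverride All
        </Directory>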

    Read the article

  • File transfer problems through VPN when Cisco IPS is enabled

    - by Richard West
    We have a Cisco ASA 5510 firewall with the IPS module installed. We have a customer that we must connect to via VPN to their network to exchange files via FTP. We use the Cisco VPN client (version 5.0.01.0600) on our local workstations, which are behind the firewall and subject to the IPS. The VPN client is successful in connecting to the remote site. However, when we start the FTP file transfer we are able to upload only 150K to 200K of data, then everything stops. A minute later the VPN session is dropped.

    I think I have isolated this to an IPS issue by temporarily disabling the service policy on the ASA for the IPS with the following command:

        access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 inactive

    After this command was issued I then established the VPN to the remote site and was successful in transferring the entire file. While still connected to the VPN and FTP session I issued the command to enable the IPS:

        access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0

    The file transfer was tried again and was once again successful, so I closed the FTP session and reopened it, while keeping the same VPN session open. This file transfer was also successful. This told me that nothing with the FTP programs was being filtered or causing the problem. Furthermore, we use FTP to exchange files with many sites every day without issue.

    I then disconnected the original VPN session, which was established while the access-list was inactive, and reconnected the VPN session, now with the access-list active. After starting the FTP transfer, the file stopped after 150K. To me this seems like the IPS is blocking, or somehow interfering with, the initial VPN setup to the remote site.

    This only started happening last week after the latest IPS signature updates were applied (sig version 407.0). Our previous sig version was 95 days old because the system was not auto-updating itself. Any ideas on what could be causing this problem?

    Read the article

  • How to auto-mount encfs volume on login in ubuntu 9.10

    - by xzenox
    Hi,

    Previously, in 9.04, I was using pam_mount in conjunction with encfs to mount an encrypted volume at login. This worked perfectly, and since the password was the same as my user password, none was entered besides the initial login one.

    Now in 9.10, using the same setup and the same volume line in pam_mount's config file, the volume will not mount. The folder does not even get created for the mount point. I am thinking this might be caused by the fact that I have now switched to using an encrypted /home directory (previously left unencrypted on 9.04). To encrypt it, I used the standard /home encryption setup from the 9.10 fresh install. I am thinking that perhaps pam_mount tries to mount the volume before /home is mounted, and fails. Is there a log file I could look into/post here?

    Note that mounting manually works fine using the same paths; writing full paths does not help, nor does removing the options attribute. Here's my volume entry:

        <volume user="nicholas" fstype="fuse" path="encfs#~/.dropbox_dir/Dropbox/encrypted" mountpoint="~/Dropbox" options="nonempty" />
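
    Two hedged observations, based on how pam_mount generally behaves rather than on this exact machine: pam_mount does not expand "~", so absolute paths or its %(USER) wildcard are safer; and with an ecryptfs-encrypted home, an encfs source inside ~ simply does not exist yet at the point in the PAM stack where pam_mount runs, so moving the encrypted directory outside the encrypted home sidesteps the ordering problem. A sketch of the volume line under those assumptions (the /crypt path is illustrative; the mountpoint has the same ordering caveat, so it may also need to live outside /home, or the PAM session entries may need reordering):

        <!-- encfs source outside the encrypted home; %(USER) is expanded by pam_mount -->
        <volume user="nicholas" fstype="fuse"
                path="encfs#/crypt/%(USER)/Dropbox/encrypted"
                mountpoint="/home/%(USER)/Dropbox" options="nonempty" />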

    Read the article

  • Windows or Linux for VPN-VPN Bridge

    - by James
    I have the following network layout:

        Network1 ----VPN1---- Network2 ----VPN2---- Network3

    I can administer everything in Network1 only, and my goal is to get to a box on Network3. I've been told by the admins of Network2 that it's not possible for them to route traffic from Network1 to Network3. I've finally been authorised to host a box in Network2 and I'm hoping with this I can set something up to resolve the issue.

    My question is: should I set this up as a Windows or a Linux box? My initial thought was to use iptables to reroute requests (see the sketch below), but with my lack of experience with Windows Server (used for something or other in Network2) I'm not sure if this will work. My head's full of questions like:

    - can I get an IP without logging in to a Windows domain?
    - if I do get an IP, do Windows Servers manage routing through the VPN?
    - can I make a Linux box authenticate with Windows Server to log on to the domain?
    - would it just be easier to set up a Windows box?
    - is it possible to configure a Windows box to do routing from Network1 to Network3?

    Has anyone done anything like this before? Had experience managing Windows Server? Authenticated (or not, as the case may be) to a Windows domain? I'd really appreciate your advice.

    It might be worth mentioning that the overall objective is to establish a telnet connection from a box on Network1 to a box on Network3.
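
    Since the end goal is a single telnet connection, the Linux variant can be very small. A hedged sketch, assuming the box in Network2 can reach both sides and using a placeholder address (10.3.0.5) for the Network3 target:

        # Enable forwarding, DNAT incoming telnet to the Network3 box, and
        # masquerade so replies return through this relay.
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING  -p tcp --dport 23 \
                 -j DNAT --to-destination 10.3.0.5:23
        iptables -t nat -A POSTROUTING -p tcp -d 10.3.0.5 --dport 23 \
                 -j MASQUERADE

    From Network1 you would then telnet to the relay box's own address; whether the relay joins the Windows domain only matters if Network2 requires it to get an address or VPN access in the first place.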

    Read the article

  • A network-related or instance-specific error occurred while establishing a connection to SQL Server

    - by sf
    Hi, I'm getting the following error when trying to load an ASP.NET MVC app on IIS 7 with SQL Server 2008 Express. The app uses LINQ to SQL.

        A network-related or instance-specific error occurred while establishing a
        connection to SQL Server. The server was not found or was not accessible.
        Verify that the instance name is correct and that SQL Server is configured
        to allow remote connections. (provider: Named Pipes Provider, error: 40 -
        Could not open a connection to SQL Server)

    I've done some searching and all answers point to enabling TCP connections in SQL Server Configuration Manager, which I have done to no avail. The connection string I am using is:

        Server=SERVERNAME\SQLEXPRESS;Database=DBName;Integrated Security=true

    The catch: I have another app that could already talk to the SQL Server just fine, even before playing around with the SQL Server configuration settings. The other app uses the following connection string:

        Data Source=SERVERNAME\SQLEXPRESS;Initial Catalog=OtherDbName;Integrated Security=True;Persist Security Info=False;Connect Timeout=120

    I've tried this connection string on the app that isn't working and it still doesn't work. Please help. I think I'm about to go crazy.
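
    A hedged way to split the problem: with Integrated Security=true, each IIS application pool connects as its own identity, so two apps with near-identical connection strings can still behave differently because different principals are connecting. Testing from the web server under a known identity isolates the network/instance part (server and database names below are the placeholders from the post):

        REM Run on the web server; -E connects as the current Windows identity
        sqlcmd -S SERVERNAME\SQLEXPRESS -E -d DBName -Q "SELECT 1"

    If that succeeds, compare the two apps' application pool identities in IIS and make sure the failing one has a SQL Server login and rights on DBName.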

    Read the article

  • python mysqldb - mysql server gone away - can't reconnect

    - by david.barkhuizen
    Hi folks,

    When attempting to import a bunch of data into MySQL tables using Python and MySQLdb, I run into the error '2006 - MySQL server has gone away', and then I am unable to reconnect again within the script.

    I am initially re-using a connection object across transactions (delineated by conn.commit()). When I first encounter this exception, if I create a new connection by calling MySQLdb.connect(), this new connection also fails with the same exception.

    This error does not occur immediately; I can pump a fair amount of data into the db, but then it faithfully occurs after I have inserted a couple thousand records. So, roughly once the db has committed a certain transaction volume, it always falls over like this. If I rerun the script, WITHOUT restarting the db server, it resumes where it left off, pumps in some data, then falls over again.

    Before recommendations to change time-out settings: does anyone know why I am not able to establish a new connection after the initial failure, even if I try a couple of times, waiting a couple of seconds between each?

    (btw, I'm running Windows 7, MySQL server 5.1.48, MySQLdb 1.2.3.gamma.1, Python 2.6)
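
    Two hedged suggestions, inferred from the symptoms rather than confirmed: error 2006 during bulk inserts is classically a single statement exceeding the server's max_allowed_packet, which would explain the failure arriving at roughly the same data volume every run; raising it in my.ini (e.g. max_allowed_packet=64M) or batching smaller inserts addresses that root cause. For the reconnect side, a sketch that builds a genuinely new connection object and verifies it, in case the dead handle is being reused somewhere (the credentials are placeholders):

        # Minimal sketch (Python 2.6 / MySQLdb): fresh connection with retries.
        import time
        import MySQLdb

        def fresh_connection(retries=3, delay=5):
            last_error = None
            for attempt in range(retries):
                try:
                    conn = MySQLdb.connect(host="localhost", user="user",
                                           passwd="secret", db="importdb")
                    conn.ping()          # raises if the link is already dead
                    return conn
                except MySQLdb.OperationalError as e:
                    last_error = e
                    time.sleep(delay)
            raise last_error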

    Read the article

  • How to diagnose disk errors when disk appears to be ok?

    - by Kylotan
    I have a six-month-old 1TB Seagate drive formatted into 2 NTFS partitions, and the disk appeared to be failing, with Windows dropping down from UDMA to PIO mode, reporting Delayed Write Errors, and hanging Explorer when browsing directories. My initial suspicion was that the disk was dying.

    However, on further examination it appears that Ubuntu, which doesn't write to the volume frequently like Windows does, was able to read the disk properly and retrieve all the data intact, saving me from having to use an older backup. Finally, running the SeaTools DOS diagnostic reported that the disk has no problems, i.e. no SMART errors and no bad sectors, apparently.

    This, in combination with the relative youth of the disk, suggests that something else is broken. The cable? The PSU? The integrated disk controller? But what would be a good way to diagnose the problem without risking damaging the data?

    I intend to extract the disk and try it in an external eSATA enclosure and see if the write errors cease, but in the event of the disk appearing to be fine, I would like to be able to confirm which part of the hardware is actually broken here, in order to know just what needs replacing.
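
    One hedged, read-mostly approach from the Ubuntu side (smartmontools package; the device name is an assumption): the SMART attribute UDMA_CRC_Error_Count is particularly useful here, because CRC errors implicate the cable, port or controller rather than the platters.

        # Read-only: dump SMART health, attributes and the drive's error log
        sudo smartctl -a /dev/sda

        # Run the drive's built-in self-test, then read the results
        sudo smartctl -t short /dev/sda
        sudo smartctl -l selftest /dev/sda

    A rising UDMA_CRC_Error_Count with clean self-tests usually points at the SATA cable or port; repeated read failures in the self-test log point back at the disk itself.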

    Read the article
