Search Results

Search found 40923 results on 1637 pages for 'so give me back my rep'.


  • Help! The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space

    - by michael.lukatchik
    We're running SQL Server 2000. In our database, we have an "Orders" table with approximately 750,000 rows. We can perform simple SELECT statements on this table. However, when we want to run a query like SELECT TOP 100 * FROM Orders ORDER BY Date_Ordered DESC, we receive the following message: Error: 9002, Severity: 17, State: 6 The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space. We have other tables in our database that are similar in size (i.e. around 700,000 records). On those tables, we can run any queries we'd like and we never receive a message about tempdb being full. To resolve this, we've backed up our database, shrunk the actual database, and also shrunk the database and files in the tempdb system database, but this hasn't resolved the issue. The size of our log file is set to autogrow. We're not sure where to go next. Any ideas why we still might be receiving this message?
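
    Since an ORDER BY over 750,000 rows sorts in tempdb, it's worth confirming that the tempdb log really is the constraint and then giving it room to grow. A minimal T-SQL sketch, not from the original question - the logical file name 'templog' is the SQL Server default and an assumption here, so check it with sp_helpfile first:

        -- Show percent-full for every database log, including tempdb
        DBCC SQLPERF(LOGSPACE);

        -- List tempdb's files and their logical names
        USE tempdb;
        EXEC sp_helpfile;

        -- Pre-size the tempdb log so large sorts don't exhaust it mid-query
        -- ('templog' assumes the default logical name)
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, SIZE = 500MB, FILEGROWTH = 100MB);

    Note also that autogrow only helps if the disk holding the tempdb log has free space and the file hasn't been capped with a small maximum size.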

    Read the article

  • Webforms vs. MVC. Once you start using MVC.. Do you ever go back to webforms?

    - by punkouter
    I checked out MVC months ago and didn't really get it.. but recently as I have become a better programmer I think it is making sense.. Here is my theory.. tell me if I got it right. In the 90s for Microsoft devs we had Classic ASP. This mixed VBScript and HTML on the same page, so you needed to create all the HTML yourself and mix HTML and VBScript. This was not considered ideal. Then .NET came along and everyone liked it because it was similar to event-driven VB6-style programming. It created this abstraction of binding data to ASP.NET server controls. It made getting enumerated data onto the screen easy with one line. Then recently jQuery and SOA concepts got mixed together.. Now people think.. why create this extra layer of abstraction when I can just directly use .NET as a data provider and use jQuery AJAX calls to get the data and create the HTML with it directly.. no need for the Webforms abstraction layer.. So we are back to creating HTML directly like we did in 1999. So MVC is all about saying: stop pretending like web programming is a VB6 app! Generate HTML directly! Am I missing anything? So I wonder.. for you people out there using MVC... is it the sort of thing that once you get used to it you never want to go back to webforms??

    Read the article

  • How do I separate functionality in JavaScript code to set a timeout?

    - by GIVE-ME-CHICKEN
    I have the following code: var comparePanel = $(__this.NOTICE_BODY); clearTimeout(__this._timeout); comparePanel.addClass(__this.VISIBLE); __this._timeout = setTimeout(function () { comparePanel.removeClass(__this.CL_VISIBLE); }, 3000); The following has been repeated a few times: __this._timeout = setTimeout(function () { comparePanel.removeClass(__this.CL_VISIBLE); }, 3000); I want to be able to do something like this: __this._timeout = setTimeout(comparePanel, 3000); How do I define and call that function? P.S. I am very, very new to JavaScript, so any explanation of what is going on is greatly appreciated.
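
    One way to remove the repetition - a sketch only, since the object that owns these properties isn't shown, and makeHidePanel/hidePanel are made-up names - is to build the callback once as a named function and hand setTimeout the function itself. setTimeout(comparePanel, 3000) doesn't work because comparePanel is a jQuery object, not a function:

        // Build the callback once; it closes over the panel and class name.
        function makeHidePanel(panel, className) {
            return function () {
                panel.removeClass(className);
            };
        }

        var comparePanel = $(__this.NOTICE_BODY);
        var hidePanel = makeHidePanel(comparePanel, __this.CL_VISIBLE);

        clearTimeout(__this._timeout);
        comparePanel.addClass(__this.VISIBLE);
        // Pass the function itself - no parentheses, or it would run
        // immediately instead of after 3 seconds.
        __this._timeout = setTimeout(hidePanel, 3000);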

    Read the article

  • What are possible results/side effects if replication between DCs in a Windows domain is unable to occur?

    - by hydroparadise
    There's plenty of administration literature out there about how to properly manage Windows servers. But in dealing with real life, things don't always occur like you want them to. In Microsoft's Windows Server 2003 Administrator's Companion, out of 1400+ pages, there's only one page that I could find when it comes to setting up additional domain controllers. They make it sound seamless and don't reveal a whole lot about what happens if "peer" DCs are unable to replicate. Down to the specific issue at hand: we had a DC go down about a month ago due to a bad RAID controller. There was nothing critical that warranted immediate attention, so bringing it back up got put on the back burner. A month later, we got the DC back up and running and everything seemed OK. The next day, nobody is able to log on, complaining that the "user does not exist" or "unable to establish a trust relationship". Knowing that I had just put the downed DC back on the network, I immediately took it back off the network and had everybody restart their workstations. After that, Exchange was fine, shares became available, and everybody was able to log in. After doing some event log swimming, it would appear that everything started due to replication issues on the SYSVOL. I've read that you can force replication, but that would mean putting the DC back on the network, and I am afraid to do that for fear that something else could go wrong. So, what other issues could one expect to run into where two DCs are unreplicated for over a month?
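
    Not part of the question, but for anyone weighing the same decision: a month offline is normally inside Server 2003's default tombstone lifetime (typically 60 days), so replication can usually still be forced safely, and the Windows Support Tools can report replication health from the command line before and after reconnecting. A sketch, assuming the Support Tools are installed:

        REM Summarize inbound replication partners and last-success times
        repadmin /showrepl

        REM Run the domain controller health checks, including replication
        dcdiag /test:replications

        REM Once the DC is safely back on the network, push a sync of all
        REM partitions across the enterprise
        repadmin /syncall /AdeP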

    Read the article

  • Sanitizing user input before adding it to the DOM in JavaScript

    - by I GIVE TERRIBLE ADVICE
    I'm writing the JS for a chat application I'm working on in my free time, and I need to have HTML identifiers that change according to user submitted data. This is usually something conceptually shaky enough that I would not even attempt it, but I don't see myself having much of a choice this time. What I need to do then is to escape the HTML id to make sure it won't allow for XSS or breaking HTML. Here's the code: var user_id = escape(id) var txt = '<div class="chut">'+ '<div class="log" id="chut_'+user_id+'"></div>'+ '<textarea id="chut_'+user_id+'_msg"></textarea>'+ '<label for="chut_'+user_id+'_to">To:</label>'+ '<input type="text" id="chut_'+user_id+'_to" value='+user_id+' readonly="readonly" />'+ '<input type="submit" id="chut_'+user_id+'_send" value="Message"/>'+ '</div>'; What would be the best way to escape id to avoid any kind of problem mentioned above? As you can see, right now I'm using the built-in escape() function, but I'm not sure of how good this is supposed to be compared to other alternatives. I'm mostly used to sanitizing input before it goes in a text node, not an id itself.
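
    For comparison with escape(), one common approach - sketched here as an illustration, not taken from the post - is to whitelist the characters that are always safe in an HTML id and encode everything else, which addresses both the XSS and the broken-markup concerns in one pass:

        // Keep letters, digits, hyphen and underscore; replace anything else
        // with its character code so distinct inputs remain distinct.
        function sanitizeId(id) {
            return String(id).replace(/[^a-zA-Z0-9_-]/g, function (ch) {
                return '_' + ch.charCodeAt(0) + '_';
            });
        }

        var user_id = sanitizeId(id);

    Whatever the escaping, the unquoted attribute in the snippet (value='+user_id+') should still get quotes around it, since unquoted attribute values are fragile even when the id itself is safe.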

    Read the article

  • Learning AngularJS by Example – The Customer Manager Application

    - by dwahlin
    I’m always tinkering around with different ideas and toward the beginning of 2013 decided to build a sample application using AngularJS that I call Customer Manager. It’s not exactly the most creative name or concept, but I wanted to build something that highlighted a lot of the different features offered by AngularJS and how they could be used together to build a full-featured app. One of the goals of the application was to ensure that it was approachable by people new to Angular since I’ve never found overly complex applications great for learning new concepts. The application initially started out small and was used in my AngularJS in 60-ish Minutes video on YouTube but has gradually had more and more features added to it and will continue to be enhanced over time. It’ll be used in a new “end-to-end” training course my company is working on for AngularjS as well as in some video courses that will be coming out. Here’s a quick look at what the application home page looks like: In this post I’m going to provide an overview about how the application is organized, back-end options that are available, and some of the features it demonstrates. I’ve already written about some of the features so if you’re interested check out the following posts: Building an AngularJS Modal Service Building a Custom AngularJS Unique Value Directive Using an AngularJS Factory to Interact with a RESTful Service Application Structure The structure of the application is shown to the right. The  homepage is index.html and is located at the root of the application folder. It defines where application views will be loaded using the ng-view directive and includes script references to AngularJS, AngularJS routing and animation scripts, plus a few others located in the Scripts folder and to custom application scripts located in the app folder. The app folder contains all of the key scripts used in the application. There are several techniques that can be used for organizing script files but after experimenting with several of them I decided that I prefer things in folders such as controllers, views, services, etc. Doing that helps me find things a lot faster and allows me to categorize files (such as controllers) by functionality. My recommendation is to go with whatever works best for you. Anyone who says, “You’re doing it wrong!” should be ignored. Contrary to what some people think, there is no “one right way” to organize scripts and other files. As long as the scripts make it down to the client properly (you’ll likely minify and concatenate them anyway to reduce bandwidth and minimize HTTP calls), the way you organize them is completely up to you. Here’s what I ended up doing for this application: Animation code for some custom animations is located in the animations folder. In addition to AngularJS animations (which are defined using CSS in Content/animations.css), it also animates the initial customer data load using a 3rd party script called GreenSock. Controllers are located in the controllers folder. Some of the controllers are placed in subfolders based upon the their functionality while others are placed at the root of the controllers folder since they’re more generic:   The directives folder contains the custom directives created for the application. The filters folder contains the custom filters created for the application that filter city/state and product information. The partials folder contains partial views. This includes things like modal dialogs used in the application. 
The services folder contains AngularJS factories and services used for various purposes in the application. Most of the scripts in this folder provide data functionality. The views folder contains the different views used in the application. Like the controllers folder, the views are organized into subfolders based on their functionality:   Back-End Services The Customer Manager application (grab it from Github) provides two different options on the back-end including ASP.NET Web API and Node.js. The ASP.NET Web API back-end uses Entity Framework for data access and stores data in SQL Server (LocalDb). The other option on the back-end is Node.js, Express, and MongoDB.   Using the ASP.NET Web API Back-End To run the application using ASP.NET Web API/SQL Server back-end open the .sln file at the root of the project in Visual Studio 2012 or higher (the free Express 2013 for Web version is fine). Press F5 and a browser will automatically launch and display the application. Using the Node.js Back-End To run the application using the Node.js/MongoDB back-end follow these steps: In the CustomerManager directory execute 'npm install' to install Express, MongoDB and Mongoose (package.json). Load sample data into MongoDB by performing the following steps: Execute 'mongod' to start the MongoDB daemon Navigate to the CustomerManager directory (the one that has initMongoCustData.js in it) then execute 'mongo' to start the MongoDB shell Enter the following in the mongo shell to load the seed files that handle seeding the database with initial data: use custmgr load("initMongoCustData.js") load("initMongoSettingsData.js") load("initMongoStateData.js") Start the Node/Express server by navigating to the CustomerManager/server directory and executing 'node app.js' View the application at http://localhost:3000 in your browser. Key Features The Customer Manager application certainly doesn’t cover every feature provided by AngularJS (as mentioned the intent was to keep it as simple as possible) but does provide insight into several key areas: Using factories and services as re-useable data services (see the app/services folder) Creating custom directives (see the app/directives folder) Custom paging (see app/views/customers/customers.html and app/controllers/customers/customersController.js) Custom filters (see app/filters) Showing custom modal dialogs with a re-useable service (see app/services/modalService.js) Making Ajax calls using a factory (see app/services/customersService.js) Using Breeze to retrieve and work with data (see app/services/customersBreezeService.js). Switch the application to use the Breeze factory by opening app/services.config.js and changing the useBreeze property to true. Intercepting HTTP requests to display a custom overlay during Ajax calls (see app/directives/wcOverlay.js) Custom animations using the GreenSock library (see app/animations/listAnimations.js) Creating custom AngularJS animations using CSS (see Content/animations.css) JavaScript patterns for defining controllers, services/factories, directives, filters, and more (see any JavaScript file in the app folder) Card View and List View display of data (see app/views/customers/customers.html and app/controllers/customers/customersController.js) Using AngularJS validation functionality (see app/views/customerEdit.html, app/controllers/customerEditController.js, and app/directives/wcUnique.js) More… Conclusion I’ll be enhancing the application even more over time and welcome contributions as well. 
Tony Quinn contributed the initial Node.js/MongoDB code which is very cool to have as a back-end option. Access the standard application here and a version that has custom routing in it here. Additional information about the custom routing can be found in this post.
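
    To give a flavor of the factory pattern the post references for Ajax calls, here's a minimal sketch - illustrative only, with a made-up module name, not the actual contents of app/services/customersService.js:

        angular.module('customersApp')
            .factory('customersService', ['$http', function ($http) {
                // Thin wrapper over the REST back-end; callers get a promise
                // that resolves to the customer array.
                return {
                    getCustomers: function () {
                        return $http.get('/api/customers')
                            .then(function (response) { return response.data; });
                    }
                };
            }]);

    A controller can then list customersService as an injected dependency and call customersService.getCustomers().then(...) to populate its scope.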

    Read the article

  • Redirect network logs from syslog to another file

    - by w0rldart
    I keep logging way too much info (not needed, for now) in my syslog, and not daily or hourly... but constantly. If I want to watch for something in my syslog I just can't, because the network log keeps interfering. So, how can I redirect these network logs to another file and/or stop logging them? Dec 10 17:01:33 user kernel: [ 8716.000587] MediaState is connected Dec 10 17:01:33 user kernel: [ 8716.000599] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:33 user kernel: [ 8716.000601] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:33 user kernel: [ 8716.000612] rt28xx_get_wireless_stats ---> Dec 10 17:01:33 user kernel: [ 8716.000615] <--- rt28xx_get_wireless_stats Dec 10 17:01:39 user kernel: [ 8722.000714] MediaState is connected Dec 10 17:01:39 user kernel: [ 8722.000729] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:39 user kernel: [ 8722.000732] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:39 user kernel: [ 8722.000747] rt28xx_get_wireless_stats ---> Dec 10 17:01:39 user kernel: [ 8722.000751] <--- rt28xx_get_wireless_stats Dec 10 17:01:44 user kernel: [ 8726.904025] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:01:45 user kernel: [ 8728.003138] MediaState is connected Dec 10 17:01:45 user kernel: [ 8728.003153] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:45 user kernel: [ 8728.003157] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:45 user kernel: [ 8728.003171] rt28xx_get_wireless_stats ---> Dec 10 17:01:45 user kernel: [ 8728.003175] <--- rt28xx_get_wireless_stats Dec 10 17:01:51 user kernel: [ 8734.004066] MediaState is connected Dec 10 17:01:51 user kernel: [ 8734.004079] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:51 user kernel: [ 8734.004082] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:51 user kernel: [ 8734.004096] rt28xx_get_wireless_stats ---> Dec 10 17:01:51 user kernel: [ 8734.004099] <--- rt28xx_get_wireless_stats Dec 10 17:01:57 user kernel: [ 8740.004108] MediaState is connected Dec 10 17:01:57 user kernel: [ 8740.004119] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:01:57 user kernel: [ 8740.004121] ==>rt_ioctl_giwfreq 11 Dec 10 17:01:57 user kernel: [ 8740.004132] rt28xx_get_wireless_stats ---> Dec 10 17:01:57 user kernel: [ 8740.004135] <--- rt28xx_get_wireless_stats Dec 10 17:01:57 user kernel: [ 8740.436021] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:03 user kernel: [ 8746.005280] MediaState is connected Dec 10 17:02:03 user kernel: [ 8746.005294] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:03 user kernel: [ 8746.005298] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:03 user kernel: [ 8746.005312] rt28xx_get_wireless_stats ---> Dec 10 17:02:03 user kernel: [ 8746.005315] <--- rt28xx_get_wireless_stats Dec 10 17:02:09 user kernel: [ 8752.004790] MediaState is connected Dec 10 17:02:09 user kernel: [ 8752.004804] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:09 user kernel: [ 8752.004808] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:09 user kernel: [ 8752.004821] rt28xx_get_wireless_stats ---> Dec 10 17:02:09 user kernel: [ 8752.004825] <--- rt28xx_get_wireless_stats Dec 10 17:02:15 user kernel: [ 8757.984031] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:15 user kernel: [ 8758.004078] MediaState is connected Dec 10 17:02:15 user kernel: [ 8758.004094] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:15 user kernel: [ 8758.004097] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:15 user kernel: [ 8758.004112] rt28xx_get_wireless_stats ---> Dec 10 17:02:15 user kernel: [ 8758.004116] <--- rt28xx_get_wireless_stats Dec 10 17:02:16 user kernel: [ 8759.492017] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 
17:02:19 user kernel: [ 8762.002179] SCANNING, suspend MSDU transmission ... Dec 10 17:02:19 user kernel: [ 8762.004291] MlmeScanReqAction -- Send PSM Data frame for off channel RM, SCAN_IN_PROGRESS=1! Dec 10 17:02:19 user kernel: [ 8762.025055] SYNC - BBP R4 to 20MHz.l Dec 10 17:02:19 user kernel: [ 8762.027249] RT35xx: SwitchChannel#1(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF1, K=0x02, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.170206] RT35xx: SwitchChannel#2(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF1, K=0x07, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.318211] RT35xx: SwitchChannel#3(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF2, K=0x02, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.462269] RT35xx: SwitchChannel#4(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF2, K=0x07, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.606229] RT35xx: SwitchChannel#5(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF3, K=0x02, R=0x02 Dec 10 17:02:19 user kernel: [ 8762.750202] RT35xx: SwitchChannel#6(RF=8, Pwr0=30, Pwr1=25, 2T), N=0xF3, K=0x07, R=0x02 Dec 10 17:02:20 user kernel: [ 8762.894217] RT35xx: SwitchChannel#7(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF4, K=0x02, R=0x02 Dec 10 17:02:20 user kernel: [ 8763.038202] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:20 user kernel: [ 8763.040194] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:20 user kernel: [ 8763.040199] BAR(1) : Tid (0) - 03a3:037e Dec 10 17:02:20 user kernel: [ 8763.040387] SYNC - End of SCAN, restore to channel 11, Total BSS[03] Dec 10 17:02:20 user kernel: [ 8763.040400] ScanNextChannel -- Send PSM Data frame Dec 10 17:02:20 user kernel: [ 8763.040402] bFastRoamingScan ~~~~~~~~~~~~~ Get back to send data ~~~~~~~~~~~~~ Dec 10 17:02:20 user kernel: [ 8763.040405] SCAN done, resume MSDU transmission ... Dec 10 17:02:20 user kernel: [ 8763.047022] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:20 user kernel: [ 8763.047026] BAR(1) : Tid (0) - 03a3:03a5 Dec 10 17:02:21 user kernel: [ 8763.898130] bImprovedScan ............. Resume for bImprovedScan, SCAN_PENDING .............. Dec 10 17:02:21 user kernel: [ 8763.898143] SCANNING, suspend MSDU transmission ... Dec 10 17:02:21 user kernel: [ 8763.900245] MlmeScanReqAction -- Send PSM Data frame for off channel RM, SCAN_IN_PROGRESS=1! 
Dec 10 17:02:21 user kernel: [ 8763.921144] SYNC - BBP R4 to 20MHz.l Dec 10 17:02:21 user kernel: [ 8763.923339] RT35xx: SwitchChannel#8(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF4, K=0x07, R=0x02 Dec 10 17:02:21 user kernel: [ 8763.996019] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:21 user kernel: [ 8764.066221] RT35xx: SwitchChannel#9(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF5, K=0x02, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.210212] RT35xx: SwitchChannel#10(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF5, K=0x07, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.215536] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.215542] BAR(1) : Tid (0) - 0457:0452 Dec 10 17:02:21 user kernel: [ 8764.244000] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.244004] BAR(1) : Tid (0) - 0459:0456 Dec 10 17:02:21 user kernel: [ 8764.253019] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.253023] BAR(1) : Tid (0) - 045c:0458 Dec 10 17:02:21 user kernel: [ 8764.256677] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.256681] BAR(1) : Tid (0) - 045c:045b Dec 10 17:02:21 user kernel: [ 8764.259785] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.259788] BAR(1) : Tid (0) - 045d:045b Dec 10 17:02:21 user kernel: [ 8764.280467] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.280471] BAR(1) : Tid (0) - 045f:045c Dec 10 17:02:21 user kernel: [ 8764.282189] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:21 user kernel: [ 8764.282192] BAR(1) : Tid (0) - 045f:045e Dec 10 17:02:21 user kernel: [ 8764.354204] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.356408] ScanNextChannel():Send PWA NullData frame to notify the associated AP! Dec 10 17:02:21 user kernel: [ 8764.498202] RT35xx: SwitchChannel#12(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x07, R=0x02 Dec 10 17:02:21 user kernel: [ 8764.642210] RT35xx: SwitchChannel#13(RF=8, Pwr0=30, Pwr1=28, 2T), N=0xF7, K=0x02, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.790229] RT35xx: SwitchChannel#14(RF=8, Pwr0=30, Pwr1=28, 2T), N=0xF8, K=0x04, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.934238] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.935243] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:22 user kernel: [ 8764.935249] BAR(1) : Tid (0) - 048e:0485 Dec 10 17:02:22 user kernel: [ 8764.936423] SYNC - End of SCAN, restore to channel 11, Total BSS[05] Dec 10 17:02:22 user kernel: [ 8764.936436] ScanNextChannel -- Send PSM Data frame Dec 10 17:02:22 user kernel: [ 8764.936440] SCAN done, resume MSDU transmission ... Dec 10 17:02:22 user kernel: [ 8764.940529] RT35xx: SwitchChannel#11(RF=8, Pwr0=29, Pwr1=26, 2T), N=0xF6, K=0x02, R=0x02 Dec 10 17:02:22 user kernel: [ 8764.942178] CntlEnqueueForRecv(): BAR-Wcid(1), Tid (0) Dec 10 17:02:22 user kernel: [ 8764.942182] BAR(1) : Tid (0) - 0493:048e Dec 10 17:02:22 user kernel: [ 8764.942715] CNTL - All roaming failed, restore to channel 11, Total BSS[05] Dec 10 17:02:22 user kernel: [ 8764.948016] MMCHK - No BEACON. restore R66 to the low bound(56) Dec 10 17:02:22 user kernel: [ 8764.948307] ===>rt_ioctl_giwscan. 
5(5) BSS returned, data->length = 1111 Dec 10 17:02:23 user kernel: [ 8766.048073] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:23 user kernel: [ 8766.552034] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:27 user kernel: [ 8770.001180] MediaState is connected Dec 10 17:02:27 user kernel: [ 8770.001197] ==>rt_ioctl_giwmode(mode=2) Dec 10 17:02:27 user kernel: [ 8770.001201] ==>rt_ioctl_giwfreq 11 Dec 10 17:02:27 user kernel: [ 8770.001219] rt28xx_get_wireless_stats ---> Dec 10 17:02:27 user kernel: [ 8770.001223] <--- rt28xx_get_wireless_stats Dec 10 17:02:28 user kernel: [ 8771.564020] QuickDRS: TxTotalCnt <= 15, train back to original rate Dec 10 17:02:29 user kernel: [ 8772.064031] QuickDRS: TxTotalCnt <= 15, train back to original rate
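
    One approach - a sketch that assumes rsyslog (the default syslog daemon on recent Ubuntu releases) and that the noise all comes from the wireless driver strings visible above - is to drop a filter file into /etc/rsyslog.d/ that routes matching messages to their own file and then discards them. The filename and match strings below are illustrative:

        # /etc/rsyslog.d/30-wireless.conf
        # Route the wireless driver chatter to its own log file (the
        # leading '-' means buffered writes), then discard it so it
        # never reaches /var/log/syslog.
        :msg, contains, "rt_ioctl" -/var/log/wireless.log
        & ~
        :msg, contains, "rt28xx_get_wireless_stats" -/var/log/wireless.log
        & ~
        :msg, contains, "QuickDRS" -/var/log/wireless.log
        & ~

    Restart the daemon afterwards (sudo service rsyslog restart). On newer rsyslog versions '& stop' replaces the deprecated '& ~' discard syntax, and pointing a match straight at '~' discards the messages without logging them anywhere.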

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

    To get started, we'll create a demo data set and load the plyr package. set.seed(1) d <- data.frame(year = rep(2000:2014, each = 3),         count = round(runif(45, 0, 20))) dim(d) library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame. # Example 1 res <- ddply(d, "year", function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(cv.count = cv)   })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame. D <- ore.push(d) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(year=x$year[1], cv.count = cv)   }, FUN.VALUE=data.frame(year=1, cv.count=1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large. R> class(res) [1] "ore.frame" attr(,"package") [1] "OREbase" R> head(res)    year cv.count 1 2000 0.3984848 2 2001 0.6062178 3 2002 0.2309401 4 2003 0.5773503 5 2004 0.3069680 6 2005 0.3431743

    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.

    # Example 2 ddply(d, "year", summarise, mean.count = mean(count)) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   data.frame(year=x$year[1], mean.count = mean.count)   }, FUN.VALUE=data.frame(year=1, mean.count=1)) R> head(res)    year mean.count 1 2000 7.666667 2 2001 13.333333 3 2002 15.000000 4 2003 3.000000 5 2004 12.333333 6 2005 14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame. # Example 3 ddply(d, "year", transform, total.count = sum(count)) res <- ore.groupApply (D, D$year, function(x) {   total.count <- sum(x$count)   data.frame(year=x$year[1], count=x$count, total.count = total.count)   }, FUN.VALUE=data.frame(year=1, count=1, total.count=1)) > head(res)    year count total.count 1 2000 5 23 2 2000 7 23 3 2000 11 23 4 2001 18 40 5 2001 4 40 6 2001 18 40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns. # Example 4 ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),       cv = sigma/mu) res <- ore.groupApply (D, D$year, function(x) {   mu <- mean(x$count)   sigma <- sd(x$count)   cv <- sigma/mu   data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)   }, FUN.VALUE=data.frame(year=1, count=1, mu=1,sigma=1,cv=1)) R> head(res)    year count mu sigma cv 1 2000 5 7.666667 3.055050 0.3984848 2 2000 7 7.666667 3.055050 0.3984848 3 2000 11 7.666667 3.055050 0.3984848 4 2001 18 13.333333 8.082904 0.6062178 5 2001 4 13.333333 8.082904 0.6062178 6 2001 18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data. # Example 5 baseball.dat <- subset(baseball, year > 2000) # data from the plyr package x <- ddply(baseball.dat, c("year", "team"), summarize,            homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows. BB.DAT <- ore.push(baseball) BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+")) BB.DAT2 <- subset(BB.DAT, year > 2000) X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {   data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))   }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE) res <- ore.sort(X, by=c("year","team")) R> head(res)    year team homeruns 1 2001 ANA 4 2 2001 ARI 155 3 2001 ATL 63 4 2001 BAL 58 5 2001 BOS 77 6 2001 CHA 63

    Our next example is derived from the ggplot function documentation. This illustrates the use of ddply when using the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server. The graph will be returned to the client window.

    # Example 6 with ggplot2 library(ggplot2) df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),                  y = rnorm(30)) # Compute sample mean and standard deviation in each group library(plyr) ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y)) # Set up a skeleton ggplot object and add layers: ggplot() +   geom_point(data = df, aes(x = gp, y = y)) +   geom_point(data = ds, aes(x = gp, y = mean),              colour = 'red', size = 3) +   geom_errorbar(data = ds, aes(x = gp, y = mean,                                ymin = mean - sd, ymax = mean + sd),              colour = 'red', width = 0.4) DF <- ore.push(df) ore.tableApply(DF, function(df) {   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4) })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element. df2 <- rbind(df,df,df) df2$y <- df2$y + rnorm(nrow(df2)) df2$index <- c(rep(1,30), rep(2,30), rep(3,30)) DF2 <- ore.push(df2) res <- ore.groupApply(DF2, DF2$index, function(df) {   df <- df[,1:2]   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4)   }, parallel=TRUE) res[[1]] res[[2]] res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • RHEL Cluster FAIL after changing time on system

    - by Eugene S
    I've encountered a strange issue. I had to change the time on my Linux RHEL cluster system. I did it using the following command from the root user: date +%T -s "10:13:13" After doing this, a message appeared relating to <emerg> #1: Quorum Dissolved, though I didn't capture it completely. To investigate the issue I looked at /var/log/messages and discovered the following: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering GATHER state from 0. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Creating commit token because I am the rep. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Storing new sequence id for ring 354 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering COMMIT state. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering RECOVERY state. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] position [0] member 192.168.1.49: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] previous ring seq 848 rep 192.168.1.49 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] aru 61 high delivered 61 received flag 1 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Did not need to originate any messages in recovery. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Sending initial ORF token Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] CLM CONFIGURATION CHANGE Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] New Configuration: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.49) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Left: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.51) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Joined: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CMAN ] quorum lost, blocking activity Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] CLM CONFIGURATION CHANGE Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] New Configuration: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.49) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Left: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Joined: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [SYNC ] This node is within the primary component and will provide service. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering OPERATIONAL state. Mar 22 16:40:42 hsmsc50sfe1a kernel: dlm: closing connection to node 2 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] got nodejoin message 192.168.1.49 Mar 22 16:40:42 hsmsc50sfe1a clurgmgrd[25809]: <emerg> #1: Quorum Dissolved Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CPG ] got joinlist message from node 1 Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Cluster is not quorate. Refusing connection. Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing connect: Connection refused Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Invalid descriptor specified (-21). Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Someone may be attempting something evil. Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing disconnect: Invalid request descriptor Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering GATHER state from 9. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Creating commit token because I am the rep. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Storing new sequence id for ring 358 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering COMMIT state. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering RECOVERY state. 
Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] position [0] member 192.168.1.49: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] previous ring seq 852 rep 192.168.1.49 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] aru f high delivered f received flag 1 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] position [1] member 192.168.1.51: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] previous ring seq 852 rep 192.168.1.51 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] aru f high delivered f received flag 1 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Did not need to originate any messages in recovery. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Sending initial ORF token Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] CLM CONFIGURATION CHANGE Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] New Configuration: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.49) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Left: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Joined: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] CLM CONFIGURATION CHANGE Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] New Configuration: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.49) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.51) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Left: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] Members Joined: Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] #011r(0) ip(192.168.1.51) Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [SYNC ] This node is within the primary component and will provide service. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering OPERATIONAL state. Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [MAIN ] Node chb_sfe2a not joined to cman because it has existing state Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] got nodejoin message 192.168.1.49 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM ] got nodejoin message 192.168.1.51 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CPG ] got joinlist message from node 1 Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CPG ] got joinlist message from node 2 Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Cluster is not quorate. Refusing connection. Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing connect: Connection refused Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Invalid descriptor specified (-111). Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Someone may be attempting something evil. Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing get: Invalid request descriptor Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Invalid descriptor specified (-21). Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Someone may be attempting something evil. Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing disconnect: Invalid request descriptor How could this be related to the time change procedure I performed?
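
    For what it's worth - an assumption about the mechanism rather than a confirmed diagnosis - openais/TOTEM token timing is sensitive to the system clock, so stepping the clock with date -s can make the token appear lost and dissolve quorum, which would match the GATHER/COMMIT/RECOVERY churn above. Slewing the clock avoids the discontinuity:

        # What was run: an instantaneous step
        date +%T -s "10:13:13"

        # Gentler alternative: slew toward an NTP server via adjtime(2),
        # so cluster heartbeats never see a jump (server is a placeholder)
        ntpdate -B pool.ntp.org

    Where a step is unavoidable, stopping the cluster services first (service rgmanager stop; service cman stop), changing the time, and then starting them again should sidestep the problem.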

    Read the article

  • How Expedia Made My New Bride Cry

    - by Lance Robinson
    Tweet this? Email Expedia and ask them to give me and my new wife our honeymoon? When Expedia followed up their failure with our honeymoon trip with a complete and total lack of acknowledgement of any responsibility for the problem and endless loops of explaining the issue over and over again - I swore that they would make it right. When they brought my new bride to tears, I got an immediate and endless supply of motivation. I hope you will help me make them make it right by posting our story on Twitter, Facebook, your blog, on Expedia itself, and when talking to your friends in person about their own travel plans. If you are considering using them now for an important trip - reconsider.

    Short summary: We arrived early for a flight - but Expedia had made a mistake with the data they supplied to JetBlue and Emirates, which resulted in us not being able to check in (one leg of our trip was missing)! At the time of this post, three people (myself, my wife, and an exceptionally patient JetBlue employee named Mary) each spent hours on the phone with Expedia. I myself spent right at 3 hours (according to iPhone records), Lauren spent an hour and a half or so, and poor Mary was probably on the phone for a good 3.5 hours. This is after 5 hours total at the airport. If you add up our phone time, that is nearly 8 hours of phone time over a 5 hour period with little or no help, stall tactics (?), run-around, denial, shifting of blame, and holding.

    Details below (times are approximate). First, my wife and I were married yesterday - June 18th, the 3 year anniversary of our first date. She is awesome. She is the nicest person I have ever known, a ton of fun, absolutely beautiful in every way. Ok enough mushy - here are the dirty details.

    2:30 AM - Early check-in attempt - we attempted to check in for our flight online. Some sort of technology error on the website; instructed to check in at the desk.

    4:30 AM - Arrive at airport. Try to check in at the kiosk, get the same error. We got to the JetBlue desk at RDU International Airport, where Mary helped us. Mary discovered that the Expedia-provided itinerary does not match the Expedia-provided tickets. We are informed that when that happens, American, JetBlue, and others that use the same software cannot check you in for the flight. Why? Because the itinerary was missing a leg of our flight! Basically we were not shown in the system as definitely being able to make it home. Mary called Expedia and was put on hold by their automated system.

    4:55 AM - Mary, myself, and my brand new bride all waited for about 25 minutes when finally I decided I would make a call myself on my iPhone while Mary was on the airport phone. In their automated system, I chose "make a new reservation", thinking they might answer a little more quickly than "customer service". Not surprisingly I was connected to an Expedia person within 1 minute. They informed me that they would have to forward me to a customer service specialist. I explained to them that we were already on hold for that and had been for nearly half an hour, that we were going on our honeymoon and that our flight would be leaving soon - could they please help us. "Yes, I will help you". I hand the phone to JetBlue Mary who explains the situation 3 or 4 times. Obviously I couldn't hear both ends of the conversation at this point, but the Expedia person explained what the problem was by stating exactly what Mary had just spent 15 minutes explaining. Mary calmly confirms that this is the problem, and asks Expedia to re-issue the itinerary. Expedia tells Mary that they'll have to transfer her to customer service. Mary asks for someone specific so that we get an answer this time, and goes on hold. Mary gets connected, explains the situation, and then Mary's connection gets terminated.

    5:10 AM - Mary calls back to the Expedia automated system again, and we wait for about 5 minutes on hold this time before I pick up my iPhone and call Expedia again myself. Again I go to sales; a person picks up the phone in less than a minute. I explain the situation and let them know that we are now very close to missing our flight for our honeymoon; could they please help us. "Yes, I will help you". Again I give the phone to Mary who provides them with a call back number in case we get disconnected again and explains the situation again. More back and forth with Expedia doing nothing but repeating the same questions, Mary answering the questions with the same information she provided in the original explanation, and Expedia simply restating the problem. Mary again asks them to re-issue the itinerary, and explains that doing so will fix the problem. Expedia again repeats the problem instead of fixing it, and Mary's connection gets terminated.

    5:20 AM - Mary again calls back to Expedia. My beautiful bride also calls on her own phone. At this point she is struggling to hold back her tears, stumbling through an explanation of all that has happened and that we are about to miss our flight. Please help us. "Yes, I will help". My beautiful bride's connection gets terminated. Ok, maybe this disconnection isn't an accident. We've now been disconnected 3 times on two different phones.

    5:45 AM - I walk away and pleadingly beg a person to help me. They "escalate" the issue to "Rosy" (sp?) at Expedia. I go through the whole song and dance again with Rosy, who gives me the same treatment Mary was given. Rosy blames JetBlue for not having the correct data. Meanwhile Mary is on the phone with Emirates Air (the airline for the second leg of our trip), who agrees with JetBlue that Expedia's data isn't up to date. We are informed by two airport employees that issues like this with Expedia are not uncommon, and that the fix is simple. On the phone with Rosy, I ask her to re-issue the itinerary because we are about to miss our flight. She again explains the problem to me. At this point, I am standing at the window, pleading with Rosy to help us get to our honeymoon, watching our airplane. Then our airplane leaves without us.

    6:03 AM - At this point we have missed our flight. Re-issuing the itinerary is no longer a solution. I ask Rosy to start from the beginning and work us up a new trip. She says that she cannot do that. She says that she needs to talk to JetBlue and Emirates and find out why we cannot check in for our flight. I remind Rosy that our flight has already left - I just watched it taxi away - it no longer matters why (not to mention the fact that we already knew why, and have known why since 4:30 AM), and we have known the solution since 4:30 AM. Rosy, can you please book a new trip? Yes, but it will cost $400. Excuse me? Now you can, but it will cost ME to fix your mistake? Rosy says that she can escalate the situation to her supervisor but that will take 1.5 hours.

    6:15 AM - I told Rosy that if they had re-issued the itinerary as JetBlue asked (at 4:30 AM), my new wife and I might be on the airplane now instead of dealing with this on the phone and missing the beginning (and how much more?) of our honeymoon. Rosy said that it was not necessary to re-issue the itinerary. Out of curiosity, I asked Rosy if there was some financial burden on them to re-issue the itinerary. "No", said Rosy. I asked her if it was a large time burden on Expedia to re-issue the itinerary. "No", said Rosy. I directly asked Rosy: Why wouldn't Expedia have re-issued the itinerary when JetBlue asked? No answer. I asked Rosy: If you had re-issued the itinerary at 4:30, isn't it possible that I would be on that flight right now? She actually surprised me by answering "Yes" to that question. So I pointed out that it followed that Expedia was responsible for the fact that we missed our flight, and she immediately went into more about how the problem was with JetBlue - but now it was ALSO an Emirates Air problem as well. I tell Rosy to go ahead and escalate the issue again, and please call me back in that 1.5 hours (which is now about 1 hour and 10 minutes away).

    6:30 AM - I start tweeting my frustration with my iPhone. It's now pretty much impossible for us to make it to The Maldives by 3pm, which is the time at which we would need to arrive in order to be allowed service to the actual island where we are staying. Expedia has now given me the run-around for 2 hours, caused me to miss my flight, and worst of all caused my amazing new wife Lauren to miss our honeymoon. You think I was mad? No. Furious. It's ok to make mistakes - but to refuse to fix them and to ruin our honeymoon? No, not ok, Expedia. I swore right then that Expedia would make this right.

    7:45 AM - JetBlue Mary is still talking her tail off to other people in JetBlue and Emirates Air. Mary works it out so that if Expedia simply books a new trip, JetBlue and Emirates will both waive all the fees. Now we just have to convince Expedia to fix their mistake and get us on our way! Around this time Expedia Rosy calls me back! I inform her of the excellent work of JetBlue Mary - that JetBlue and Emirates both will waive the fees so Expedia can fix their mistake and get us going on our way. She says that she sees documentation of this in her system and that she needs to put me on hold "for 1 to 10 minutes" to talk to Emirates Air (why, I'm not exactly sure). I say ok.

    8:45 AM - After an hour on hold, Rosy comes on the line and asks me to hold more. I ask her to call me back.

    9:35 AM - I put down the iPhone Twitter app and pick up the laptop. You think I made some noise with my iPhone? Heh.

    11:25 AM - Expedia follows me and sends a canned "We're sorry, DM us the details". If you look at their Twitter feed, 16 out of the most recent 20 tweets are exactly the same canned response. The other 4? Ads. Um - #MultiFAIL? To Expedia: You now have had (as explained above) 8 hours of 3 different people explaining our situation, you know the email address of our Expedia account, you know my web blog, you know my Twitter address, you know my phone number. You also know how upset you have made both me and my new bride by treating us in such a non-caring, scripted, uncooperative, argumentative, and possibly even deceitful manner. In the wise words of the great Kenan Thompson of SNL: "FIX IT!". And no, I'm NOT going away until you make this right. Period.

    11:45 AM - Expedia corporate office called. The woman I spoke to was very nice and apologetic. She listened to me tell the story again, she says she understands the problem and she is going to work to resolve it. I don't have any details on what exactly that resolution might be; she said she will call me back in 20 minutes. She found out about the problem via Twitter. Thank you Twitter, and all of you who helped. Hopefully social media will win my wife and I our honeymoon, and hopefully Expedia will encourage their customer service teams to treat their customers properly.

    12:22 PM - Spoke to Fran again from the Expedia corporate office. She has a flight for us tonight. She is booking it now. We will arrive at our honeymoon destination of beautiful Veligandu Island Resort only 1 day late. She cannot confirm today, but she expects that Expedia will pay for the lost honeymoon night. Thank you everyone for your help. I will reflect more on this whole situation and confirm its resolution after our flight is 100% confirmed. For now, I'm going to take a breather and go kiss my wonderful wife!

    1:50 PM - Have not yet received the promised phone call. We did receive an email with a new itinerary for a flight, but the booking is not for specific seats, so there is no guarantee that my wife and I will be able to sit together. With the original booking I carefully selected our seats for every segment of our trip. I decided to call into the phone number that Fran from the Expedia corporate office gave me. Its automated voice system identified itself as "Tier 3 Support". I am currently still on hold with them; I have not gotten through to a human yet.

    1:55 PM - Fran from Expedia called me back. She confirmed us as booked. She called the airlines to confirm. Unfortunately, Expedia was unwilling or unable to allow us any type of seat selection. It is possible that I won't get to sit next to the woman I married less than a day ago on our 40 total hours of flight time (there and back). In addition, our seats could be the worst seats on the planes, with no reclining seat back or right next to the restroom. Despite this fact (which in my opinion is huge), the horrible inconvenience, the hours at the airport, and the negative Internet publicity that Expedia is receiving, Expedia declined to offer us any kind of upgrade or to mark us as SFU (suitable for upgrade). Since they didn't offer - I asked, and was rejected. I am grateful to finally be heading in the right direction, but not only did Expedia horribly botch this job from the very beginning, they followed that botch job with near zero customer service, followed by a verbally apologetic but otherwise half-hearted resolution. If this works out favorably for us, great. If not - I'm not done making noise, Expedia. You owe us, and I expect you to make it right. You haven't quite done that yet.

    Thanks - Thank you to Twitter. Thanks to all those who sympathize with us and helped us get the attention of Expedia, since three people (one of them an airline employee) using Expedia's normal channels of communication for many hours didn't help. Thanks especially to my PowerShell and Sharepoint friends, my local friends, and those connectors who encouraged me and spread my story.

    5:15 PM - Love Wins - After all this, Lauren and I are exhausted. We both took a short nap, and when we woke up we talked about the last 24 hours. It was a big, amazing, story-filled 24 hours. I said that Expedia won, but Lauren said no. She pointed out how lucky we are. We are in love and married.
We have wonderful family and friends.  We are both hard-working successful people who love what they do.  We get to go to an amazing exotic destination for our honeymoon like Veligandu in The Maldives...  That's a lot of good.  Expedia didn't win.  This was (is) a big loss for Expedia.  It is a public blemish for all to see.  But Lauren and I did win, big time.  Expedia may not have made things right - but things are right for us.  Post in progress... I will relay any further comments (or lack of) from Expedia soon, as well as an update on confirmation of their repayment of our lost resort room rates.  I'll also post a picture of us on our honeymoon as soon as I can!

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe Irony

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server also running Windows Server 2003 that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes.

    Observations:
    When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive.
    If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen.
    There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown Event the next morning when I have to hard reset the server to get it to boot.
    90% of the time the server does not boot cleanly; it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs, all I can do is hard reset it and try again.
    Even after a successful boot and chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't back up again cleanly.

    The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure to boot cleanly behavior.

    This weekend I decided to monitor the file server through the entire backup job. I RDPd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was getting real time updates about CPU and memory usage still - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: the screen was refreshing and I was getting real time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. In truth, I think the server had already locked up; the video card just hadn't figured it out yet.

    I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs up at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD, and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid as soon as I try to back that data up again I will be back at square one.

    So let me sum things up. Here is what I've done so far to troubleshoot this server:
    Deleted and recreated the RAID 5 sets. Initialized the drives.
    Reloaded the server with a fresh Server 2003 install.
    Confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers.
    Uninstalled / reinstalled the Backup Exec Remote Agent.
    Uninstalled the Trend Micro A/V client.
    Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up.
    Run chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem.

    Help confirm or deny the following assumptions:
    There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
    This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation.
    This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.

    Any help is appreciated. The irony is almost too much to bear. Backing up my data is what is jeopardizing it.

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe Irony

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server, also running Windows Server 2003, that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes.

    Observations:
    - When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen.
    - There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown Event the next morning, when I have to hard reset the server to get it to boot.
    - 90% of the time the server does not boot cleanly; it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs, all I can do is hard reset it and try again.
    - Even after a successful boot and a chkdsk /r operation, if you reboot the machine, there is a 90% chance it won't back up cleanly again.

    The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up, because I could not keep coming to the office at 4 AM to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up, and switched all my users to the temporary server. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour, every day, for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and resumed the familiar failure to boot cleanly.

    This weekend I decided to monitor the file server through the entire backup job. I RDPd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was still getting real-time updates about CPU and memory usage - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: the screen was refreshing and I was getting real-time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. In truth, I think the server had already locked up; the video card just hadn't figured it out yet.

    I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking, because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again, I will be back at square one.

    So let me sum things up. Here is what I've done so far to troubleshoot this server:
    - Deleted and recreated the RAID 5 sets. Initialized the drives.
    - Reloaded the server with a fresh Server 2003 install.
    - Confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers.
    - Uninstalled / reinstalled the Backup Exec Remote Agent.
    - Uninstalled the Trend Micro A/V client.
    - Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up.
    - Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem.

    Help confirm or deny the following assumptions:
    - There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
    - This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation.
    - This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.

    Any help is appreciated. The irony is almost too much to bear. Backing up my data is what is jeopardizing it.

    Read the article

  • Fail-over caching reverse proxy

    - by sybreon
    Is there a way to configure Varnish, or any other caching reverse proxy, to serve pages from its cache when the back-end fails? At the moment, if the back-end goes down, a 503 Service Unavailable error is returned to the browser. I would prefer visitors to see a cached version rather than an error page while the back-end is being fixed. My setup:

    [varnish (public ip)] <=== [router] <=== [web server (private ip)]

    PS: I have only one back-end web server.
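    For reference, Varnish's "grace" mechanism does roughly this. Below is a minimal sketch for Varnish 2.1/3.x; the backend address, probe settings, and grace windows are assumptions to adapt. A health probe lets Varnish detect the outage, and a long grace period lets it keep serving expired objects while the back-end is down:

        # Hypothetical backend definition; substitute the web server's private IP.
        backend default {
            .host = "10.0.0.10";
            .port = "80";
            .probe = {                  # health probe so Varnish knows when the back-end is down
                .url = "/";
                .interval = 5s;
                .timeout = 1s;
                .window = 5;
                .threshold = 3;
            }
        }

        sub vcl_recv {
            if (req.backend.healthy) {
                set req.grace = 30s;    # back-end up: tolerate only slightly stale objects
            } else {
                set req.grace = 6h;     # back-end down: serve anything cached in the last 6 hours
            }
        }

        sub vcl_fetch {
            set beresp.grace = 6h;      # keep objects 6h past their TTL so grace can use them
        }

    Note that grace can only serve what is already in the cache; objects that were never cached, or have been evicted, will still produce an error while the back-end is down.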

    Read the article

  • Why does Outlook 2010 give the message "Creating a new item from the selected items could take some time...are you sure you create a new item..."?

    - by Matt
    I'm using Outlook 2010 with Exchange 2007. I am moving emails from my Deleted Items folder to a user-created folder. When I move a "low" number of messages, say a few hundred or less, the operation completes successfully. When I move a "large" number of messages (in this example, over 800), I get the warning message quoted in the title of this question. If I click Yes, a new email is generated with links to all the emails I selected in the Attachment field. When I cancel that email, not only have the messages not moved, but they appear to be deleted entirely. What does the message mean, and why does it get presented? Why does clicking Yes produce the behavior I described above?

    Read the article

  • Data Driven MSTest: DataRow is always null

    - by David Back
    I am having a problem using Visual Studio data driven testing. I have tried to deconstruct this to the simplest example. I am using Visual Studio 2012. I create a new unit test project and reference System.Data. My code looks like this:

        namespace UnitTestProject1
        {
            [TestClass]
            public class UnitTest1
            {
                [DeploymentItem(@"OrderService.csv")]
                [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "OrderService.csv", "OrderService#csv", DataAccessMethod.Sequential)]
                [TestMethod]
                public void TestMethod1()
                {
                    try
                    {
                        Debug.WriteLine(TestContext.DataRow["ID"]);
                    }
                    catch (Exception ex)
                    {
                        Assert.Fail();
                    }
                }

                public TestContext TestContext { get; set; }
            }
        }

    I have a very small csv file whose Build Options I have set to 'Content' and 'Copy Always'. I have added a .testsettings file to the solution, enabled deployment, and added the csv file. I have tried this with and without |DataDirectory|, and with/without a full path specified (the same path that I get with Environment.CurrentDirectory). I've tried variations of "../" and "../../" just in case. Right now the csv is at the project root level, same as the .cs test code file. I have tried variations with xml as well as csv. TestContext is not null, but DataRow always is. I have not gotten this to work despite a lot of fiddling with it. I'm not sure what I'm doing wrong. Does mstest create a log anywhere that would tell me if it is failing to find the csv file, or what specific error might be causing DataRow to fail to populate? I have tried the following csv files:

        ID
        1
        2
        3
        4

    and

        ID, Whatever
        1,0
        2,1
        3,2
        4,3

    So far, no dice.
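    For comparison, a commonly documented shape for a CSV-driven MSTest looks like the sketch below (the file name and column come from the question; treat the rest as an assumption, not a guaranteed fix). One frequently reported gotcha is a CSV saved as UTF-8 with a byte-order mark, which silently corrupts the first column header, so re-saving the file as plain ANSI is worth trying:

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        namespace UnitTestProject1
        {
            [TestClass]
            public class UnitTest1
            {
                // MSTest injects the runtime context through this public property.
                public TestContext TestContext { get; set; }

                [TestMethod]
                [DeploymentItem("OrderService.csv")]
                [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                            "|DataDirectory|\\OrderService.csv",   // resolved against the deployment folder
                            "OrderService#csv",
                            DataAccessMethod.Sequential)]
                public void TestMethod1()
                {
                    // Runs once per CSV row; the indexer key must match the header exactly.
                    Console.WriteLine(TestContext.DataRow["ID"]);
                }
            }
        }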

    Read the article

  • $.getJSON > $.each returns undefined

    - by Der Sep
    function getData(d) {
        Back = new Object();
        $.getJSON('../do.php?', function (response) {
            if (response.type == 'success') {
                Back = {
                    "type": "success",
                    "content": ""
                };
                $.each(response.data, function (data) {
                    Back.content += '<div class="article"><h5>' + data.title + '</h5>';
                    Back.content += '<div class="article-content">' + data.content + '</div></div>';
                });
            } else {
                Back = { "type": "error" };
            }
            return Back;
        });
    }

    console.log(getData()); is returning undefined! Why?
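    The short answer: $.getJSON is asynchronous, so getData returns (undefined, since it has no return of its own) before the response ever arrives, and the return Back inside the handler only returns from that anonymous callback. A minimal sketch of the usual fix is to hand in a callback; note also that $.each passes (index, value) to its function, so the original code was reading the index rather than the element:

        function getData(onDone) {
            $.getJSON('../do.php?', function (response) {
                var back;
                if (response.type == 'success') {
                    back = { type: 'success', content: '' };
                    $.each(response.data, function (i, data) {   // (index, value)
                        back.content += '<div class="article"><h5>' + data.title + '</h5>';
                        back.content += '<div class="article-content">' + data.content + '</div></div>';
                    });
                } else {
                    back = { type: 'error' };
                }
                onDone(back);   // hand the result over once the request has finished
            });
        }

        // Usage: the data is only available inside the callback.
        getData(function (back) {
            console.log(back);
        });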

    Read the article

  • Synchronizing an ERWin model with a Visual Studio 2008 GDR 2/2010 db project

    - by Grant Back
    I am looking for options to get our vast collection of DB objects across many DBs into source control (TFS 2010). Once we succeed here, we will work toward generating our alter scripts for a particular DB change via TFS Build. The problem is that our data architecture group is responsible for maintaining the DB objects (excluding SPs), and they work within a model-centric process, via ERWin. What this means is that they maintain the DBs via ERWin models and generate from them the alters that are used to release changes. In order to achieve our goal of getting the DB objects (not just the ERWin models) into TFS, I believe the best option is to do this via Visual Studio DB projects. From what I can tell, there is very little urgency for CA to continue supporting an integration between ERWin and Visual Studio, which no longer works as of Visual Studio 2008 DB Ed. GDR. If I have been misled in this regard, please feel free to set me straight. One potential solution is to: 1) perform changes in the ERWin model; 2) take the alter script generated from ERWin and import it into the appropriate Visual Studio DB project, updating the objects in the DB project; 3) check the changed objects in the DB project into TFS; 4) let the TFS build execute to generate the alter scripts that will be used to push the changes through our release process. My question is: is this solution viable, or are there any other options?

    Read the article

  • Why is the displayed SELECT value not changing?

    - by I'll-Be-Back
    When the page loads, I expected the <option value="B">B</option> value to change to red. It didn't work. Why?

    jQuery:

        $(document).ready(function () {
            $('[name=HeaderFields] option[value="B"]').val('red');
        }

    Dropdown:

        <select name="HeaderFields" style="width:60px">
            <option value="A">A</option>
            <option value="B">B</option>
            <option value="C">C</option>
        </select>
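    Two things stand out, as the sketch of a likely fix below shows. First, the ready() call in the snippet is never closed (the trailing ");" is missing), so the script is a syntax error and never runs at all. Second, even once it runs, .val() only changes the option's submitted value; .text() is what changes the label the dropdown displays:

        $(document).ready(function () {
            var opt = $('select[name="HeaderFields"] option[value="B"]');
            opt.val('red');    // changes the value submitted with the form
            opt.text('red');   // changes the label the user actually sees
        });                    // <-- this closing ");" was missing in the snippet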

    Read the article

  • Converting to Visual Studio 2008 and .NET 3.5

    - by Grant Back
    The process of converting from Visual Studio .NET 2003 to Visual Studio 2008 is satisfyingly straightforward. I thought it would be worth asking a couple of questions though: 1) Are there any 'gotchas' with this conversion process that we should be aware of? 2) The same question goes for upgrading the .NET Framework from 1.1 to 3.5. Thanks.

    Read the article

  • Simple question about javascript history.go

    - by Camran
    I have a classifieds website. In every classified there is a back link which simply takes the browser back one step. This is there because when users search the classifieds and click on one to view it, they can easily go back with a link as well (instead of only the browser back button). Here is the problem: if the classified is entered directly into the address bar of a browser, or if somebody bookmarked a classified, then this back link would take them someplace else... Is there any way of making sure that the previous page is a certain page (index.php in my case)? That way I would only display the back link if the previous page was index.php... Thanks
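    One common approach, sketched below with a hypothetical backLink element id, is to check document.referrer, which is conveniently empty for bookmarks and addresses typed directly into the address bar - exactly the cases where the link should be hidden:

        window.onload = function () {
            var backLink = document.getElementById('backLink');   // hypothetical id for the anchor
            if (document.referrer.indexOf('index.php') !== -1) {
                backLink.onclick = function () {
                    history.go(-1);    // behaves like the browser's back button
                    return false;
                };
            } else {
                backLink.style.display = 'none';   // direct visit or bookmark: hide the link
            }
        };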

    Read the article

  • How to better create stacked bar graphs with multiple variables from ggplot2?

    - by deoksu
    I often have to make stacked barplots to compare variables, and because I do all my stats in R, I prefer to do all my graphics in R with ggplot2. I would like to learn how to do two things. First, I would like to be able to add proper percentage tick marks for each variable rather than tick marks by count. Counts would be confusing, which is why I take out the axis labels completely. Second, there must be a simpler way to reorganize my data to make this happen. It seems like the sort of thing I should be able to do natively in ggplot2 with plyr, but the documentation for plyr is not very clear (and I have read both the ggplot2 book and the online plyr documentation). My best graph looks like this (image not reproduced); the R code I use to get it is the following:

        library(epicalc)

        ### recode the variables to factors ###
        recode(c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit,
                 int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel,
                 int_newhth, int_bapo, int_wopo, int_eupo, int_educ),
               c(1,2,3,4,5,6,7,8,9,NA),
               c('Very Interested','Somewhat Interested','Not Very Interested',
                 'Not At All interested',NA,NA,NA,NA,NA,NA))

        ### Combine recoded variables to a common vector ###
        Interest1 <- c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit,
                       int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel,
                       int_newhth, int_bapo, int_wopo, int_eupo, int_educ)

        ### Create a second vector to label the first vector by original variable ###
        a1 <- rep("News about Bangladesh", length(int_newcoun))
        a2 <- rep("Neighboring Countries", length(int_newneigh))
        [...]
        a17 <- rep("Education", length(int_educ))
        Interest2 <- c(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17)

        ### Create a weighting vector of the proper length ###
        Interest.weight <- rep(weight, 17)

        ### Make and save a new data frame from the three vectors ###
        Interest.df <- cbind(Interest1, Interest2, Interest.weight)
        Interest.df <- as.data.frame(Interest.df)
        write.csv(Interest.df, 'C:\\Documents and Settings\\[name]\\Desktop\\Sweave\\InterestBangladesh.csv')

        ### Sort the factor levels to display properly ###
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Not Very Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Somewhat Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Very Interested')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='News about Bangladesh')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='Education')
        [...]
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='European Politics')
        detach(Interest)
        attach(Interest)

        ### Finally create the graph in ggplot2 ###
        library(ggplot2)
        p <- ggplot(Interest, aes(Interest2, ..count..))
        p <- p + geom_bar(aes(weight=Interest.weight, fill=Interest1))
        p <- p + coord_flip()
        p <- p + scale_y_continuous("", breaks=NA)
        p <- p + scale_fill_manual(value = rev(brewer.pal(5, "Purples")))
        p
        update_labels(p, list(fill='', x='', y=''))

    I'd very much appreciate any tips, tricks or hints. Thanks.
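    For the percentage question specifically, here is a minimal sketch using a more recent ggplot2 idiom (the syntax has shifted across ggplot2 versions, so treat this as an assumption to adapt): position = "fill" rescales every bar to sum to 1, and the percent formatter from the scales package labels the axis accordingly:

        library(ggplot2)
        library(scales)   # provides percent_format()

        p <- ggplot(Interest.df, aes(x = Interest2, fill = Interest1)) +
            geom_bar(aes(weight = Interest.weight), position = "fill") +   # each bar sums to 100%
            coord_flip() +
            scale_y_continuous(labels = percent_format()) +                # percentage tick marks
            labs(x = "", y = "", fill = "")
        print(p)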

    Read the article
