Search Results

Search found 7685 results on 308 pages for 'job scheduler'.

Page 202 of 308

  • Fluent NHibernate Many to one mapping

    - by Jit
    I am creating an NHibernate application with a one-to-many relationship, like City and State data. The tables: CREATE TABLE [dbo].[State]( [StateId] [varchar](2) NOT NULL primary key, [StateName] [varchar](20) NULL) CREATE TABLE [dbo].[City]( [Id] [int] primary key IDENTITY(1,1) NOT NULL , [State_id] [varchar](2) NULL references State(StateId), [CityName] [varchar](50) NULL) My mapping is as follows: public CityMapping() { Id(x => x.Id); Map(x => x.State_id); Map(x => x.CityName); HasMany(x => x.EmployeePreferedLocations) .Inverse() .Cascade.SaveUpdate() ; References(x => x.State) //.Cascade.All(); //.Class(typeof(State)) //.Not.Nullable() .Cascade.None() .Column("State_id") ; } public StateMapping() { Id(x => x.StateId) .GeneratedBy.Assigned(); Map(x => x.StateName); HasMany(x => x.Jobs) .Inverse(); //.Cascade.SaveUpdate(); HasMany(x => x.EmployeePreferedLocations) .Inverse(); HasMany(x => x.Cities) // .Inverse() .Cascade.SaveUpdate() //.Not.LazyLoad() ; } The models are as follows: [Serializable] public partial class City { public virtual System.String CityName { get; set; } public virtual System.Int32 Id { get; set; } public virtual System.String State_id { get; set; } public virtual IList<EmployeePreferedLocation> EmployeePreferedLocations { get; set; } public virtual JobPortal.Data.Domain.Model.State State { get; set; } public City(){} } public partial class State { public virtual System.String StateId { get; set; } public virtual System.String StateName { get; set; } public virtual IList<City> Cities { get; set; } public virtual IList<EmployeePreferedLocation> EmployeePreferedLocations { get; set; } public virtual IList<Job> Jobs { get; set; } public State() { Cities = new List<City>(); EmployeePreferedLocations = new List<EmployeePreferedLocation>(); Jobs = new List<Job>(); } //public virtual void AddCity(City city) //{ // city.State = this; // Cities.Add(city); //} } My unit-testing code is below. City city = new City(); IRepository<State> rState = new Repository<State>(); Dictionary<string, string> criteria = new Dictionary<string, string>(); criteria.Add("StateId", "TX"); State frState = rState.GetByCriteria(criteria); city.CityName = "Waco"; city.State = frState; IRepository<City> rCity = new Repository<City>(); rCity.SaveOrUpdate(city); City frCity = rCity.GetById(city.Id); The problem is, I am not able to insert a record. The error is below. "Invalid index 2 for this SqlParameterCollection with Count=2." But the error does not occur if I comment out the State_id mapping in the CityMapping file. I do not know what mistake I made. If I do not give the mapping Map(x => x.State_id); the value of this field is null, which is desired. Please help me solve this issue.
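
    A likely cause of the "Invalid index 2 for this SqlParameterCollection with Count=2" error is that the State_id column is mapped twice: once through Map(x => x.State_id) and again through References(x => x.State).Column("State_id"), so NHibernate emits more parameters than the INSERT expects. A minimal sketch of one way out, assuming Fluent NHibernate as above (which variant fits the intended design is an assumption):

        // Option A - let the reference own the column and drop the scalar map:
        public CityMapping()
        {
            Id(x => x.Id);
            Map(x => x.CityName);
            HasMany(x => x.EmployeePreferedLocations)
                .Inverse()
                .Cascade.SaveUpdate();
            References(x => x.State)
                .Cascade.None()
                .Column("State_id");   // single owner of the State_id column
        }

        // Option B - keep the State_id property but stop it from contributing
        // INSERT/UPDATE parameters (exact method name depends on your Fluent
        // NHibernate version):
        // Map(x => x.State_id).Column("State_id").ReadOnly();
        // or: Map(x => x.State_id).Column("State_id").Not.Insert().Not.Update();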

    Read the article

  • Java Refuses to Start - Could not reserve enough space for object heap

    - by Randyaa
    Background We have a pool of aproximately 20 linux blades. Some are running Suse, some are running Redhat. ALL share NAS space which contains the following 3 folders: /NAS/app/java - a symlink that points to an installation of a Java JDK. Currently version 1.5.0_10 /NAS/app/lib - a symlink that points to a version of our application. /NAS/data - directory where our output is written All our machines have 2 processors (hyperthreaded) with 4gb of physical memory and 4gb of swap space. We limit the number of 'jobs' each machine can process at a given time to 6 (this number likely needs to change, but that does not enter into the current problem so please ignore it for the time being). Some of our jobs set a Max Heap size of 512mb, some others reserve a Max Heap size of 2048mb. Again, we realize we could go over our available memory if 6 jobs started on the same machine with the heap size set to 2048, but to our knowledge this has not yet occurred. The Problem Once and a while a Job will fail immediately with the following message: Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. We used to chalk this up to too many jobs running at the same time on the same machine. The problem happened infrequently enough (MAYBE once a month) that we'd just restart it and everything would be fine. The problem has recently gotten much worse. All of our jobs which request a max heap size of 2048m fail immediately almost every time and need to get restarted several times before completing. We've gone out to individual machines and tried executing them manually with the same result. Debugging It turns out that the problem only exists for our SuSE boxes. The reason it has been happening more frequently is becuase we've been adding more machines, and the new ones are SuSE. 'cat /proc/version' on the SuSE boxes give us: Linux version 2.6.5-7.244-bigsmp (geeko@buildhost) (gcc version 3.3.3 (SuSE Linux)) #1 SMP Mon Dec 12 18:32:25 UTC 2005 'cat /proc/version' on the RedHat boxes give us: Linux version 2.4.21-32.0.1.ELsmp ([email protected]) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-52)) #1 SMP Tue May 17 17:52:23 EDT 2005 'uname -a' gives us the following on BOTH types of machines: UTC 2005 i686 i686 i386 GNU/Linux No jobs are running on the machine, and no other processes are utilizing much memory. All of the processes currently running might be using 100mb total. 'top' currently shows the following: Mem: 4146528k total, 3536360k used, 610168k free, 132136k buffers Swap: 4194288k total, 0k used, 4194288k free, 3283908k cached 'vmstat' currently shows the following: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 610292 132136 3283908 0 0 0 2 26 15 0 0 100 0 If we kick off a job with the following command line (Max Heap of 1850mb) it starts fine: java/bin/java -Xmx1850M -cp helloworld.jar HelloWorld Hello World If we bump up the max heap size to 1875mb it fails: java/bin/java -Xmx1875M -cp helloworld.jar HelloWorld Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. It's quite clear that the memory currently being used is for Buffering/Caching and that's why so little is being displayed as 'free'. What isn't clear is why there is a magical 1850mb line where anything higher means Java can't start. Any explanations would be greatly appreciated.
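
    For what it's worth, a hard ceiling around 1850 MB is characteristic of a 32-bit JVM needing one contiguous block of virtual address space for the whole heap; a different kernel/library layout on the SuSE boxes can shrink the largest free block even when plenty of memory is "free". A small probe like the following (a sketch only, reusing the java symlink described above) makes the per-machine limit easy to chart:

        #!/bin/sh
        # Probe the largest -Xmx this box will accept, in 25 MB steps.
        for mx in 1800 1825 1850 1875 1900 1925; do
            if /NAS/app/java/bin/java -Xmx${mx}M -version >/dev/null 2>&1; then
                echo "${mx}M: heap reserved OK"
            else
                echo "${mx}M: could not reserve heap"
            fi
        done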

    Read the article

  • One letter game problem?

    - by Alex K
    Recently at a job interview I was given the following problem: Write a script capable of running on the command line as python It should take in two words on the command line (or optionally if you'd prefer it can query the user to supply the two words via the console). Given those two words: a. Ensure they are of equal length b. Ensure they are both words present in the dictionary of valid words in the English language that you downloaded. If so compute whether you can reach the second word from the first by a series of steps as follows a. You can change one letter at a time b. Each time you change a letter the resulting word must also exist in the dictionary c. You cannot add or remove letters If the two words are reachable, the script should print out the path which leads as a single, shortest path from one word to the other. You can /usr/share/dict/words for your dictionary of words. My solution consisted of using breadth first search to find a shortest path between two words. But apparently that wasn't good enough to get the job :( Would you guys know what I could have done wrong? Thank you so much. import collections import functools import re def time_func(func): import time def wrapper(*args, **kwargs): start = time.time() res = func(*args, **kwargs) timed = time.time() - start setattr(wrapper, 'time_taken', timed) return res functools.update_wrapper(wrapper, func) return wrapper class OneLetterGame: def __init__(self, dict_path): self.dict_path = dict_path self.words = set() def run(self, start_word, end_word): '''Runs the one letter game with the given start and end words. ''' assert len(start_word) == len(end_word), \ 'Start word and end word must of the same length.' self.read_dict(len(start_word)) path = self.shortest_path(start_word, end_word) if not path: print 'There is no path between %s and %s (took %.2f sec.)' % ( start_word, end_word, find_shortest_path.time_taken) else: print 'The shortest path (found in %.2f sec.) is:\n=> %s' % ( self.shortest_path.time_taken, ' -- '.join(path)) def _bfs(self, start): '''Implementation of breadth first search as a generator. The portion of the graph to explore is given on demand using get_neighboors. Care was taken so that a vertex / node is explored only once. ''' queue = collections.deque([(None, start)]) inqueue = set([start]) while queue: parent, node = queue.popleft() yield parent, node new = set(self.get_neighbours(node)) - inqueue inqueue = inqueue | new queue.extend([(node, child) for child in new]) @time_func def shortest_path(self, start, end): '''Returns the shortest path from start to end using bfs. ''' assert start in self.words, 'Start word not in dictionnary.' assert end in self.words, 'End word not in dictionnary.' paths = {None: []} for parent, child in self._bfs(start): paths[child] = paths[parent] + [child] if child == end: return paths[child] return None def get_neighbours(self, word): '''Gets every word one letter away from the a given word. We do not keep these words in memory because bfs accesses a given vertex only once. ''' neighbours = [] p_word = ['^' + word[0:i] + '\w' + word[i+1:] + '$' for i, w in enumerate(word)] p_word = '|'.join(p_word) for w in self.words: if w != word and re.match(p_word, w, re.I|re.U): neighbours += [w] return neighbours def read_dict(self, size): '''Loads every word of a specific size from the dictionnary into memory. 
''' for l in open(self.dict_path): l = l.decode('latin-1').strip().lower() if len(l) == size: self.words.add(l) if __name__ == '__main__': import sys if len(sys.argv) not in [3, 4]: print 'Usage: python one_letter_game.py start_word end_word' else: g = OneLetterGame(dict_path = '/usr/share/dict/words') try: g.run(*sys.argv[1:]) except AssertionError, e: print e
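
    One likely objection to the solution above is get_neighbours: it regex-scans the entire dictionary for every vertex visited, making the search roughly O(vertices x dictionary size). A common interview-friendly fix is to bucket words by wildcard patterns up front, so a neighbour lookup costs only the word length. A minimal sketch of that idea (plain Python, drop-in names are illustrative):

        import collections

        def build_buckets(words):
            # "c_t" -> ["cat", "cot", "cut"]: one bucket per wildcard pattern
            buckets = collections.defaultdict(list)
            for w in words:
                for i in range(len(w)):
                    buckets[w[:i] + '_' + w[i + 1:]].append(w)
            return buckets

        def neighbours(word, buckets):
            seen = set([word])
            for i in range(len(word)):
                for w in buckets[word[:i] + '_' + word[i + 1:]]:
                    if w not in seen:
                        seen.add(w)
                        yield w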

    Read the article

  • Trouble in ActiveX multi-thread invoke javascript callback routine

    - by code0tt
    everyone. I'm get some trouble in ActiveX programming with ATL. I try to make a activex which can async-download files from http server to local folder and after download it will invoke javascript callback function. My solution: run a thread M to monitor download thread D, when D is finish the job, M is going to terminal themself and invoke IDispatch inferface to call javascript function. **************** THERE IS MY CODE: **************** /* javascript code */ funciton download() { var xfm = new ActiveXObject("XFileMngr.FileManager.1"); xfm.download( 'http://somedomain/somefile','localdev:\\folder\localfile',function(msg){alert(msg);}); } /* C++ code */ // main routine STDMETHODIMP CFileManager::download(BSTR url, BSTR local, VARIANT scriptCallback) { CString csURL(url); CString csLocal(local); CAsyncDownload download; download.Download(this, csURL, csLocal, scriptCallback); return S_OK; } // parts of CAsyncDownload.h typedef struct tagThreadData { CAsyncDownload* pThis; } THREAD_DATA, *LPTHREAD_DATA; class CAsyncDownload : public IBindStatusCallback { private: LPUNKNOWN pcaller; CString csRemoteFile; CString csLocalFile; CComPtr<IDispatch> spCallback; public: void onDone(HRESULT hr); HRESULT Download(LPUNKNOWN caller, CString& csRemote, CString& csLocal, VARIANT callback); static DWORD __stdcall ThreadProc(void* param); }; // parts of CAsyncDownload.cpp void CAsyncDownload::onDone(HRESULT hr) { if(spCallback) { TRACE(TEXT("invoke callback function\n")); CComVariant vParams[1]; vParams[0] = "callback is working!"; DISPPARAMS params = { vParams, NULL, 1, 0 }; HRESULT hr = spCallback->Invoke(0, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD, &params, NULL, NULL, NULL); if(FAILED(hr)) { CString csBuffer; csBuffer.Format(TEXT("invoke failed, result value: %d \n"),hr); TRACE(csBuffer); }else { TRACE(TEXT("invoke was successful\n")); } } } HRESULT CAsyncDownload::Download(LPUNKNOWN caller, CString& csRemote, CString& csLocal, VARIANT callback) { CoInitializeEx(NULL, COINIT_MULTITHREADED); csRemoteFile = csRemote; csLocalFile = csLocal; pcaller = caller; switch(callback.vt){ case VT_DISPATCH: case VT_VARIANT:{ spCallback = callback.pdispVal; } break; default:{ spCallback = NULL; } } LPTHREAD_DATA pData = new THREAD_DATA; pData->pThis = this; // create monitor thread M HANDLE hThread = CreateThread(NULL, 0, ThreadProc, (void*)(pData), 0, NULL); if(!hThread) { delete pData; return HRESULT_FROM_WIN32(GetLastError()); } WaitForSingleObject(hThread, INFINITE); CloseHandle(hThread); CoUninitialize(); return S_OK; } DWORD __stdcall CAsyncDownload::ThreadProc(void* param) { LPTHREAD_DATA pData = (LPTHREAD_DATA)param; // here, we will create http download thread D // when download job is finish, call onDone method; pData->pThis->onDone(S_OK); delete pData; return 0; } **************** CODE FINISH **************** OK, above is parts of my source code, if I call onDone method in sub-thread, I will get OLE ERROR(-2147418113 (8000FFFF) Catastrophic failure.). Did I miss something? please help me to figure it out.
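
    One thing worth checking: the IDispatch pointer captured from the VARIANT belongs to the browser's STA apartment, and calling Invoke on that raw pointer from the worker thread is exactly the kind of cross-apartment call that can surface as 0x8000FFFF (E_UNEXPECTED). A hedged sketch of marshalling the callback into the thread instead - member and struct names follow the question, the exact wiring is an assumption:

        // On the STA thread, before CreateThread:
        LPSTREAM pStream = NULL;
        HRESULT hr = CoMarshalInterThreadInterfaceInStream(
            IID_IDispatch, spCallback, &pStream);
        pData->pStream = pStream;            // hand the stream to the thread

        // Inside ThreadProc, after CoInitializeEx on that thread:
        CComPtr<IDispatch> spLocalCallback;
        hr = CoGetInterfaceAndReleaseStream(
            pData->pStream, IID_IDispatch, (void**)&spLocalCallback);
        // ... run the download, then Invoke through spLocalCallback on this
        // thread, not through the original spCallback member.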

    Read the article

  • Read files from directory to create a ZIP hadoop

    - by Félix
    I'm looking for hadoop examples, something more complex than the wordcount example. What I want to do It's read the files in a directory in hadoop and get a zip, so I have thought to collect al the files in the map class and create the zip file in the reduce class. Can anyone give me a link to a tutorial or example than can help me to built it? I don't want anyone to do this for me, i'm asking for a link with better examples than the wordaccount. This is what I have, maybe it's useful for someone public class Testing { private static class MapClass extends MapReduceBase implements Mapper<LongWritable, Text, Text, BytesWritable> { // reuse objects to save overhead of object creation Logger log = Logger.getLogger("log_file"); public void map(LongWritable key, Text value, OutputCollector<Text, BytesWritable> output, Reporter reporter) throws IOException { String line = ((Text) value).toString(); log.info("Doing something ... " + line); BytesWritable b = new BytesWritable(); b.set(value.toString().getBytes() , 0, value.toString().getBytes() .length); output.collect(value, b); } } private static class ReduceClass extends MapReduceBase implements Reducer<Text, BytesWritable, Text, BytesWritable> { Logger log = Logger.getLogger("log_file"); ByteArrayOutputStream bout; ZipOutputStream out; @Override public void configure(JobConf job) { super.configure(job); log.setLevel(Level.INFO); bout = new ByteArrayOutputStream(); out = new ZipOutputStream(bout); } public void reduce(Text key, Iterator<BytesWritable> values, OutputCollector<Text, BytesWritable> output, Reporter reporter) throws IOException { while (values.hasNext()) { byte[] data = values.next().getBytes(); ZipEntry entry = new ZipEntry("entry"); out.putNextEntry(entry); out.write(data); out.closeEntry(); } BytesWritable b = new BytesWritable(); b.set(bout.toByteArray(), 0, bout.size()); output.collect(key, b); } @Override public void close() throws IOException { // TODO Auto-generated method stub super.close(); out.close(); } } /** * Runs the demo. */ public static void main(String[] args) throws IOException { int mapTasks = 20; int reduceTasks = 1; JobConf conf = new JobConf(Prue.class); conf.setJobName("testing"); conf.setNumMapTasks(mapTasks); conf.setNumReduceTasks(reduceTasks); MultipleInputs.addInputPath(conf, new Path("/messages"), TextInputFormat.class, MapClass.class); conf.setOutputKeyClass(Text.class); conf.setOutputValueClass(BytesWritable.class); FileOutputFormat.setOutputPath(conf, new Path("/czip")); conf.setMapperClass(MapClass.class); conf.setCombinerClass(ReduceClass.class); conf.setReducerClass(ReduceClass.class); // Delete the output directory if it exists already JobClient.runJob(conf); } }
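
    Links aside, two concrete issues in the reducer above: the bytes are collected before the ZipOutputStream is finished (so the zip's central directory never reaches the output), and every entry gets the same name "entry", which java.util.zip rejects as a duplicate. A hedged sketch of the reduce method with both points addressed, still using the old mapred API as in the question:

        public void reduce(Text key, Iterator<BytesWritable> values,
                OutputCollector<Text, BytesWritable> output, Reporter reporter)
                throws IOException {
            ByteArrayOutputStream bout = new ByteArrayOutputStream();
            ZipOutputStream zip = new ZipOutputStream(bout);
            int i = 0;
            while (values.hasNext()) {
                BytesWritable value = values.next();
                zip.putNextEntry(new ZipEntry("entry-" + (i++)));   // unique names
                zip.write(value.getBytes(), 0, value.getLength());  // buffer may be padded
                zip.closeEntry();
            }
            zip.finish();   // write the central directory before grabbing the bytes
            BytesWritable b = new BytesWritable();
            b.set(bout.toByteArray(), 0, bout.size());
            output.collect(key, b);
        }

    Using the same class as the combiner would also zip the data twice, so that setCombinerClass line is probably best dropped.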

    Read the article

  • C#: Parallel forms, multithreading and "applications in application"

    - by Harry
    First, what I need is - n WebBrowser-s, each in its own window doing its own job. The user should be able to see them all, or just one of them (or none), and to execute commands on each one. There is a main form, without a browser, this one contains the control panel for my application. The key feature is, each browser logs on to a secured web page and it needs to stay logged in as long as possible. Well, I've done it, but I'm afraid something is wrong with my approach. The question is: Is the code below valid, or rather a nasty hack which can cause problems: internal class SessionList : List<Session> { public SessionList(Server main) { MyRecords.ForEach(record => { var st = new System.Threading.Thread((data) => { var s = new Session(main, data as MyRecord); this.Add(s); Application.Run(s); Application.ExitThread(); }); st.SetApartmentState(System.Threading.ApartmentState.STA); st.Start(record); }); } // some other uninteresting methods here... } What's going on here? Session inherits from Form, so it creates a form, puts a WebBrowser into it, and has methods to operate on websites. WebBrowser requires being run in an STA thread, so we provide one for each browser. The most interesting part of it is Application.Run(s). It makes the newly created forms alive and interactive. The next Application.ExitThread() is called after the browser window is closed and its controls disposed. The main application stays alive to perform the rest of the cleanup job. When the user selects "Exit" or "Shutdown" - first the browser threads are ended, so Application.ExitThread() is called. It all works, but everywhere I can read about "the main GUI thread" - and here - I've created many GUI threads. I handle communication between the main form and my new forms (sessions) with thread-safe methods using Invoke(). It all works, so is it right or is it wrong? Is everything right with using Application.Run() more than once in one application? :) An ugly hack or a normal practice? This code dies if I start a WebBrowser from the session form thread. It beats me why. It works however if I start the WebBrowser (by changing its Url property) from any other thread. I'd like to know more about what is really happening in such an application. But most of all - I'd like to know if my idea of "applications in application" is OK. I'm not sure what exactly Application.Run() does. Without it, forms created in new threads were dead and unresponsive. How is it possible I can call Application.Run() many times? It seems to do exactly what it should, but it seems a little like an undocumented feature to me. I'm almost sure that the crashes are caused by the WebBrowser component itself (since it's not completely "managed" and "native"). But maybe it's something else.
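
    To the core question: Application.Run(form) simply starts a message loop on the calling thread, and one loop per STA thread is a supported pattern - cross-thread access then has to go through Invoke/BeginInvoke, which the code already does. A minimal sketch of that pattern, assuming Session derives from Form as described:

        Session session = null;
        var ready = new ManualResetEvent(false);
        var t = new Thread(() =>
        {
            session = new Session();          // created on its own UI thread
            var handle = session.Handle;      // force handle creation first
            ready.Set();
            Application.Run(session);         // pumps messages until the form closes
            Application.ExitThread();
        });
        t.SetApartmentState(ApartmentState.STA);
        t.IsBackground = true;
        t.Start();
        ready.WaitOne();

        // From the main form (or any other thread), marshal calls across:
        session.BeginInvoke((Action)(() => session.Text = "logged in"));

    The crash when driving the WebBrowser from the session form's own thread is often a sign that the navigation happens before the control's handle exists or before the message loop is pumping; deferring it to the form's Load or Shown event usually avoids that.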

    Read the article

  • get text from a certain <tr> tag

    - by WideBlade
    Is there a way to get the text in a dynamic way from a certain <tr> tag in the page? e.g. I've a page with a <tr> with the value "a1". I'd like to get only the text from this <tr> tag, and echo it into the page. is this possible? here is the HTML: <html><tr id='ieconn2' > <td><table width='100%'><tr><td valign='top'><table width='100%'><tr><td><script type="text/javascript"><!-- google_ad_client = "pub-4503439170693445"; /* 300x250, created 7/21/10 */ google_ad_slot = "7608120147"; google_ad_width = 300; google_ad_height = 250; //--> </script> <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"> </script><br>When Marshall and Lily fear they will never get pregnant, they see a specialist who can hopefully help move the process along. Meanwhile, Robin starts her new job.<br><br><b>Source: </b>CBS <br>&nbsp;</td></tr><tr><td><b>There are no foreign summaries for this episode:</b> <a href='/edit/shows/3918/episode_foreign_summary/?eid=1065002553&season=6'>Contribute</a></td></tr><tr><td><b>English Recap Available: </b> <a href='/How_I_Met_Your_Mother/episodes/1065002553?show_recap=1'>View Here</a></td></tr></table></td><td valign='top' width='250'><div align='left'> <img alt='How I Met Your Mother season 6 episode 13' src="http://images.tvrage.com/screencaps/20/3918/1065002553.jpg" width="248" border='0' > </div><div align='center'><a href='/How_I_Met_Your_Mother/episodes/1065002553?gallery=1'>6 gallery images</a></div></td></tr></table></td></tr><tr> <td background='/_layout_v3/buttons/title.jpg' height='39' width='631' align='center'> <table width='100%' cellpadding='0' cellspacing='0' style='margin: 1px 1px 1px 1px;'> <tr> <td align='left' style='cursor: pointer;' onclick="SwitchHeader('ieconn3','iehide3','26')" width='90'>&nbsp;<span style='font-size: 15px; font-weight: bold; color: black; padding-left: 8px;' id='iehide3'><img src='/_layout_v3/misc/minus.gif' width='26'></span></td> <td align='center' style='cursor: pointer;' onclick="SwitchHeader('ieconn3','iehide3','26')" ><h5 class='nospace'>Sponsored Links</h5><a name=''></a></td> <td align='left' width='90' >&nbsp;</td></tr></table></td> </tr></html> All I want to get is this text: "When Marshall and Lily fear they will never get pregnant, they see a specialist who can hopefully help move the process along. Meanwhile, Robin starts her new job. "
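
    If the page is being fetched and parsed server-side with PHP, a hedged sketch using DOMDocument/DOMXPath - the id 'ieconn2' comes from the markup above, the URL is a placeholder:

        <?php
        $html = file_get_contents('http://example.com/page.html');
        $doc = new DOMDocument();
        libxml_use_internal_errors(true);      // the markup above is not well formed
        $doc->loadHTML($html);
        $xpath = new DOMXPath($doc);
        $rows = $xpath->query("//tr[@id='ieconn2']");
        if ($rows->length > 0) {
            // textContent of the whole row also includes the ad-script text, so
            // in practice you would drill into the <td> that holds the summary.
            echo trim($rows->item(0)->textContent);
        }
        ?>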

    Read the article

  • PHP - Accessing a value selected in Combobox

    - by Pavitar
    I want to write a code that should let me select from a drop down list and onClick of a button load the data from a mysql database.However I cannot access the value selected in the drop down menu.I have tried to access them by $_POST['var_name'] but still can't do it. I'm new to PHP. Following is my code: <?php function load(){ $department = $_POST['dept']; $employee = $_POST['emp']; //echo "$department"; //echo "$employee"; $con = mysqli_connect("localhost", "root", "pwd", "payroll"); $rs = $con->query("select * from dept where deptno='$department'"); $row = $rs->fetch_row(); $n = $row[0]; $rs->free(); $con->close(); } ?> <html> <head> <title>Payroll</title> </head> <body> <h1 align="center">IIT Department</h1> <form method="post"> <table align="center"> <tr> <td> Dept Number: <select name="dept"> <option value="10" selected="selected">10</option> <option value="20">20</option> <option value="30">30</option> <option value="40">40</option> </select> </td> <td> <input type="button" value="ShowDeptEmp" name="btn1"> </td> <td> Job: <select name="job"> <option value="President" selected="selected">President</option> <option value="Manager">Manager</option> <option value="Clerk">Clerk</option> <option value="Salesman">Salesman</option> <option value="Analyst">Analyst</option> </select> </td> <td> <input type="button" value="ShowJobEmp" name="btn1"> </td> </tr> </table> </form> <?php if(isset($_POST['dept']) && $_POST['dept'] != "") load(); ?> </body> </html>
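
    The immediate reason $_POST['dept'] is empty is that an <input type="button"> never submits the form, so nothing is posted and load() never sees the selection. A hedged sketch of the relevant part - field names follow the markup above, and the query is moved to a prepared statement instead of string interpolation (get_result() needs the mysqlnd driver):

        <form method="post">
          <select name="dept">
            <option value="10" selected="selected">10</option>
            <option value="20">20</option>
          </select>
          <input type="submit" value="ShowDeptEmp" name="btn_dept">
        </form>

        <?php
        if (isset($_POST['btn_dept']) && isset($_POST['dept'])) {
            $con  = mysqli_connect("localhost", "root", "pwd", "payroll");
            $stmt = $con->prepare("SELECT * FROM dept WHERE deptno = ?");
            $stmt->bind_param("s", $_POST['dept']);
            $stmt->execute();
            $row = $stmt->get_result()->fetch_row();
            // ... use $row ...
        }
        ?>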

    Read the article

  • VB6 app not executing as scheduled task unless user is logged on

    - by Tedd Hansen
    Hi! I would greatly appreciate some help on this one - it may be a tricky one. :) Problem: I have a VB6 application which is set up as a scheduled task. It starts every time, but when executing CreateObject it fails if the user is not logged on to the computer. I am looking for information on what could cause this. My primary suspicion is that some Windows API call fails. Key points: Behaviour confirmed on Windows 2000, 2003, 2008 and Vista. The application executes as user X at the scheduled time, launched by the Windows Task Scheduler. It executes every time. The application does start! -- If user X is logged on via RDP it runs perfectly. (Note that the user doesn't need to be connected, only logged on.) -- If user X is not logged on to the computer, the application fails. Failure point: The application fails when using CreateObject() to instantiate a DCOM object which is also part of the application. The DCOM objects declare .dll references at startup (globally, at the top of the .bas file) and run a small startup function. The failure must be during startup, possibly in one of the .dll declarations. Thoughts: After some Googling my initial suspicion was directed at MAPI. From what I could see, MAPI requires the user to be logged on. The application has MAPI references. But even with all MAPI references removed it still does not work. What is the difference when a user is logged on? Registry mapping? Environment? explorer.exe is running. Isn't the user logged on when the application executes as that user? What info would help? A definitive answer would be truly great. Any information regarding any VB6 feature or Windows API that could act differently depending on whether the user is logged on or not would definitely help. Similar experiences may lead me in the right direction. Tips on debugging this are also welcome. Thanks! :)

    Read the article

  • How to break a Hibernate session?

    - by Péter Török
    In the Hibernate reference, it is stated several times that All exceptions thrown by Hibernate are fatal. This means you have to roll back the database transaction and close the current Session. You aren’t allowed to continue working with a Session that threw an exception. One of our legacy apps uses a single session to update/insert many records from files into a DB table. Each recourd update/insert is done in a separate transaction, which is then duly committed (or rolled back in case an error occurred). Then for the next record a new transaction is opened etc. But the same session is used throughout the whole process, even if a HibernateException was caught in the middle. We are using Oracle 9i btw with Hibernate 3.24.sp1 on JBoss 4.2. Reading the above in the book, I realized that this design may fail. So I refactored the app to use a separate session for each record update. In a unit test with a mock session factory, I could prove that it is now requesting a new session for each record update. So far, so good. However, we found no way to reproduce the session failure while testing the whole app (would this be a stress test btw, or ...?). We thought of shutting down the listener of the DB but we realized that the app is keeping a bunch of connections open to the DB, and the listener would not affect those connections. (This is a web app, activated once every night by a scheduler, but it can also be activated via the browser.) Then we tried to kill some of those connections in the DB while the app was processing updates - this resulted in some failed updates, but then the app happily continued. Apparently Hibernate is clever enough to reopen broken connections under the hood without breaking the whole session. So this might not be a critical issue, as our app seems to be robust enough even in its original form. However, the issue keeps bugging me. I would like to know: Under what circumstances does the Hibernate session really become unusable after a HibernateException was thrown? How to reproduce this in a test? (What's the proper term for such a test?)
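
    On the first question: the session generally becomes unusable when the exception leaves its persistence context out of sync with the database - typical triggers are a constraint violation on flush, a StaleObjectStateException from optimistic locking, or a transaction timeout, rather than a transient connection drop that the pool can hide. Reproducing it is usually an integration/fault-injection test rather than a unit test: force a constraint violation and then try to keep using the same session. A hedged sketch of the session-per-record shape the refactoring aims for (Record and the factory wiring are placeholders):

        void updateRecord(SessionFactory factory, Record record) {
            Session session = factory.openSession();
            Transaction tx = null;
            try {
                tx = session.beginTransaction();
                session.saveOrUpdate(record);
                tx.commit();
            } catch (HibernateException e) {
                if (tx != null) tx.rollback();
                throw e;                 // caller logs it; this session is gone
            } finally {
                session.close();         // never reused, even on success
            }
        }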

    Read the article

  • Fairness: Where can it be better handled?

    - by Srinivas Nayak
    Hi, I would like to share one of my practical experiences with multiprogramming here. Yesterday I wrote a multiprocess program. Modifications to shared resources were put inside critical sections protected by P(mutex) and V(mutex), and that critical-section code was placed in a common library. The library is used by concurrent applications (of my own). I had three applications that use the common code from the library and do their work independently. my library --------- work_on_shared_resource { P(mutex) get_shared_resource work_with_it V(mutex) } --------- my applications ----------- application1 { *[ work_on_shared_resource do_something_else_non_critical ] } application2 { *[ work_on_shared_resource do_something_else_non_critical ] } application3 { *[ work_on_shared_resource ] } *[...] denotes a loop. ------------ I had to run the applications on Linux. I had long assumed that the OS would schedule all the processes running under it fairly - in other words, that every process would get an equal share of the resources. While the first two applications were running, they worked perfectly well without deadlock. But when the third application started, it got the resource almost every time: since it does nothing in its non-critical region, it re-acquires the shared resource while the other tasks are busy elsewhere. As a result the other two applications were almost completely halted. When the third application was forcibly terminated, the previous two resumed their work as before. I think this is a case of starvation - the first two applications were starved. Now, how can we ensure fairness? I have started to believe that the OS scheduler is innocent and blind here: whoever wins the race gets the largest share of CPU and resources. Should we try to ensure fairness among resource users inside the critical-section code in the library? Or should we leave it to the applications to be liberal rather than greedy? To my mind, adding fairness code to the common library would be an overwhelming task. On the other hand, relying on the applications will never ensure 100% fairness either: an application that does very little work after using the shared resource will always win the race, while one that does heavy processing afterwards will always starve. What is the best practice in this case - where should fairness be ensured, and how? Sincerely, Srinivas Nayak
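
    One concrete way to put fairness into the library itself, rather than trusting each application to be polite, is to replace the plain P/V pair with a FIFO "ticket" discipline so waiters are served in arrival order. A hedged sketch in C for the in-process case (for separate processes the same idea needs a process-shared mutex/condvar, or System V semaphores):

        #include <pthread.h>

        typedef struct {
            pthread_mutex_t m;
            pthread_cond_t  cv;
            unsigned long   next_ticket;   /* next ticket to hand out           */
            unsigned long   now_serving;   /* ticket currently allowed to enter */
        } fair_lock_t;

        #define FAIR_LOCK_INIT { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

        void fair_lock(fair_lock_t *l)
        {
            pthread_mutex_lock(&l->m);
            unsigned long my = l->next_ticket++;   /* take a place in the queue */
            while (my != l->now_serving)
                pthread_cond_wait(&l->cv, &l->m);
            pthread_mutex_unlock(&l->m);
        }

        void fair_unlock(fair_lock_t *l)
        {
            pthread_mutex_lock(&l->m);
            l->now_serving++;                      /* admit the next waiter */
            pthread_cond_broadcast(&l->cv);
            pthread_mutex_unlock(&l->m);
        }

    work_on_shared_resource would then call fair_lock()/fair_unlock() around the work, and a tight looper can no longer jump the queue.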

    Read the article

  • I'm cloning a table row that contains an input that's being set to jQuery TimeEntry that errors when

    - by Kendall Crouch
    I'm adding a TimeEntry to a page where the user can add a row (clone) to a table. The row that is cloned is hidden (display:none). The user clicks a button and javascript is run to clone the row which renames all of the fields and then appends the new row to the table. <tr id="blankRowShift"> <td> <input type="text" id="timeStart" name="timeStart" /> </td> <td> <input type="text" id="timeEnd" name="timeEnd" /> </td> <td> <select id="userLevel"> <option value="0">Please Select One</option> <option value="2">Admin</option> <option value="1">Employee</option> <option value="3">Scheduler</option> </select> </td> </tr> var r = $("#tbl #blankRowShift").clone().removeAttr("id"); $("#timeStart", r).attr("name", "timeStart" + nn).attr("id", "timeStart" + nn); $("#timeEnd", r).attr("name", "timeEnd" + nn).attr("id", "timeEnd" + nn); $("#userLevel option:nth(0)", r).attr("selected", "selected"); $("#userLevel", r).attr("name", "userLevel" + nn).attr("id", "userLevel" + nn).attr("value", 0); $("#tbl").append(r); $("#timeStart" + nn).timeEntry({ show24Hours: false, showSeconds: false, timeSteps: [1, 15, 0], spinnerImage: 'includes/js/spinnerOrange.png', beforeShow: customRangeStart }); $("#timeStart" + nn).timeEntry('setTime', new Date()); $("#timeEnd" + nn).timeEntry({ show24Hours: false, showSeconds: false, timeSteps: [1, 15, 0], spinnerImage: 'includes/js/spinnerOrange.png', beforeShow: customRangeEnd }); $("#timeEnd" + nn).timeEntry('setTime', new Date()); The spinner works just fine and the times can be changed. Then when submitting the form, I validate the time. The getTime errors in jQuery with the message "elem is undefined var id = elem[ expando ];". I've placed the statement 'console.dir(input)' in the _getTimeTimeEntry: function and it returns nothing for the cloned fields. el = $("#timeStart" + i); if (el.timeEntry("getTime") == null) {
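
    One hedged guess: the validation loop ends up calling getTime on a selection that is empty or was never initialised (the hidden template row, or a skipped index), and the plugin then dereferences a missing element. Guarding the lookup narrows it down - the marker class name below is an assumption about what the plugin adds, adjust it to whatever timeEntry actually uses:

        var el = $("#timeStart" + i);
        if (el.length && el.hasClass("hasTimeEntry")) {   // assumed marker class
            if (el.timeEntry('getTime') == null) {
                // flag this row as invalid ...
            }
        } else {
            console.log('no initialised timeStart field for row ' + i);
        }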

    Read the article

  • Startup in Windows 7

    - by iira
    Hi, I am trying to add my program run in Windows 7 startup, but it does'nt works. My program has embedded uac manifest. My current way is by adding String Value at HKCU..\Run I found a manual solution for Vista from http://social.technet.microsoft.com/Forums/en/w7itprosecurity/thread/81c3c1f2-0169-493a-8f87-d300ea708ecf 1. Click Start, right click on Computer and choose “Manage”. 2. Click “Task Scheduler” on the left panel. 3. Click “Create Task” on the right panel. 4. Type a name for the task. 5. Check “Run with highest privileges”. 6. Click Actions tab. 7. Click “New…”. 8. Browse to the program in the “Program/script” box. Click OK. 9. On desktop, right click, choose New and click “Shortcut”. 10. In the box type: schtasks.exe /run /tn TaskName where TaskName is the name of task you put in on the basics tab and click next. 11. Type a name for the shortcut and click Finish. Additionally, you need to run the saved scheduled task shortcut to run the program instead of running the application shortcut to ignore the IAC prompt. When startup the system will run the program via the original shortcut. Therefore you need to change the location to run the saved task. Please: 1. Open Regedit. 2. Find the entry of the startup item in Registry. It will be stored in one of the following branches. HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\Run HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run 3. Double-click on the correct key, change the path to the saved scheduled task you created. Is there any free code to add item with privileges option in scheduled task? I havent found the free one in torry.net Thanks a lot.
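
    The manual Task Scheduler steps can be scripted, which also answers the "free code" part: creating the task from the command line (for example via ShellExecute from the installer) with the highest run level reproduces the "Run with highest privileges" checkbox. A hedged one-liner - the task name and program path are placeholders:

        schtasks /Create /TN "MyAppElevated" /TR "C:\Program Files\MyApp\MyApp.exe" /SC ONLOGON /RL HIGHEST /F

    With /SC ONLOGON the task itself fires at logon, so the HKCU\...\Run value is no longer needed at all; alternatively, keep a shortcut to schtasks /Run /TN "MyAppElevated" as in the quoted procedure.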

    Read the article

  • Directly Jump to another C++ function

    - by maligree
    I'm porting a small academic OS from TriCore to ARM Cortex (Thumb-2 instruction set). For the scheduler to work, I sometimes need to JUMP directly to another function without modifying the stack nor the link register. On TriCore (or, rather, on tricore-g++), this wrapper template (for any three-argument-function) works: template< class A1, class A2, class A3 > inline void __attribute__((always_inline)) JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 ) { typedef void (* __attribute__((interrupt_handler)) Jump3)( A1, A2, A3); ( (Jump3)func )( a1, a2, a3 ); } //example for using the template: JUMP3( superDispatch, this, me, next ); This would generate the assembler instruction J (a.k.a. JUMP) instead of CALL, leaving the stack and CSAs unchanged when jumping to the (otherwise normal) C++ function superDispatch(SchedulerImplementation* obj, Task::Id from, Task::Id to). Now I need an equivalent behaviour on ARM Cortex (or, rather, for arm-none-linux-gnueabi-g++), i.e. generate a B (a.k.a. BRANCH) instruction instead of BLX (a.k.a. BRANCH with link and exchange). But there is no interrupt_handler attribute for arm-g++ and I could not find any equivalent attribute. So I tried to resort to asm volatile and writing the asm code directly: template< class A1, class A2, class A3 > inline void __attribute__((always_inline)) JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 ) { asm volatile ( "mov.w r0, %1;" "mov.w r1, %2;" "mov.w r2, %3;" "b %0;" : : "r"(func), "r"(a1), "r"(a2), "r"(a3) : "r0", "r1", "r2" ); } So far, so good, in my theory, at least. Thumb-2 requires function arguments to be passed in the registers, i.e. r0..r2 in this case, so it should work. But then the linker dies with undefined reference to `r6' on the closing bracket of the asm statement ... and I don't know what to make of it. OK, I'm not the expert in C++, and the asm syntax is not very straightforward... so has anybody got a hint for me? A hint to the correct __attribute__ for arm-g++ would be one way, a hint to fix the asm code would be another. Another way maybe would be to tell the compiler that a1..a3 should already be in the registers r0..r2 when the asm statement is entered (I looked into that a bit, but did not find any hint).

    Read the article

  • TS-7800 Hangs on bootup

    - by Reid
    I have a TS-7800, and it typically boots from the SD card inserted in it. When I tried to boot it up today, it hung on the syslog line. I am now having "Read only file system" problems. What has gone wrong? Bootup console: >> Copyright (c) 2008, Technologic Systems >> Booting from SD card... . . . . >> Booting to SD Card... INIT: version 2.86 booting Starting the hotplug events dispatcher: udevd. Synthesizing the initial hotplug events...done. Waiting for /dev to be fully populated...done. mount: can't find / in /etc/fstab or /etc/mtab Cleaning up ifupdown...rm: cannot remove `/etc/network/run/ifstate': Read-only file system Loading kernel modules...done. Checking all file systems... fsck 1.37 (21-Mar-2005) ... done. none on /dev/pts type devpts (rw,gid=5,mode=620) /etc/init.d/rcS: line 39: /tmp/.clean: Read-only file system Setting up networking...done. Setting up IP spoofing protection: rp_filter. Enabling packet forwarding...done. Configuring network interfaces...ifup: failed to open statefile /etc/network/run/ifstate: Read-only file system done. Starting portmap daemon: portmap. /etc/init.d/rcS: line 39: /tmp/.clean: Read-only file system /etc/init.d/rcS: line 24: /var/run/utmp: Read-only file system rm: cannot remove `/var/lib/urandom/random-seed': Read-only file system urandom start: failed. Recovering nvi editor sessions... done. INIT: Entering runlevel: 3 Starting system log daemon: syslogd . Starting kernel log daemon: klogd. Starting MTA: open: Read-only file system touch: cannot touch `/var/lib/exim4/config.autogenerated.tmp': Read-only file system chown: cannot access `/var/lib/exim4/config.autogenerated.tmp': No such file or directory chmod: cannot access `/var/lib/exim4/config.autogenerated.tmp': No such file or directory chmod: changing permissions of `/var/lib/exim4/config.autogenerated': Read-only file system /usr/sbin/update-exim4.conf: line 260: cannot create temp file for here document: Read-only file system /usr/sbin/update-exim4.conf: line 387: /var/lib/exim4/config.autogenerated.tmp: Read-only file system 2002-01-01 01:31:36 Cannot open main log file "/var/log/exim4/mainlog": Read-only file system: euid=0 egid=0 2002-01-01 01:31:36 non-existent configuration file(s): /var/lib/exim4/config.autogenerated.tmp 2002-01-01 01:31:36 Cannot open main log file "/var/log/exim4/mainlog": Read-only file system: euid=0 egid=0 exim: could not open panic log - aborting: see message(s) above Invalid new configfile /var/lib/exim4/config.autogenerated.tmp not installing /var/lib/exim4/config.autogenerated.tmp to /var/lib/exim4/config.autogenerated Starting internet superserver: inetd. Starting OpenBSD Secure Shell server: sshd. Starting NFS common utilities: statdStarting periodic command scheduler: cron/usr/sbin/cron: can't open or create /var/run/crond.pid: Read-only file system . Starting web server (apache2)...(30)Read-only file system: apache2: could not open error log file /var/log/apache2/error.log. Unable to open logs failed! Debian GNU/Linux 3.1 ts7800 ttyS0 ts7800 login:
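
    The pattern above - fsck runs, then everything that writes fails with "Read-only file system" - usually means the root filesystem on the SD card was found dirty (or is throwing I/O errors) and was left mounted read-only. A hedged recovery sketch; the device name is a placeholder for wherever the card shows up when checked from another Linux machine:

        # On the board, see why it is read-only and try to leave that state:
        dmesg | grep -i -e 'read-only' -e 'i/o error'
        mount -o remount,rw /

        # Safer: pull the SD card, check it from another Linux box, then
        # re-insert it and boot again:
        fsck -f /dev/sdX2        # replace sdX2 with the card's root partition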

    Read the article

  • How to configure emacs by using this file?

    - by Andy Leman
    From http://public.halogen-dg.com/browser/alex-emacs-settings/.emacs?rev=1346 I got: (setq load-path (cons "/home/alex/.emacs.d/" load-path)) (setq load-path (cons "/home/alex/.emacs.d/configs/" load-path)) (defconst emacs-config-dir "~/.emacs.d/configs/" "") (defun load-cfg-files (filelist) (dolist (file filelist) (load (expand-file-name (concat emacs-config-dir file))) (message "Loaded config file:%s" file) )) (load-cfg-files '("cfg_initsplit" "cfg_variables_and_faces" "cfg_keybindings" "cfg_site_gentoo" "cfg_conf-mode" "cfg_mail-mode" "cfg_region_hooks" "cfg_apache-mode" "cfg_crontab-mode" "cfg_gnuserv" "cfg_subversion" "cfg_css-mode" "cfg_php-mode" "cfg_tramp" "cfg_killbuffer" "cfg_color-theme" "cfg_uniquify" "cfg_tabbar" "cfg_python" "cfg_ack" "cfg_scpaste" "cfg_ido-mode" "cfg_javascript" "cfg_ange_ftp" "cfg_font-lock" "cfg_default_face" "cfg_ecb" "cfg_browser" "cfg_orgmode" ; "cfg_gnus" ; "cfg_cyrillic" )) ; enable disabled advanced features (put 'downcase-region 'disabled nil) (put 'scroll-left 'disabled nil) (put 'upcase-region 'disabled nil) ; narrow cursor ;(setq-default cursor-type 'hbar) (cua-mode) ; highlight current line (global-hl-line-mode 1) ; AV: non-aggressive scrolling (setq scroll-conservatively 100) (setq scroll-preserve-screen-position 't) (setq scroll-margin 0) (custom-set-variables ;; custom-set-variables was added by Custom. ;; If you edit it by hand, you could mess it up, so be careful. ;; Your init file should contain only one such instance. ;; If there is more than one, they won't work right. '(ange-ftp-passive-host-alist (quote (("redbus2.chalkface.com" . "on") ("zope.halogen-dg.com" . "on") ("85.119.217.50" . "on")))) '(blink-cursor-mode nil) '(browse-url-browser-function (quote browse-url-firefox)) '(browse-url-new-window-flag t) '(buffers-menu-max-size 30) '(buffers-menu-show-directories t) '(buffers-menu-show-status nil) '(case-fold-search t) '(column-number-mode t) '(cua-enable-cua-keys nil) '(user-mail-address "[email protected]") '(cua-mode t nil (cua-base)) '(current-language-environment "UTF-8") '(file-name-shadow-mode t) '(fill-column 79) '(grep-command "grep --color=never -nHr -e * | grep -v .svn --color=never") '(grep-use-null-device nil) '(inhibit-startup-screen t) '(initial-frame-alist (quote ((width . 80) (height . 40)))) '(initsplit-customizations-alist (quote (("tabbar" "configs/cfg_tabbar.el" t) ("ecb" "configs/cfg_ecb.el" t) ("ange\\-ftp" "configs/cfg_ange_ftp.el" t) ("planner" "configs/cfg_planner.el" t) ("dired" "configs/cfg_dired.el" t) ("font\\-lock" "configs/cfg_font-lock.el" t) ("speedbar" "configs/cfg_ecb.el" t) ("muse" "configs/cfg_muse.el" t) ("tramp" "configs/cfg_tramp.el" t) ("uniquify" "configs/cfg_uniquify.el" t) ("default" "configs/cfg_font-lock.el" t) ("ido" "configs/cfg_ido-mode.el" t) ("org" "configs/cfg_orgmode.el" t) ("gnus" "configs/cfg_gnus.el" t) ("nnmail" "configs/cfg_gnus.el" t)))) '(ispell-program-name "aspell") '(jabber-account-list (quote (("[email protected]")))) '(jabber-nickname "AVK") '(jabber-password nil) '(jabber-server "halogen-dg.com") '(jabber-username "alex") '(remember-data-file "~/Plans/remember.org") '(safe-local-variable-values (quote ((dtml-top-element . "body")))) '(save-place t nil (saveplace)) '(scroll-bar-mode (quote right)) '(semantic-idle-scheduler-idle-time 432000) '(show-paren-mode t) '(svn-status-hide-unmodified t) '(tool-bar-mode nil nil (tool-bar)) '(transient-mark-mode t) '(truncate-lines f) '(woman-use-own-frame nil)) ; ?? ????? ??????? y ??? n? 
(fset 'yes-or-no-p 'y-or-n-p) (custom-set-faces ;; custom-set-faces was added by Custom. ;; If you edit it by hand, you could mess it up, so be careful. ;; Your init file should contain only one such instance. ;; If there is more than one, they won't work right. '(compilation-error ((t (:foreground "tomato" :weight bold)))) '(cursor ((t (:background "red1")))) '(custom-variable-tag ((((class color) (background dark)) (:inherit variable-pitch :foreground "DarkOrange" :weight bold)))) '(hl-line ((t (:background "grey24")))) '(isearch ((t (:background "orange" :foreground "black")))) '(message-cited-text ((((class color) (background dark)) (:foreground "SandyBrown")))) '(message-header-name ((((class color) (background dark)) (:foreground "DarkGrey")))) '(message-header-other ((((class color) (background dark)) (:foreground "LightPink2")))) '(message-header-subject ((((class color) (background dark)) (:foreground "yellow2")))) '(message-separator ((((class color) (background dark)) (:foreground "thistle")))) '(region ((t (:background "brown")))) '(tooltip ((((class color)) (:inherit variable-pitch :background "IndianRed1" :foreground "black"))))) The above is a python emacs configure file. Where should I put it to use it? And, are there any other changes I need to make?
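
    As for where it goes: save it as ~/.emacs (or load it from there), and note that it will not work verbatim - the first two lines hard-code /home/alex, and load-cfg-files expects every cfg_* file in the list to exist under ~/.emacs.d/configs/. A hedged minimal change, with the paths generalised and the config list trimmed to what you actually have:

        ;; ~/.emacs
        (setq load-path (cons (expand-file-name "~/.emacs.d/") load-path))
        (setq load-path (cons (expand-file-name "~/.emacs.d/configs/") load-path))
        ;; keep only the cfg_* entries that exist in ~/.emacs.d/configs/, e.g.
        ;; (load-cfg-files '("cfg_keybindings" "cfg_python"))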

    Read the article

  • Broken cups installation on a ubuntu server 64

    - by user67046
    Hi, I am having trouble with an cups installation. It seems to be in a broken state. When i try to reinstall it it stalls, the same if i try to remove it completely. I am running the server version 64 bit of Ubuntu 10.10 with kernel Linux version 2.6.35-22-server. When i try to start the cups daemon with the following command sudo service cups start It just stays there and nothing happens. I have tried to remove it, to be able to reinstall it, with the following command sudo apt-get purge cups It finally stalls with the following message Removing cups ... After that nothing happens. The process tree for the apt-get command looks like this. 1404 1404 1404 ? 00:00:00 sshd 26495 26495 26495 ? 00:00:00 sshd 26581 26495 26495 ? 00:00:00 sshd 26582 26582 26582 pts/4 00:00:00 bash 27158 27158 26582 pts/4 00:00:00 apt-get 27172 27172 27172 pts/2 00:00:00 dpkg 27176 27172 27172 pts/2 00:00:00 cups.prerm 27178 27172 27172 pts/2 00:00:00 stop I have tried to leave the process running for a while to see if i get any error messages but without success. To get out of it I have to kill the processes. sudo dpkg --configure cups dpkg: error processing cups (--configure): package cups is already installed and configured Errors were encountered while processing: cups sudo dpkg --status cups Package: cups Status: purge ok installed Priority: optional Section: net Installed-Size: 8292 Maintainer: Ubuntu Developers <[email protected]> Architecture: amd64 Version: 1.4.4-6ubuntu2.3 Replaces: cupsddk-drivers (<< 1.4.0) Provides: cupsddk-drivers Depends: libavahi-client3 (>= 0.6.16), libavahi-common3 (>= 0.6.16), libc6 (>= 2.7), libcups2 (>= 1.4.4-3~), libcupscgi1 (>= 1.4.2), libcupsdriver1 (>= 1.4.0), libcupsimage2 (>= 1.4.0), libcupsmime1 (>= 1.4.0), libcupsppdc1 (>= 1.4.0), libdbus-1-3 (>= 1.0.2), libgcc1 (>= 1:4.1.1), libgnutls26 (>= 2.7.14-0), libgssapi-krb5-2 (>= 1.8+dfsg), libijs-0.35, libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpaper1, libpoppler7, libslp1, libstdc++6 (>= 4.1.1), libusb-0.1-4 (>= 2:0.1.12), zlib1g (>= 1:1.1.4), debconf (>= 1.2.9) | debconf-2.0, upstart-job, poppler-utils (>= 0.12), procps, ghostscript, lsb-base (>= 3), cups-common (>= 1.4.4), cups-client (>= 1.4.4-6ubuntu2.3), ssl-cert (>= 1.0.11), adduser, bc, ttf-freefont, cups-ppdc Recommends: foomatic-filters (>= 4.0), cups-driver-gutenprint, ghostscript-cups Suggests: cups-bsd, foomatic-db-compressed-ppds | foomatic-db, hplip, xpdf-korean | xpdf-japanese | xpdf-chinese-traditional | xpdf-chinese-simplified, cups-pdf, smbclient (>= 3.0.9), udev Breaks: foomatic-filters (<< 4.0) Conflicts: cupsddk-drivers (<< 1.4.0) Conffiles: /etc/fonts/conf.d/99pdftoopvp.conf a5221cfad70a981c80864229ef56586d /etc/logrotate.d/cups 5bb41fa9900f0d1c565954405a2bd7c4 /etc/default/cups 2b436fbb1a32b82b6aba45a76a1d7e40 /etc/pam.d/cups ff2488324854f7b1e892bb0df062d5f0 /etc/init/cups.conf 1a3cd022e8474e3d2b44640f33ce68e3 /etc/ufw/applications.d/cups 29e98a6d850da251e180c3d68dec2bd3 /etc/apparmor.d/usr.sbin.cupsd 60c4b26bfd5c033baa3dd48a3b2e9911 /etc/cups/cupsd.conf e2c7ec15835ea0939e5e86f7c6efcc03 /etc/cups/snmp.conf 2326a8af1e112676d55245bc5eb459ca /etc/cups/cupsd.conf.default a68d54d76021e857dd1d64edf57d36c5 Description: Common UNIX Printing System(tm) - server The Common UNIX Printing System (or CUPS(tm)) is a printing system and general replacement for lpd and the like. It supports the Internet Printing Protocol (IPP), and has its own filtering driver model for handling various document types. . 
This package provides the CUPS scheduler/daemon and related files. Original-Maintainer: Debian CUPS Maintainers <[email protected]> Would be greatful if someone could provide some help on how to solve this issue.
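
    The process tree gives a hint: dpkg is stuck inside cups.prerm, which is itself waiting on an Upstart "stop" that never returns. A hedged sequence for breaking the deadlock and reinstalling - run the steps in order and stop as soon as one unblocks it:

        sudo stop cups               # or kill the hung "stop" process by PID
        sudo killall -9 cupsd
        sudo dpkg --remove --force-remove-reinstreq cups
        sudo apt-get -f install
        sudo apt-get install cups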

    Read the article

  • KVM guest io is much slower than host io: is that normal?

    - by Evolver
    I have a Qemu-KVM host system setup on CentOS 6.3. Four 1TB SATA HDDs working in Software RAID10. Guest CentOS 6.3 is installed on separate LVM. People say that they see guest performance almost equal to host performance, but I don't see that. My i/o tests are showing 30-70% slower performance on guest than on host system. I tried to change scheduler (set elevator=deadline on host and elevator=noop on guest), set blkio.weight to 1000 in cgroup, change io to virtio... But none of these changes gave me any significant results. This is a guest .xml config part: <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/dev/vgkvmnode/lv2'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </disk> There are my tests: Host system: iozone test # iozone -a -i0 -i1 -i2 -s8G -r64k random random KB reclen write rewrite read reread read write 8388608 64 189930 197436 266786 267254 28644 66642 dd read test: one process and then four simultaneous processes # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct 1073741824 bytes (1.1 GB) copied, 4.23044 s, 254 MB/s # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=1024 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=2048 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=3072 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=4096 1073741824 bytes (1.1 GB) copied, 14.4528 s, 74.3 MB/s 1073741824 bytes (1.1 GB) copied, 14.562 s, 73.7 MB/s 1073741824 bytes (1.1 GB) copied, 14.6341 s, 73.4 MB/s 1073741824 bytes (1.1 GB) copied, 14.7006 s, 73.0 MB/s dd write test: one process and then four simultaneous processes # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct 1073741824 bytes (1.1 GB) copied, 6.2039 s, 173 MB/s # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct 1073741824 bytes (1.1 GB) copied, 32.7173 s, 32.8 MB/s 1073741824 bytes (1.1 GB) copied, 32.8868 s, 32.6 MB/s 1073741824 bytes (1.1 GB) copied, 32.9097 s, 32.6 MB/s 1073741824 bytes (1.1 GB) copied, 32.9688 s, 32.6 MB/s Guest system: iozone test # iozone -a -i0 -i1 -i2 -s512M -r64k random random KB reclen write rewrite read reread read write 524288 64 93374 154596 141193 149865 21394 46264 dd read test: one process and then four simultaneous processes # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024 1073741824 bytes (1.1 GB) copied, 5.04356 s, 213 MB/s # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=2048 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=3072 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=4096 1073741824 bytes (1.1 GB) copied, 24.7348 s, 43.4 MB/s 1073741824 bytes (1.1 GB) copied, 24.7378 s, 43.4 MB/s 1073741824 bytes (1.1 GB) copied, 24.7408 s, 43.4 MB/s 1073741824 bytes (1.1 GB) copied, 24.744 s, 43.4 MB/s dd write test: one process and then four simultaneous processes # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct 1073741824 bytes (1.1 GB) copied, 10.415 s, 103 MB/s # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test2 bs=1M 
count=1024 oflag=direct & dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct 1073741824 bytes (1.1 GB) copied, 49.8874 s, 21.5 MB/s 1073741824 bytes (1.1 GB) copied, 49.8608 s, 21.5 MB/s 1073741824 bytes (1.1 GB) copied, 49.8693 s, 21.5 MB/s 1073741824 bytes (1.1 GB) copied, 49.9427 s, 21.5 MB/s I wonder is that normal situation or did I missed something?
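
    Two changes that often narrow the guest/host gap with this setup are addressing the LV as a block device rather than a file, and bypassing the host page cache with native AIO so the guest and host stop double-caching. A hedged variant of the disk element above:

        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none' io='native'/>
          <source dev='/dev/vgkvmnode/lv2'/>
          <target dev='vda' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </disk>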

    Read the article

  • vlans on openvz, centos 6

    - by arheops
    i have centos 6 with openvz installed on it, switch with vlan support. I need following setup: 1) eth0 on openvz have be tagged multiple vlans. 2) each virtualhost have to be in single vlan. yes,i already read wiki on openvz, but it is just not work. I have on main server interface eth0.108 and able ping address on that interface(using nootbook on untagged port vlan 108), but i not able ping address inside container. Main node: [root@box1 conf]# ifconfig eth0 Link encap:Ethernet HWaddr D0:67:E5:F4:11:60 inet6 addr: fe80::d267:e5ff:fef4:1160/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:506 errors:0 dropped:0 overruns:0 frame:0 TX packets:25 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:68939 (67.3 KiB) TX bytes:1780 (1.7 KiB) Interrupt:16 Memory:c0000000-c0012800 eth0.108 Link encap:Ethernet HWaddr D0:67:E5:F4:11:60 inet addr:10.11.108.3 Bcast:10.11.111.255 Mask:255.255.252.0 inet6 addr: fe80::d267:e5ff:fef4:1160/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:238 errors:0 dropped:0 overruns:0 frame:0 TX packets:19 errors:0 dropped:12 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:25890 (25.2 KiB) TX bytes:926 (926.0 b) eth1 Link encap:Ethernet HWaddr D0:67:E5:F4:11:61 inet addr:192.168.23.233 Bcast:192.168.23.255 Mask:255.255.255.0 inet6 addr: fe80::d267:e5ff:fef4:1161/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1967 errors:0 dropped:0 overruns:0 frame:0 TX packets:356 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:365298 (356.7 KiB) TX bytes:115007 (112.3 KiB) Interrupt:17 Memory:c2000000-c2012800 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:7 errors:0 dropped:0 overruns:0 frame:0 TX packets:7 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:784 (784.0 b) TX bytes:784 (784.0 b) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet6 addr: fe80::1/128 Scope:Link UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:3 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) veth108.0 Link encap:Ethernet HWaddr 00:18:51:DA:94:D5 inet6 addr: fe80::218:51ff:feda:94d5/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:639 errors:0 dropped:0 overruns:0 frame:0 TX packets:5 errors:0 dropped:1 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:17996 (17.5 KiB) TX bytes:308 (308.0 b) virtual node [root@pbx108 /]# ifconfig eth0.108 Link encap:Ethernet HWaddr 00:18:51:CA:B5:C5 inet addr:10.11.108.1 Bcast:10.11.111.255 Mask:255.255.252.0 inet6 addr: fe80::218:51ff:feca:b5c5/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5 errors:0 dropped:0 overruns:0 frame:0 TX packets:685 errors:0 dropped:2 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:308 (308.0 b) TX bytes:19284 (18.8 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:683 errors:0 dropped:0 overruns:0 frame:0 TX packets:683 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:76288 (74.5 KiB) TX bytes:76288 (74.5 KiB) /etc/vz/conf/108.conf # RAM PHYSPAGES="0:4000M" # Swap SWAPPAGES="0:512M" # Disk quota parameters (in form of 
softlimit:hardlimit) DISKSPACE="200G:200G" DISKINODES="20000000:22000000" QUOTATIME="0" # CPU fair scheduler parameter CPUUNITS="4000" VE_ROOT="/vz/root/$VEID" VE_PRIVATE="/vz/private/$VEID" OSTEMPLATE="centos-6-x86_64" ORIGIN_SAMPLE="vswap-256m" NETIF="ifname=eth0.108,mac=00:18:51:CA:B5:C5,host_ifname=veth108.0,host_mac=00:18:51:DA:94:D5" NAMESERVER="8.8.8.8" HOSTNAME="pbx108.localhost" IP_ADDRESS=""
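
    With a veth-based container the host-side veth interface has to be bridged (or routed) into the tagged VLAN; an address on eth0.108 alone only makes the hardware node reachable, which matches what is observed. A hedged sketch of the bridge wiring on the node - the bridge name is an assumption, and the container keeps its 10.11.108.1/22 address on its own eth0.108:

        brctl addbr br108
        brctl addif br108 eth0.108            # tagged uplink
        brctl addif br108 veth108.0           # container's host-side veth
        ip link set br108 up
        ip addr del 10.11.108.3/22 dev eth0.108
        ip addr add 10.11.108.3/22 dev br108  # host's own address moves to the bridge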

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data Daily and/or Weekly, which we process & merge into their existing database. During business hours, load in the servers are pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support over-head to basically "schedule" the ETL such that they don't over-lap, and move clients/schedules around if they out-grow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently chose a server based off how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know if any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term). 
Ultimately VMWare's Distributed Resource Scheduler addresses this: you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff
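For illustration only, here is a minimal sketch of the "queue" half of such a queue-and-node system: a bounded-concurrency dispatcher that shells out to DTExec and never runs more than a fixed number of packages at once. The package paths and the concurrency limit are hypothetical, and the harder part of the question (picking the least busy node based on CPU/memory/disk utilization) is not shown.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class EtlQueueDispatcher
{
    static void Main()
    {
        // Hypothetical package paths -- substitute your own.
        var packages = new List<string>
        {
            @"D:\ETL\ClientA_Daily.dtsx",
            @"D:\ETL\ClientB_Daily.dtsx",
            @"D:\ETL\ClientC_Weekly.dtsx"
        };

        // Allow at most two packages to run concurrently on this server.
        var slots = new SemaphoreSlim(2);
        var runs = new List<Task>();

        foreach (var package in packages)
        {
            string path = package;   // avoid closure-over-loop-variable surprises
            slots.Wait();            // block until an execution slot frees up
            runs.Add(Task.Factory.StartNew(() =>
            {
                try { RunPackage(path); }
                finally { slots.Release(); }
            }));
        }

        Task.WaitAll(runs.ToArray());
    }

    static void RunPackage(string packagePath)
    {
        // DTExec /F executes a package stored on the file system.
        var startInfo = new ProcessStartInfo("dtexec", "/F \"" + packagePath + "\"")
        {
            UseShellExecute = false
        };
        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }
    }
}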

    Read the article

  • MySQL crash. Unknown cause. Signal 11

    - by fortmac
This is a database that I installed ~6 months ago and it had been running fine. It is currently running on Ubuntu 12.04. Attempting to connect to MySQL causes this error:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

Then there's:

$ sudo mysqld

which returns:

130702 15:38:54 [Note] Plugin 'FEDERATED' is disabled.
130702 15:38:54 InnoDB: The InnoDB memory heap is disabled
130702 15:38:54 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130702 15:38:54 InnoDB: Compressed tables use zlib 1.2.3.4
130702 15:38:54 InnoDB: Initializing buffer pool, size = 128.0M
130702 15:38:54 InnoDB: Completed initialization of buffer pool
130702 15:38:54 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
130702 15:38:54 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
130702 15:38:55 InnoDB: Waiting for the background threads to start
130702 15:38:56 InnoDB: 1.1.8 started; log sequence number 5201901917
130702 15:38:56 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
130702 15:38:56 [Note] - '127.0.0.1' resolves to '127.0.0.1';
130702 15:38:56 [Note] Server socket created on IP: '127.0.0.1'.
130702 15:38:56 [Note] Event Scheduler: Loaded 0 events
130702 15:38:56 [Note] mysqld: ready for connections.
Version: '5.5.28-0ubuntu0.12.04.3'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
19:39:02 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=1
max_threads=151
thread_count=1
connection_count=1
It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f9509e51530
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 7f94f1d3de60 thread_stack 0x30000
mysqld(my_print_stacktrace+0x29)[0x7f95083427b9]
mysqld(handle_fatal_signal+0x483)[0x7f9508209b43]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9506f5bcb0]
mysqld(+0x320e1c)[0x7f9508113e1c]
mysqld(_ZN4JOIN15alloc_func_listEv+0x9c)[0x7f950812391c]
mysqld(_ZN4JOIN7prepareEPPP4ItemP10TABLE_LISTjS1_jP8st_orderS7_S1_S7_P13st_select_lexP18st_select_lex_unit+0x918)[0x7f9508124658]
mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x130)[0x7f950812d060]
mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f9508132fbc]
mysqld(+0x2f6714)[0x7f95080e9714]
mysqld(_Z21mysql_execute_commandP3THD+0x16d8)[0x7f95080f1178]
mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f95080f5e0f]
mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f95080f7260]
mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f950819b80d]
mysqld(handle_one_connection+0x50)[0x7f950819b870]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9506f53e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9506684cbd]

Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
Query (7f94e0004b80): is an invalid pointer
Connection ID (thread ID): 1
Status: NOT_KILLED

The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.

I'm at a loss. What other reports would be useful in diagnosing this? /var/log/mysql.err & /var/log/mysql.log are empty.

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions add overhead, too few partitions leave processors idle.  Striking the right balance between the two extremes is the goal we should aim for.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework toward making better decisions.

First off, I'd like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual workloads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good as or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL.

I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system.

In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks mean that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we'd just process one item at a time, then reuse that thread to process the next, etc.  There's a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially.

The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool's current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.
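As a rough illustration of that idea, here is a minimal sketch (the element count and chunk size below are arbitrary) comparing a one-delegate-per-element loop with a chunked loop built on Partitioner.Create, which hands each task a contiguous index range to process in a tight sequential loop:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ChunkingDemo
{
    static void Main()
    {
        double[] results = new double[1000000];

        // One delegate invocation per element: simple, but the per-task
        // overhead is paid a million times.
        Parallel.For(0, results.Length, i => results[i] = Math.Sqrt(i));

        // Chunked: Partitioner.Create yields index ranges, so each task runs
        // a plain sequential loop over its own slice of the data.
        var ranges = Partitioner.Create(0, results.Length, 50000);
        Parallel.ForEach(ranges, range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                results[i] = Math.Sqrt(i);
            }
        });
    }
}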
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: always try to keep each processor working, but also try to keep the individual partitions as large as possible.

When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in exactly this manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we're doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance.

The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to near-optimal load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however.

When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don't have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default Load Balancing partitioning scheme, the colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes).

Most of the time, this is fantastic behavior, and most likely will outperform any custom-written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.
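For concreteness, here is a minimal sketch of a static Partitioner<T> along those lines; it is illustrative only. It reports SupportsDynamicPartitions as false and only implements GetPartitions, so it is consumed below through PLINQ's AsParallel rather than Parallel.ForEach (which expects a partitioner that supplies dynamic partitions). As noted above, the built-in partitioners are almost always the better choice.

using System;
using System.Collections.Generic;
using System.Collections.Concurrent;
using System.Linq;

// A static chunk partitioner: GetPartitions hands back one enumerator per
// contiguous slice of the source list; no dynamic partitions are offered.
public class StaticChunkPartitioner<T> : Partitioner<T>
{
    private readonly IList<T> _source;

    public StaticChunkPartitioner(IList<T> source)
    {
        _source = source;
    }

    public override bool SupportsDynamicPartitions
    {
        get { return false; }   // static partitioning only
    }

    public override IList<IEnumerator<T>> GetPartitions(int partitionCount)
    {
        var partitions = new List<IEnumerator<T>>(partitionCount);
        int chunkSize = (_source.Count + partitionCount - 1) / partitionCount;
        for (int p = 0; p < partitionCount; p++)
        {
            partitions.Add(EnumerateChunk(p * chunkSize, chunkSize));
        }
        return partitions;
    }

    private IEnumerator<T> EnumerateChunk(int start, int count)
    {
        for (int i = start; i < start + count && i < _source.Count; i++)
        {
            yield return _source[i];
        }
    }
}

public static class StaticChunkPartitionerDemo
{
    public static void Main()
    {
        var values = Enumerable.Range(0, 1000).ToList();

        // Static-only partitioners are consumed through PLINQ.
        new StaticChunkPartitioner<int>(values)
            .AsParallel()
            .ForAll(v => Console.WriteLine(Math.Sqrt(v)));
    }
}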

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
Recently there have been several reports of TFS databases growing too fast and growing too big.  Notably, this has been observed when one has started to use more features of the Testing system.  Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes to correct bugs which have been found during this process.  Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are:

- Anu's very important blog post here describes both the problem and solutions to handle it.  She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective.
- Brian Harry's blog post here describes the problem too.
- This forum thread describes the problem with some solution hints.
- Ravi Shanker's blog post here describes best practices on solving this (TBP).
- Grant Holliday's blog post here describes strategies to use the Test Attachment Cleaner both to detect space problems and how to rectify them.

The problem can be divided into the following areas:

- Publishing of test results from builds
- Publishing of manual test results and their attachments in particular
- Publishing of deployment binaries for use during a test run
- Bugs in SQL Server preventing total cleanup of data

(All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run.  Some of this data can grow rather large, like IntelliTrace logs and video recordings.  Also, the pushing of binaries, which happens for automated test runs (including tests run during a build using code coverage, which will include all the files in the deployment folder), contributes a lot to the size of the attached data.

In order to handle this systematically, I have set up a 3-stage process:

1. Find out if you have a database space issue
2. Set up your TFS server to minimize potential database issues
3. If you have the "problem", clean up the database and otherwise keep it clean

Analyze the data. Are your database(s) growing?  Are unused test results growing out of proportion? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or, you can use a set of queries. I find queries often faster to use because I can tweak them the way I want them.  But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems. (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed.  I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
use master
select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
FROM sys.master_files
where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration'
order by size desc

Doing this on one of our SQL servers gives the following results; it is pretty easy to see on which collection to start the work.

Find out which tables are possibly too large. Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result. From Grant's blog we learnt that tbl_Content is in the Version Control category, so the only really big issue we have here is tbl_AttachmentContent.

Find out which team projects have possibly too large attachments. In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case:

use Tfs_DefaultCollection
select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
from dbo.tbl_Attachment as a
inner join tbl_testrun as tr on a.testrunid=tr.testrunid
inner join tbl_project as p on p.projectid=tr.projectid
group by p.projectname
order by sum(a.compressedlength) desc

In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years. As can be seen here it is pretty obvious the "Byggtjeneste – Projects" are the main team project to take care of, with the ones on lines 2-4 as the next ones.

Check which attachment types take up the most space. It can be nice to know which attachment types take up the space, so run the following query:

use Tfs_DefaultCollection
select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
from dbo.tbl_Attachment as a
inner join tbl_testrun as tr on a.testrunid=tr.testrunid
inner join tbl_project as p on p.projectid=tr.projectid
group by a.attachmenttype
order by sum(a.compressedlength) desc

We then got this result. From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

Check which file types, by their extension, take up the most space. Run the following query:

use Tfs_DefaultCollection
select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension, sum(compressedlength)/1024 as SizeInKB
from tbl_Attachment
group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
order by sum(compressedlength) desc

This gives a result like this. Now you should have collected enough information to tell you whether you need to do something, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.
Binaries will still be uploaded if:

- Code coverage is enabled in the test settings.
- You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

The hotfix should be installed on:

- The build servers (the build agents)
- The machine hosting the Test Controller
- Local development computers (Visual Studio)
- Local test computers (MTM)

It is not required to install it on the TFS Server, test agents or the build controller – it has no effect on these programs. If you use SQL Server 2008 R2 you should also install CU 10 (or later).  This CU fixes a potential problem of hanging "ghost" files.  This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2.

Workaround: If you suspect hanging ghost files, they can be (with some mental effort) deduced from the ghost counters using the following SQL query:

use master
SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
index_type_desc, ghost_record_count, version_ghost_record_count, record_count, avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

The problem is a stalled ghost cleanup process.  Stop all components that depend on the SQL Server, like the TFS Server and SPS services – that is, all applications that connect to the SQL Server. Then restart the SQL Server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each has its own set of CUs. When it comes I will add the link here to that CU. The "hanging ghost file" issue came up after one has run the TAC and deleted enormous amounts of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this.

And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly. "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system." This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrinkdb command.

Cleaning out the attachments. The TAC is run from the command line using a set of parameters and controlled by a settings file.  The parameters point out a server URI, including the team project collection, and also point at a specific team project. So in order to run this for multiple team projects regularly one has to set up a script to run the TAC multiple times, once for each team project.
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This kind of setting can be useful to clean out all items, both in a one-off clean-up operation and in a general maintenance schedule. There are two scenarios we need to consider:

1. Cleaning up an existing overgrown database
2. Maintaining a server to avoid an overgrown database, using a scheduled TAC

1. Cleaning up a database which has grown too big due to these attachments. This job is a "Once" job.  We do this once and then move on to make sure it won't happen again, by taking the actions in 2) below.  In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and not bother about the smaller stuff. That can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file, and just remove the large attachments, or you can choose to remove any old items.  Grant's settings file is an example of the last one.  A settings file to remove only large attachments could look like this:

<!-- Scenario : Remove large files -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
  </Attachment>
</DeletionCriteria>

Or like this: if you want to remove only dll's and pdb's above that size, add an Extensions section.  Without that section, all extensions will be deleted.

<!-- Scenario : Remove large files of type dll's and pdb's -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="dll" />
      <Include value="pdb" />
    </Extensions>
  </Attachment>
</DeletionCriteria>

Before you start up your scheduled maintenance, you should clear out all older items.

2. Scheduled maintenance using the TAC. Run a schedule every night that removes old items, and removes them in small batches.  It is important to run this often, like every night, in order to keep the number of deleted items low. That way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs.  Doing this every night ensures that only small amounts of data are deleted.

<!-- Scenario : Remove old items except if they have active or resolved bugs -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="180" />
  </TestRun>
  <Attachment />
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved"/>
  </LinkedBugs>
</DeletionCriteria>

In my experience there are projects which are left with active or resolved work items, although no further work is done.  It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step approach: use the schedule above, with no LinkedBugs section, as a sweeper task taking away all data older than you could care about.  Then have another scheduled TAC task that more specifically takes out attachments that you are not likely to use.
This task could be much more specific, and based on your analysis clean out what you know is troublesome data.

<!-- Scenario : Remove specific files early -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="30" />
  </TestRun>
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="iTrace"/>
      <Include value="dll"/>
      <Include value="pdb"/>
      <Include value="wmv"/>
    </Extensions>
  </Attachment>
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

The readme document for the TAC says that it recognizes "internal" extensions, but it does in fact accept any extension. To run the tool, use the following command:

tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

Shrinking the database. You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted.  In this case you SHOULD do it, to free up all that space.  But, after the shrink operation you should do a rebuild of the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with an index rebuild afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to do this.

    Read the article

  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
    As you are watching Oracle's Virtualization Strategy Webcast and exploring the great virtualization offerings of Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC --  highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise Systems with Chip Multithreading (CMT) technology. Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide supported platforms' resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system. Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture. Oracle VM Server for SPARC integrates both the industry-leading CMT capability of the UltraSPARC T1, T2 and T2 Plus processors and the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization. Oracle VM Server for SPARC delivers the following: Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost. This enables you to meet the most aggressive price/performance requirement Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk mutipathing and failover as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it's fully integrated with Solaris FMA (Fault Management Architecture), which enables predictive self healing. CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you to better align with your IT and business priorities. Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Improve active domain migration performance by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance. Dynamic Crypto Control - Dynamically add and remove cryptographic units (aka MAU) to and from active domains. Also, migrate active domains that have cryptographic units. Physical-to-virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image. Use this image to facilitate OS migration into the virtualized environment. Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system. CPU Power Management - Implement power saving by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle. Advanced Network Configuration - Configure the following network features to obtain more flexible network configurations, higher performance, and scalability: Jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O. 
Official Certification Based On Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC). Affordable, Full-Stack Enterprise Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack. SPARC Server Virtualization Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform to have the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC provides another important feature with OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server with finer granularity for computing resources.  For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is naturally built into the CPU for lower overhead and higher performance. Your organizations can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment. Management with Oracle Enterprise Manager Ops Center The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers. It helps you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure. Ease of Deployment with Configuration Assistant The Oracle VM Server for SPARC Configuration Assistant can help you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. The Configuration Assistant is available as both a graphical user interface (GUI) and terminal-based tool. Oracle Solaris Cluster HA Support The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies can be controlled and managed independently. These are managed as if they were running in a classical Solaris Cluster hardware node. Supported Systems Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise Systems with CMT technology. 
UltraSPARC T2 Plus Systems
·   Sun SPARC Enterprise T5140 Server
·   Sun SPARC Enterprise T5240 Server
·   Sun SPARC Enterprise T5440 Server
·   Sun Netra T5440 Server
·   Sun Blade T6340 Server Module
·   Sun Netra T6340 Server Module

UltraSPARC T2 Systems
·   Sun SPARC Enterprise T5120 Server
·   Sun SPARC Enterprise T5220 Server
·   Sun Netra T5220 Server
·   Sun Blade T6320 Server Module
·   Sun Netra CP3260 ATCA Blade Server

Note that UltraSPARC T1 systems are supported on earlier versions of the software. Sun SPARC Enterprise Systems with CMT technology come with the right to use (RTU) of Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully-integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.

    Read the article

  • How Expedia Made My New Bride Cry

    - by Lance Robinson
    Tweet this? Email Expedia and ask them to give me and my new wife our honeymoon? When Expedia followed up their failure with our honeymoon trip with a complete and total lack of acknowledgement of any responsibility for the problem and endless loops of explaining the issue over and over again - I swore that they would make it right. When they brought my new bride to tears, I got an immediate and endless supply of motivation. I hope you will help me make them make it right by posting our story on Twitter, Facebook, your blog, on Expedia itself, and when talking to your friends in person about their own travel plans.   If you are considering using them now for an important trip - reconsider. Short summary: We arrived early for a flight - but Expedia had made a mistake with the data they supplied to JetBlue and Emirates, which resulted in us not being able to check in (one leg of our trip was missing)!  At the time of this post, three people (myself, my wife, and an exceptionally patient JetBlue employee named Mary) each spent hours on the phone with Expedia.  I myself spent right at 3 hours (according to iPhone records), Lauren spent an hour and a half or so, and poor Mary was probably on the phone for a good 3.5 hours.  This is after 5 hours total at the airport.  If you add up our phone time, that is nearly 8 hours of phone time over a 5 hour period with little or no help, stall tactics (?), run-around, denial, shifting of blame, and holding. Details below (times are approximate): First, my wife and I were married yesterday - June 18th, the 3 year anniversary of our first date. She is awesome. She is the nicest person I have ever known, a ton of fun, absolutely beautiful in every way. Ok enough mushy - here are the dirty details. 2:30 AM - Early Check-in Attempt - we attempted to check-in for our flight online. Some sort of technology error on website, instructed to checkin at desk. 4:30 AM - Arrive at airport. Try to check-in at kiosk, get the same error. We got to the JetBlue desk at RDU International Airport, where Mary helped us. Mary discovered that the Expedia provided itinerary does not match the Expedia provided tickets. We are informed that when that happens American, JetBlue, and others that use the same software cannot check you in for the flight because. Why? Because the itinerary was missing a leg of our flight! Basically we were not shown in the system as definitely being able to make it home. Mary called Expedia and was put on hold by their automated system. 4:55 AM - Mary, myself, and my brand new bride all waited for about 25 minutes when finally I decided I would make a call myself on my iPhone while Mary was on the airport phone. In their automated system, I chose "make a new reservation", thinking they might answer a little more quickly than "customer service". Not surprisingly I was connected to an Expedia person within 1 minute. They informed me that they would have to forward me to a customer service specialist. I explained to them that we were already on hold for that and had been for nearly half an hour, that we were going on our honeymoon and that our flight would be leaving soon - could they please help us. "Yes, I will help you". I hand the phone to JetBlue Mary who explains the situation 3 or 4 times. Obviously I couldn't hear both ends of the conversation at this point, but the Expedia person explained what the problem was by stating exactly what Mary had just spent 15 minutes explaining. 
Mary calmly confirms that this is the problem, and asks Expedia to re-issue the itinerary. Expedia tells Mary that they'll have to transfer her to customer service. Mary asks for someone specific so that we get an answer this time, and goes on hold. Mary gets connected, explains the situation, and then Mary's connection gets terminated.

5:10 AM - Mary calls back to the Expedia automated system again, and we wait for about 5 minutes on hold this time before I pick up my iPhone and call Expedia again myself. Again I go to sales, a person picks up the phone in less than a minute. I explain the situation and let them know that we are now very close to missing our flight for our honeymoon, could they please help us. "Yes, I will help you". Again I give the phone to Mary who provides them with a call back number in case we get disconnected again and explains the situation again. More back and forth with Expedia doing nothing but repeating the same questions, Mary answering the questions with the same information she provided in the original explanation, and Expedia simply restating the problem. Mary again asks them to re-issue the itinerary, and explains that doing so will fix the problem. Expedia again repeats the problem instead of fixing it, and Mary's connection gets terminated.

5:20 AM - Mary again calls back to Expedia. My beautiful bride also calls on her own phone. At this point she is struggling to hold back her tears, stumbling through an explanation of all that has happened and that we are about to miss our flight. Please help us. "Yes, I will help". My beautiful bride's connection gets terminated. Ok, maybe this disconnection isn't an accident. We've now been disconnected 3 times on two different phones.

5:45 AM - I walk away and pleadingly beg a person to help me. They "escalate" the issue to "Rosy" (sp?) at Expedia. I go through the whole song and dance again with Rosy, who gives me the same treatment Mary was given. Rosy blames JetBlue for not having the correct data. Meanwhile Mary is on the phone with Emirates Air (the airline for the second leg of our trip), who agrees with JetBlue that Expedia's data isn't up to date. We are informed by two airport employees that issues like this with Expedia are not uncommon, and that the fix is simple. On the phone with Rosy, I ask her to re-issue the itinerary because we are about to miss our flight. She again explains the problem to me. At this point, I am standing at the window, pleading with Rosy to help us get to our honeymoon, watching our airplane. Then our airplane leaves without us.

6:03 AM - At this point we have missed our flight. Re-issuing the itinerary is no longer a solution. I ask Rosy to start from the beginning and work us up a new trip. She says that she cannot do that. She says that she needs to talk to JetBlue and Emirates and find out why we cannot check-in for our flight. I remind Rosy that our flight has already left - I just watched it taxi away - it no longer matters why (not to mention the fact that we already knew why, and have known why since 4:30 AM), and have known the solution since 4:30 AM. Rosy, can you please book a new trip? Yes, but it will cost $400. Excuse me? Now you can, but it will cost ME to fix your mistake? Rosy says that she can escalate the situation to her supervisor but that will take 1.5 hours.
6:15 AM - I told Rosy that if they had re-issued the itinerary as JetBlue asked (at 4:30 AM), my new wife and I might be on the airplane now instead of dealing with this on the phone and missing the beginning (and how much more?) of our honeymoon. Rosy said that it was not necessary to re-issue the itinerary. Out of curiosity, I asked Rosy if there was some financial burden on them to re-issue the itinerary. "No", said Rosy. I asked her if it was a large time burden on Expedia to re-issue the itinerary. "No", said Rosy. I directly asked Rosy: Why wouldn't Expedia have re-issued the itinerary when JetBlue asked? No answer. I asked Rosy: If you had re-issued the itinerary at 4:30, isn't it possible that I would be on that flight right now? She actually surprised me by answering "Yes" to that question. So I pointed out that it followed that Expedia was responsible for the fact that we missed our flight, and she immediately went into more about how the problem was with JetBlue - but now it was ALSO an Emirates Air problem as well. I tell Rosy to go ahead and escalate the issue again, and please call me back in that 1.5 hours (which now is about 1 hour and 10 minutes away).

6:30 AM - I start tweeting my frustration with iPhone. It's now pretty much impossible for us to make it to The Maldives by 3pm, which is the time at which we would need to arrive in order to be allowed service to the actual island where we are staying. Expedia has now given me the run-around for 2 hours, caused me to miss my flight, and worst of all caused my amazing new wife Lauren to miss our honeymoon. You think I was mad? No. Furious. It's ok to make mistakes - but to refuse to fix them and to ruin our honeymoon? No, not ok, Expedia. I swore right then that Expedia would make this right.

7:45 AM - JetBlue Mary is still talking her tail off to other people in JetBlue and Emirates Air. Mary works it out so that if Expedia simply books a new trip, JetBlue and Emirates will both waive all the fees. Now we just have to convince Expedia to fix their mistake and get us on our way! Around this time Expedia Rosy calls me back! I inform her of the excellent work of JetBlue Mary - that JetBlue and Emirates both will waive the fees so Expedia can fix their mistake and get us going on our way. She says that she sees documentation of this in her system and that she needs to put me on hold "for 1 to 10 minutes" to talk to Emirates Air (why I'm not exactly sure). I say ok.

8:45 AM - After an hour on hold, Rosy comes on the line and asks me to hold more. I ask her to call me back.

9:35 AM - I put down the iPhone Twitter app and pick up the laptop. You think I made some noise with my iPhone? Heh

11:25 AM - Expedia follows me and sends a canned "We're sorry, DM us the details".  If you look at their Twitter feed, 16 out of the most recent 20 tweets are exactly the same canned response.  The other 4?  Ads.  Um - #MultiFAIL?

To Expedia:  You now have had (as explained above) 8 hours of 3 different people explaining our situation, you know the email address of our Expedia account, you know my web blog, you know my Twitter address, you know my phone number.  You also know how upset you have made both me and my new bride by treating us with such a ... non caring, scripted, uncooperative, argumentative, and possibly even deceitful manner.  In the wise words of the great Kenan Thompson of SNL: "FIX IT!".  And no, I'm NOT going away until you make this right. Period.

11:45 AM - Expedia corporate office called.
The woman I spoke to was very nice and apologetic.  She listened to me tell the story again, she says she understands the problem and she is going to work to resolve it.  I don't have any details on what exactly that resolution might be, she said she will call me back in 20 minutes.  She found out about the problem via Twitter.  Thank you Twitter, and all of you who helped.  Hopefully social media will win my wife and I our honeymoon, and hopefully Expedia will encourage their customer service teams to treat their customers properly.

12:22 PM - Spoke to Fran again from the Expedia corporate office.  She has a flight for us tonight.  She is booking it now.  We will arrive at our honeymoon destination of beautiful Veligandu Island Resort only 1 day late.  She cannot confirm today, but she expects that Expedia will pay for the lost honeymoon night.  Thank you everyone for your help.  I will reflect more on this whole situation and confirm its resolution after our flight is 100% confirmed.  For now, I'm going to take a breather and go kiss my wonderful wife!

1:50 PM - Have not yet received the promised phone call.  We did receive an email with a new itinerary for a flight but the booking is not for specific seats, so there is no guarantee that my wife and I will be able to sit together.  With the original booking I carefully selected our seats for every segment of our trip.  I decided to call into the phone number that Fran from the Expedia corporate office gave me.  Its automated voice system identified itself as "Tier 3 Support".  I am currently still on hold with them, I have not gotten through to a human yet.

1:55 PM - Fran from Expedia called me back.  She confirmed us as booked.  She called the airlines to confirm.  Unfortunately, Expedia was unwilling or unable to allow us any type of seat selection.  It is possible that I won't get to sit next to the woman I married less than a day ago on our 40 total hours of flight time (there and back).  In addition, our seats could be the worst seats on the planes, with no reclining seat back or right next to the restroom.  Despite this fact (which in my opinion is huge), the horrible inconvenience, the hours at the airport, and the negative Internet publicity that Expedia is receiving, Expedia declined to offer us any kind of upgrade or to mark us as SFU (suitable for upgrade).  Since they didn't offer - I asked, and was rejected.  I am grateful to finally be heading in the right direction, but not only did Expedia horribly botch this job from the very beginning, they followed that botch job with near zero customer service, followed by a verbally apologetic but otherwise half-hearted resolution.  If this works out favorably for us, great.  If not - I'm not done making noise, Expedia.  You owe us, and I expect you to make it right.  You haven't quite done that yet.

Thanks - Thank you to Twitter.  Thanks to all those who sympathize with us and helped us get the attention of Expedia, since three people (one of them an airline employee) using Expedia's normal channels of communication for many hours didn't help.  Thanks especially to my PowerShell and Sharepoint friends, my local friends, and those connectors who encouraged me and spread my story.

5:15 PM - Love Wins - After all this, Lauren and I are exhausted.  We both took a short nap, and when we woke up we talked about the last 24 hours.  It was a big, amazing, story-filled 24 hours.  I said that Expedia won, but Lauren said no.  She pointed out how lucky we are.  We are in love and married.
We have wonderful family and friends.  We are both hard-working successful people who love what they do.  We get to go to an amazing exotic destination for our honeymoon like Veligandu in The Maldives...  That's a lot of good.  Expedia didn't win.  This was (is) a big loss for Expedia.  It is a public blemish for all to see.  But Lauren and I did win, big time.  Expedia may not have made things right - but things are right for us.  Post in progress... I will relay any further comments (or lack of) from Expedia soon, as well as an update on confirmation of their repayment of our lost resort room rates.  I'll also post a picture of us on our honeymoon as soon as I can!

    Read the article

< Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >