Search Results

Search found 11321 results on 453 pages for 'shared libraries'.


  • Dalvik + Java licensing question

    - by Andrew Bate
    This is a licensing question about the Dalvik and J2SE core libraries. In particular the license governing java.util.concurrent.SynchronousQueue. The license header of the class in the JDK source states that it is GPLv2 only (see grepcode). However, the same file in the Dalvik core libraries seems to be governed by the Apache 2 license only (see android source). How is this possible? I didn't think you could take GPLv2 source and re-license it as Apache 2. (It's obvious they did: a comment above the Java Doc even says "removed link to collections framework docs"!) I'm asking because I have a GPLv3 project and would like to include a derivative work of some source from the core libraries (either Dalvik or J2SE) but publish it under GPLv3. I thought I could do this with Apache 2, but not GPLv2. I know that the J2SE class source is itself derivative work from public domain source, but the changes from the original are substantial. (The original is available at gee.cs.oswego.edu if you are interested.) Therefore the android source really is just a copy of the J2SE source, but published under Apache 2 instead of GPLv2. Is Google really allowed to do this?

    Read the article

  • What Technology can Render Medium Scale 3d Environments in a Web-Browser

    - by JakeM
    I intend to make a web application that displays 3D environments that can be navigated by dragging (with a finger or mouse, depending on the platform). The web app will render 3D environments of development sites, including contours, water pipeline locations, buildings, etc. I am trying to decide which technology/libraries to use to create a web app that will work on the Android web browser, iOS Safari, IE9, Safari, Firefox and Chrome, and also which technology will provide speed of development. I understand that this is 'asking for my cake and eating it too'/'asking for the moon', but I don't know all the technologies out there - there may be advanced libraries that can render 3D environments across many web browsers, including the main smartphone ones, that I don't know of. The 3D rendering would not be highly detailed buildings or water with effects, but rather simple 3D representations of these objects. The environment would be navigable by dragging around, and you could view the landscape in layers (view only contour lines, view only underground pipelines, view only sewerage pipes, etc.). Are there any 3D libraries for web browsers out there? Is there a way to run OpenGL (or OpenGL ES) through a web browser? What technology would you use if you were making this kind of app/web app that should work on desktop Windows, Android, iOS and Windows Phone? Is there any technology I have failed to mention that would be good for this kind of project? I am tending towards a browser-driven web app because I get that cross-platform ability (it even works on Linux and Mac OS with compatible web browsers). Also, I know of CSS3 transforms that can create cubes that rotate in 3D space (NOTE: only works for WebKit browsers - so no IE :( ). But is CSS3 robust enough to render whole 3D environments? Do you think it could? Maybe I could use HTML5 canvases for this? Can Google Maps create custom 3D maps?

    Read the article

  • Compiling C++ code with mingw under 12.04

    - by golemit
    I tried to set up compilation of C++ projects under my Ubuntu 12.04 with mingw and the Qt libraries. The idea was to get executables independent of the target Windows versions and the development environments of my colleagues. It was successfully implemented under openSUSE 12.2 with mingw32 and some additional libraries, including mingw32-libqt4 and some others. Fine. However, when trying to do the same under Ubuntu 12.04 with mingw-w64, including the latest Qt 4.8.3 libraries copied from Windows, there were always errors. No luck. The typical errors in these attempts can be seen in the extract below. The commands used:
      qmake -spec /path_to_my_conf/win32-x-g++ my_project.pro
      make
    Can someone give a hint about the source of the problem? I would appreciate good advice. Serge
    Some extracts from the LOG:
      ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0xec): undefined reference to `QDialog::accept()'
      ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0xf0): undefined reference to `QDialog::reject()'
      ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x104): undefined reference to `non-virtual thunk to QWidget::devType() const'
      ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x108): undefined reference to `non-virtual thunk to QWidget::paintEngine() const'
      ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x10c): undefined reference to `non-virtual thunk to QWidget::getDC() const'
      ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x110): undefined reference to `non-virtual thunk to QWidget::releaseDC(HDC__*) const'
      ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x114): undefined reference to `non-virtual thunk to QWidget::metric(QPaintDevice::PaintDeviceMetric) const'
      ./.obj/qrc_images.o:qrc_images.cpp:(.text+0x24): undefined reference to `__imp___Z21qRegisterResourceDataiPKhS0_S0_'
      ./.obj/qrc_images.o:qrc_images.cpp:(.text+0x64): undefined reference to `__imp___Z23qUnregisterResourceDataiPKhS0_S0_'
      collect2: ld returned 1 exit status
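
    One way to narrow this down (a sketch only, not a verified fix - the toolchain prefix and paths below are placeholders for whatever the win32-x-g++ mkspec actually points at) is to confirm which cross compiler and which Qt import libraries the link step really receives:

      # confirm the cross compiler named in the mkspec is the one you expect
      i686-w64-mingw32-g++ --version
      # re-run the build and capture the full link command line
      qmake -spec /path_to_my_conf/win32-x-g++ my_project.pro
      make clean
      make 2>&1 | tee build.log
      # the final g++ line in build.log shows which -L paths and Qt libraries (e.g. -lQtGui4 -lQtCore4) were passed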

    Read the article

  • User Switching in XFCE 12.04 with LightDM and dumping unneeccesary Gnome libs

    - by user111120
    I'm an elder non-techie Mac-to-Linux convert trying to play the Linux tech game by ear, so please be gentle! :) I am running XFCE Ubuntu 12.04 entirely from an 8-gig flash drive and it's fantastic. I am starting to run into potential space issues (down to 1.0 gig free from 1.9 gigs since it was installed last summer), most likely because of growing Thunderbird mail files, and this prompted my question. I just installed LightDM on my system because I want the ability to switch users in XFCE by following instructions on another blog. They advised using LightDM instead of GDM because LightDM doesn't pull in the GNOME libraries. That's great since I need the space, but my question is: how can I tell whether I already have GNOME libraries installed from other updates and such? And can I minimize having any GNOME libraries? The method for me to switch users entails creating a "fast-user-switch" file in /usr/local/bin; is there any easier way? One last thing so I didn't have to open another needless thread: while experimenting I somehow lost the share folder in one of my accounts. Is there any way to get a share folder back? Thanks for any tips! Jim in NYC
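
    For what it's worth, the "fast-user-switch" script such guides have you create is usually just a one-line wrapper around LightDM's own dm-tool utility, so a minimal sketch (assuming LightDM is the running display manager) could be:

      #!/bin/sh
      # ask the running LightDM instance to show the greeter so another user can log in
      dm-tool switch-to-greeter

    Saved as /usr/local/bin/fast-user-switch and marked executable (chmod +x), it can be wired to a launcher or keyboard shortcut in XFCE.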

    Read the article

  • Clutter for game GUI

    - by tjameson
    I'm pretty new to game development, having only written a simple 3d game for a class project, but I'd like to get started on a bigger project. I'm writing an MMORPG to run in both the browser (WebGL) and natively (OpenGL ES 2). In choosing a GUI toolkit, I'm trying to find a style that would work natively and would be simple to emulate in WebGL. I am considering using D or Go for writing my game, so interfacing with C++ libraries will be difficult, if not impossible. Of course, the language isn't the end goal here, so if using C++ will save considerable time, I'll bite the bullet and use that. In order to reduce the amount of code I'll have to write for the browser, I'm considering using something simple like Clutter for basic abstractions, which I think will be pretty easy to emulate (layered canvases maybe?). Does anyone have experience using Clutter for a 3d game? Note: I haven't used any game development libraries, and I only have limited experience with GUI libraries. I do have HTML+CSS experience, so maybe librocket is a viable solution?

    Read the article

  • Quantify value for management

    - by nivlam
    We have two different legacy systems (Windows services in this case) that do exactly the same thing. Both of these systems have small differences for the different applications they serve. Both systems' core functionality lies within a shared library. Most of the time, the updates occur in the shared library and we simply deploy the updated library to both of these systems. The systems themselves rarely change. Since both of these systems do essentially the same thing, our development team would like to consolidate them into a single service. What can I do to convince management to allocate time for such a task? Some of the points I've noted are easier maintenance and decreased testing/QA time. Unfortunately, this isn't enough. They would like us to provide them with hard numbers on the number of hours this will save in the future and how it will speed up future development. Since most of the work is done in the shared library and the systems themselves never change, it's hard for us to quantify how many hours this will save. What kind of arguments can I make to justify the extra work to consolidate these systems?

    Read the article

  • How to automatically mount a folder and change ownership from root in virtualbox

    - by Fiztban
    It is my first time using VirtualBox and Ubuntu (14.04); I am on a host Windows 7 OS. I am trying to mount a shared folder that has files I need to access both in the VirtualBox guest and on the Windows OS. I have successfully mounted them using the vboxsf driver from the installed Guest Additions. To mount I used the command sudo mount -t vboxsf <dir name in vbox> <directory in linux>; for example I used sudo mount -t vboxsf Test /home/user/Test. I found several ways of mounting the directories automatically upon startup, using for example the /etc/rc.local method (here), where you modify said file, appending the command to it (without sudo), or by using the fstab method (here). I prefer the rc.local method personally. Once mounted it has permissions dr-xr-xr-x; however, the mounted directory is owned by root, and chown user /home/user/Test has no effect. This means I cannot make or change files in it as a normal user. In VirtualBox the directory to be shared is not set as read-only. Is there a way to automatically mount the shared folder and assign ownership to my non-root user?
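
    For illustration, one common way to hand ownership of a vboxsf mount to an ordinary account is to pass uid/gid (and optionally umask) mount options rather than running chown afterwards - a sketch, assuming the user's uid and gid are both 1000 (check with id) and the share name Test from the question:

      # one-off mount owned by uid/gid 1000 instead of root
      sudo mount -t vboxsf -o uid=1000,gid=1000,umask=022 Test /home/user/Test
      # equivalent /etc/fstab line for mounting at boot
      Test  /home/user/Test  vboxsf  uid=1000,gid=1000,umask=022  0  0

    The same -o options can be appended to the mount command in /etc/rc.local if that method is preferred.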

    Read the article

  • Many user stories share the same technical tasks: what to do?

    - by d3prok
    A little introduction to my case: as part of a bigger product, my team has been asked to build a small IDE for a DSL. The user of this product will be able to make function calls in the code, and we are also asked to provide some useful function libraries. The team, together with the PO, put on the wall a certain number of user stories regarding the various libraries for the IDE user. When estimating the first of those stories, the team decided that the function call mechanism would be an engaging but not completely obvious task, so the estimate for that user story rose from a simple 3 to a more dangerous 5. Coming to the problem: the team then moved on to the user stories regarding the other libraries, 10 stories in all, and added those 2 points for the "function call mechanism" to each of those user stories. This immediately raised the total points for the product by 20 points! Everyone on the team knows that any user story could be picked up by the PO for the next iteration at any time, so we shouldn't isolate that part in one user story, but those 20 points feel so awfully unrealistic! I've proposed a solution, but I'm absolutely not satisfied with it: we created a "Design story" and put those annoying 2 points on it. However, when we came to implement and demonstrate it to our customers, we were unable to show them anything really valuable about that story! The underlying problem is whether we should ignore the principle of having isolated user stories (without any dependency between them). What would you do, or even better, what have you done, in situations like this? (A small footnote: following a suggestion, I've moved this question from Stack Overflow.)

    Read the article

  • How could there still not be a mysqldb module for Python 3? [closed]

    - by itsadok
    This SO question is now more than two years old. MySQL is an incredibly popular database engine, Python is an incredibly popular programming language, and Python 3 was officially released two years ago and was available even before that. What's more, the whole mysqldb module is just a layer translating Python's db-api to MySQL's API. It's not that big of a library. I must be missing something here. How come almost* nobody in the entire open source community has spent the (I'm guessing) two weeks it takes to port this lib? Is Python 3 that unpopular? Is the combination of Python and MySQL not as common as I assume? Or maybe it's just a lot harder to port mysqldb than I assume? Anyone know the inside story on this? * Now I see that this guy has done it, which takes some of the wind out of my question, but it still seems too little and too late to make sense. EDIT: OK, I'm aware that the stock answers for these kinds of questions cover this one as well. Patches welcome, scratch your itch, we don't work for you and we don't have the time, etc. I actually took a shot at porting this about a year ago, but it was my first time doing anything with Python C extensions, and I failed. My point in writing this was not a plea for somebody to write it, but genuine curiosity: it seems that some much more complicated libraries have been ported to Python 3 already, and in the poll for which libraries should be ported, mysqldb is not even nominated! That suggests that maybe (2) is the right answer. UPDATE: I found that there are several new libraries that provide MySQL support under Python 3; I just wasn't googling hard enough. That explains everything.
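
    One example of such a library (offered purely as an illustration, not necessarily one of the ones the author found) is PyMySQL, a pure-Python DB-API driver that works under Python 3 and installs with a single command:

      pip install PyMySQL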

    Read the article

  • Composer support

    - by Tomas Mysik
    Hi all, today we would like to introduce our Composer support, which will be present in NetBeans 7.3. If any of you do not know Composer yet, please be informed that: "Composer is a tool for dependency management in PHP. It allows you to declare the dependent libraries your project needs and it will install them in your project for you." So, what support do we have in NetBeans? The first step, as usual, is to open the Composer IDE Options panel. Once it is configured properly, it is time to create a composer.json file where we can define the dependencies (libraries) of our PHP project. The generated file is opened so we can review it and add any libraries. Now, you are ready to install, update or validate the library dependencies of your PHP project. We hope that you enjoy this initial support and that we will be able to improve it in the next version of NetBeans. That's all for today. As always, please test it and report all the issues or enhancements you find in NetBeans Bugzilla (component php, subcomponent Composer).
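
    For readers who want to try the same workflow outside the IDE, the NetBeans actions above map onto Composer's own command line (a sketch - composer.phar is assumed to be in the project directory or on the PATH):

      php composer.phar install     # install the dependencies declared in composer.json
      php composer.phar update      # update them to the newest versions the constraints allow
      php composer.phar validate    # check that composer.json is well formed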

    Read the article

  • Oracle 12c RAC Flex Cluster: Hub Nodes and Leaf Nodes

    - by Liu Maclean(???)
    Oracle 12c introduces the Flex Cluster and Flex ASM architecture, and with it two new node roles in a RAC cluster: Hub Nodes and Leaf Nodes. Hub Node definition: "A node in an Oracle Flex Cluster that is tightly connected with other servers and has direct access to a shared disk." Leaf Node definition: "Servers that are loosely coupled with Hub Nodes, which may not have direct access to the shared storage." In other words, a Leaf Node does not need direct access to the shared storage; it does not have to see the shared disks at all. A Hub Node behaves like a traditional pre-12c cluster node, while the Leaf Node is the new role: a Leaf Node joins the cluster through a Hub Node, is attached to that Hub Node, and does not itself access the OCR or the voting disks; only Hub Nodes run against the shared OCR and voting disks as part of the full clusterware stack. With Leaf Nodes, a Flex Cluster takes on a hub-and-spoke topology, which greatly reduces the number of interconnect paths the cluster needs. In a conventional all-hub cluster every node talks to every other node, so a 12-node cluster needs n*(n-1)/2 = 12*11/2 = 66 interconnect paths. Scaled up to 1000 nodes, a Flex Cluster laid out as 40 Hub Nodes with 24 Leaf Nodes attached to each Hub Node needs only about 1740 connections (40*39/2 = 780 hub-to-hub links plus 40*24 = 960 hub-to-leaf links), whereas a fully meshed 1000-node cluster would need 1000*999/2 = 499,500. If a Hub Node fails, its services can be relocated to another Hub Node and the Leaf Nodes that were attached to it can be reattached to the surviving Hub Nodes; if a Leaf Node fails, its workload can likewise be relocated to other Leaf Nodes.

    Read the article

  • Three common causes of database performance problems

    - by Feng
    This post summarizes three common scenarios that cause an Oracle database to appear hung or slow, together with ways to mitigate each of them.
    1. OS swapping/paging causing concurrency problems. Oracle protects its shared memory structures with latches and mutexes, and a process is expected to hold a latch/mutex only very briefly. When the OS is swapping or paging heavily, the process holding a latch/mutex can itself be paged out; other sessions then pile up waiting on that latch/mutex and the whole database appears hung or slow. Mitigation: lock the SGA so that the SGA (which contains the latches/mutexes) stays pinned in physical memory and cannot be swapped out, and consider backing the SGA with large pages (hugepages).
    2. SGA resize operations causing performance problems. With AMM/ASMM automatic memory management, components such as the shared pool and the buffer cache are grown and shrunk dynamically, for example to ward off ORA-4031. While a resize is in progress the related latches/mutexes are held, so frequent resize operations - shared pool resizes in particular - translate directly into latch/mutex contention, and there are also known bugs in this area whose fixes have their own impact. Mitigation: 1) set explicit minimum sizes for the buffer cache and the shared pool; 2) lengthen the resize check interval to about 16 minutes with alter system set "_memory_broker_stat_interval"=999; or disable AMM/ASMM and size the pools manually, accepting that this brings back the risk of ORA-4031.
    3. DDL run at peak times causing performance problems. Running DDL against a hot object (a grant, for example) invalidates every cached SQL cursor that depends on that object, so the next execution of each of those statements has to be hard parsed. The result is a "hard parse storm": heavy latch/mutex contention plus library cache lock/row cache lock waits, and the database appears slow or hung. Mitigation: avoid running DDL against busy objects during peak hours.
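
    To make the first two mitigations concrete, a sketch only - the sizes below are arbitrary placeholders, not recommendations, and lock_sga has restrictions depending on how memory management is configured:

      -- pin the SGA in physical memory (static parameter, takes effect after a restart)
      alter system set lock_sga=true scope=spfile;
      -- give the automatically managed components a floor so resizes stay rare
      alter system set shared_pool_size=2G scope=both;
      alter system set db_cache_size=8G scope=both;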

    Read the article

  • Flat File Connection Manager in SSIS package shows "Valid File Name Must be Selected"

    - by Traples
    Flat File Location:    Samba Share | Windows Share
    (SSIS host)
      XP 32bit             Works       | Works
      2003 Serv 32bit      Works       | Works
      Vista 64bit          ERROR       | Works
      Win 7 64bit          ERROR       | Works
      2008 Serv 64bit      ERROR       | Works
    I created an SSIS package in VS 2008 that parses a flat file from a shared folder and puts the records into a SQL Server db. I recently installed Windows 7 and VS 2008 on a new workstation. When I import the package from TFS and open it, I get the error: Validation error. Parse and Import Catalog Flat File: MySSISPackage: The file name "\\shared\flatfile.txt" specified in the connection was not valid. When I open the Flat File Connection Manager Editor, there is an error stating: A valid file name must be selected. I can browse to and select the file from inside the editor, but I cannot change any properties, or move away from the General tab, because of this error. If I go back to my laptop (Windows XP), where the package was first created, there is no error. Both workstations are on the same domain, and I'm logging in using the same credentials. Any ideas as to why I would receive this error from one workstation and not another?
    UPDATE: If I take the .dtsx package from the running workstation and load it into SSIS on the server, I get the following errors when it tries to run:
      Error: The file name "\\shared\flatfile.txt" specified in the connection was not valid.
      Error: Connection "MySSISPackage" failed validation.
      Error: The file name property is not valid. The file name is a device or contains invalid characters.
    UPDATE 2:
      a) The Shared folder I'm trying to pull the flat file from is a Samba share on a Unix box.
      b) If I access the file using SSIS on any 64-bit platform (Windows 7 64-bit, Vista 64-bit, Windows Server 2008) I get the error "A valid file name must be selected."
      c) Accessing the file using SSIS from 32-bit environments (Windows XP 32-bit, Windows Server 2003 32-bit) there is no problem.
      d) If I move the file to a shared folder on a Windows server, 64-bit SSIS recognizes the file.

    Read the article

  • C++ vs Matlab vs Python as a main language for Computer Vision Postgraduate

    - by Hough
    Hi all, Firstly, sorry for a somewhat long question but I think that many people are in the same situation as me and hopefully they can also gain some benefit from this. I'll be starting my PhD very soon which involve the fields of computer vision, pattern recognition and machine learning. Currently, I'm using opencv (2.1) C++ interface and I especially like its powerful Mat class and the overloaded operations available for matrix and image seamless operations and transformations. I've also tried (and implemented many small vision projects) using opencv python interface (new bindings; opencv 2.1) and I really enjoy python's ability to integrate opencv, numpy, scipy and matplotlib. But recently, I went back to opencv C++ interface because I felt that the official python new bindings were not stable enough and no overloaded operations are available for matrices and images, not to mention the lack of machine learning modules and slow speeds in certain operations. I've also used Matlab extensively in the past and although I've used mex files and other means to speed up the program, I just felt that Matlab's performance was inadequate for real-time vision tasks, be it for fast prototyping or not. When the project becomes larger and larger, many tasks have to be re-written in C and compiled into Mex files increasingly and Matlab becomes nothing more than a glue language. Here comes the sub-questions: For postgrad studies in these fields (machine learning, vision, pattern recognition), what is your main or ideal programming language for rapid prototyping of ideas and testing algorithms contained in papers? For postgrad studies, can you list down the pros and cons of using the following languages? C++ (with opencv + gsl + svmlib + other libraries) vs Matlab (with all its toolboxes) vs python (with the imcomplete opencv bindings + numpy + scipy + matplotlib). Are there computer vision PhD/postgrad students here who are using only C++ (with all its availabe libraries including opencv) without even needing to resort to Matlab or python? In other words, given the current existing computer vision or machine learning libraries, is C++ alone sufficient for fast prototyping of ideas? If you're currently using Java or C# for your postgrad work, can you list down the reasons why they should be used and how they compare to other languages in terms of available libraries? What is the de facto vision/machine learning programming language and its associated libraries used in your university research group? Thanks in advance.

    Read the article

  • C++ vs Matlab vs Python as a main language for Computer Vision Research

    - by Hough
    Hi all, Firstly, sorry for a somewhat long question but I think that many people are in the same situation as me and hopefully they can also gain some benefit from this. I'll be starting my PhD very soon which involves the fields of computer vision, pattern recognition and machine learning. Currently, I'm using opencv (2.1) C++ interface and I especially like its powerful Mat class and the overloaded operations available for matrix and image operations and seamless transformations. I've also tried (and implemented many small vision projects) using opencv python interface (new bindings; opencv 2.1) and I really enjoy python's ability to integrate opencv, numpy, scipy and matplotlib. But recently, I went back to opencv C++ interface because I felt that the official python new bindings were not stable enough and no overloaded operations are available for matrices and images, not to mention the lack of machine learning modules and slow speeds in certain operations. I've also used Matlab extensively in the past and although I've used mex files and other means to speed up the program, I just felt that Matlab's performance was inadequate for real-time vision tasks, be it for fast prototyping or not. When the project becomes larger and larger, many tasks have to be re-written in C and compiled into Mex files increasingly and Matlab becomes nothing more than a glue language. Here comes the sub-questions: For carrying out research in these fields (machine learning, vision, pattern recognition), what is your main or ideal programming language for rapid prototyping of ideas and testing algorithms contained in papers? For computer vision research work, can you list down the pros and cons of using the following languages? C++ (with opencv + gsl + svmlib + other libraries) vs Matlab (with all its toolboxes) vs python (with the imcomplete opencv bindings + numpy + scipy + matplotlib). Are there computer vision PhD/postgrad students here who are using only C++ (with all its availabe libraries including opencv) without even needing to resort to Matlab or python? In other words, given the current existing computer vision or machine learning libraries, is C++ alone sufficient for fast prototyping of ideas? If you're currently using Java or C# for your research, can you list down the reasons why they should be used and how they compare to other languages in terms of available libraries? What is the de facto vision/machine learning programming language and its associated libraries used in your research group? Thanks in advance. Edit: As suggested, I've opened the question to both academic and non-academic computer vision/machine learning/pattern recognition researchers and groups.

    Read the article

  • What Amazon S3 .NET Library is most useful and efficient?

    - by Geo
    There are two main open source .NET Amazon S3 libraries: Three Sharp and LitS3. I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries so they can give an objective point of view? Below are some sample calls using LitS3. On the demo controller:
      private S3Service s3 = new S3Service()
      {
          AccessKeyID = "Thekey",
          SecretAccessKey = "testing"
      };
      public ActionResult Index()
      {
          ViewData["Message"] = "Welcome to ASP.NET MVC!";
          return View("Index", s3.GetAllBuckets());
      }
    On the demo view:
      <% foreach (var item in Model) { %>
          <p> <%= Html.Encode(item.Name) %> </p>
      <% } %>
    EDIT 1: Since I have to keep moving and there is no clear indication of which library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change libraries if I need to in the future. Below is a section of the S3Repository that I have created, which will let me change libraries in case I need to:
      using LitS3;
      namespace S3Helper.Models
      {
          public class S3Repository : IS3Repository
          {
              private S3Service _repository;
              #region IS3Repository Members
              public IQueryable<Bucket> FindAllBuckets()
              {
                  return _repository.GetAllBuckets().AsQueryable();
              }
              public IQueryable<ListEntry> FindAllObjects(string BucketName)
              {
                  return _repository.ListAllObjects(BucketName).AsQueryable();
              }
              #endregion
    If you have any information about this question please let me know in a comment, and I will get back and edit the question. EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design. I know this is probably a waste of time, but I really want a good long-run solution. Below you will see two samples of the same action with the two libraries; maybe this will motivate some of you to let me know your thoughts. WITH THREE SHARP LIBRARY:
      public IQueryable<T> FindAllBuckets<T>()
      {
          List<string> list = new List<string>();
          using (BucketListRequest request = new BucketListRequest(null))
          using (BucketListResponse response = service.BucketList(request))
          {
              XmlDocument bucketXml = response.StreamResponseToXmlDocument();
              XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']");
              foreach (XmlNode bucket in buckets)
              {
                  list.Add(bucket.InnerXml);
              }
          }
          return list.Cast<T>().AsQueryable();
      }
    WITH LITS3 LIBRARY:
      public IQueryable<T> FindAllBuckets<T>()
      {
          return _repository.GetAllBuckets()
              .Cast<T>()
              .AsQueryable();
      }

    Read the article

  • Weirdness debugging Visual Studio C++ 2008

    - by Jeff Dege
    I have a legacy C++ app, that in its most incarnation we've been building with makefiles and VS2003's command-line tool. I'm trying to get it to build using VS2008 and MsBuild. The build is working OK, but I'm getting errors where I'd never seen errors, before, and stepping through in VS2008's debugger only confuses me. The app links a number of static libraries, which fall into two categories: those that are part of the same application suite, and those that are shared between a number of application suites. Originally, I had a .csproj file for each static library, and two .sln files, one for the application suite (including the suite-specific libraries) and one for the non-suite-specific shared libraries. The shared libraries were included in the link, their projects were not included in the application suite .sln. The application instantiates an object from a class that is defined in one of the shared libraries. The class has a member object of a class that wraps a linked list. The constructor of the linked list class sets its "head" pointer to null. When I run the app, and try to add an element to the linked list, I get an error - the head pointer contains the value 0xCCCCCCCC. So I step through with the debugger. And see weirdness. When the current line in the debugger is in a source file belonging to the static library, the head pointer contains 0x00000000. When I step into the constructor, I can see the pointer being set to that value, and when I'm stepped into any other method of the class, I can see that the head pointer still contains 0x00000000. But when I step out into methods that are defined in the application suite .sln, it contains 0xCCCCCCCC. It's not like it's being overwritten. It changes back and forth depending upon which source file I am currently debugging. So I included the shared library's project in the application suite .sln, and now I see the head pointer containing 0xCCCCCCCC all the time. It looks like the constructor of the linked list class is not being called. So now, I'm entirely confused. Anyone have any ideas?

    Read the article

  • Loosely coupled implicit conversion

    - by ltjax
    Implicit conversion can be really useful when types are semantically equivalent. For example, imagine two libraries that implement a type identically, but in different namespaces. Or just a type that is mostly identical, except for some semantic-sugar here and there. Now you cannot pass one type into a function (in one of those libraries) that was designed to use the other, unless that function is a template. If it's not, you have to somehow convert one type into the other. This should be trivial (or otherwise the types are not so identical after-all!) but calling the conversion explicitly bloats your code with mostly meaningless function-calls. While such conversion functions might actually copy some values around, they essentially do nothing from a high-level "programmers" point-of-view. Implicit conversion constructors and operators could obviously help, but they introduce coupling, so that one of those types has to know about the other. Usually, at least when dealing with libraries, that is not the case, because the presence of one of those types makes the other one redundant. Also, you cannot always change libraries. Now I see two options on how to make implicit conversion work in user-code: The first would be to provide a proxy-type, that implements conversion-operators and conversion-constructors (and assignments) for all the involved types, and always use that. The second requires a minimal change to the libraries, but allows great flexibility: Add a conversion-constructor for each involved type that can be externally optionally enabled. For example, for a type A add a constructor: template <class T> A( const T& src, typename boost::enable_if<conversion_enabled<T,A>>::type* ignore=0 ) { *this = convert(src); } and a template template <class X, class Y> struct conversion_enabled : public boost::mpl::false_ {}; that disables the implicit conversion by default. Then to enable conversion between two types, specialize the template: template <> struct conversion_enabled<OtherA, A> : public boost::mpl::true_ {}; and implement a convert function that can be found through ADL. I would personally prefer to use the second variant, unless there are strong arguments against it. Now to the actual question(s): What's the preferred way to associate types for implicit conversion? Are my suggestions good ideas? Are there any downsides to either approach? Is allowing conversions like that dangerous? Should library implementers in-general supply the second method when it's likely that their type will be replicated in software they are most likely beeing used with (I'm thinking of 3d-rendering middle-ware here, where most of those packages implement a 3D vector).

    Read the article

  • Cluster Node Recovery Using Second Node in Solaris Cluster

    - by Onur Bingul
    Assumptions:Node 0a is the cluster node that has crashed and could not boot anymore.Node 0b is the node in cluster and in production with services active.Both nodes have their boot disk mirrored via SDS/SVM.We have many options to clone the boot disk from node 0b:- make a copy via network using the ufsdump command and pipe to ufsrestore - make a copy inserting the disk locally on node 0b and creating the third mirror with SDS- make a copy inserting the disk locally on node 0b using dd commandIn this procedure we are going to use dd command (from my experience this is the best option).Bare in mind that in the examples provided we work on Sun Fire V240 systems which have SCSI internal disks. In the case of Fibre Channel (FC) internal disks you must pay attention to the unique identifier, or World Wide Name (WWN), associated with each FC disk (in this case take a look at infodoc #40133 in order to recreate the device tree correctly).Procedure:On node 0b the boot disk is c1t0d0 (c1t1d0 mirror) and this is the VTOC:* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory      0      2    00          0   2106432   2106431      1      3    01    2106432  74630784  76737215      2      5    00          0 143349312 143349311      4      7    00   76737216  50340672 127077887      5      4    00  127077888  14683968 141761855      6      0    00  141761856   1058304 142820159      7      0    00  142820160    529152 143349311We will insert the new disk on node 0b and it will be seen as c1t2d0.1) On node 0b we make a copy via dd from disk c1t0d0s2 to disk c1t2d0s2# dd if=/dev/rdsk/c1t0d0s2 of=/dev/rdsk/c1t2d0s2 bs=8192kA copy of a 72GB disk will take approximately about 45 minutes.Note: as an alternative to make identical copy of root over network follow Document ID: 47498Title: Sun[TM] Cluster 3.0: How to Rebuild a node with Veritas Volume Manager2) Perform an fsck on disk c1t2d0 data slices:   1.  fsck -o f /dev/rdsk/c1t2d0s0 (root)   2.  fsck -o f /dev/rdsk/c1t2d0s4 (/var)   3.  fsck -o f /dev/rdsk/c1t2d0s5 (/usr)   4.  
fsck -o f /dev/rdsk/c1t2d0s6 (/globaldevices)3) Mount the root file system in order to edit following files for changing the node name:# mount /dev/dsk/c1t2d0s0 /mntChange the hostname from 0b to 0a:# cd /mnt/etc# vi hosts # vi hostname.bge0 # vi hostname.bge2 # vi nodename 4) Change the /mnt/etc/vfstab from the actual:/dev/md/dsk/d201        -       -       swap    -       no      -/dev/md/dsk/d200        /dev/md/rdsk/d200       /       ufs     1       no      -/dev/md/dsk/d205        /dev/md/rdsk/d205       /usr    ufs     1       no      logging/dev/md/dsk/d204        /dev/md/rdsk/d204       /var    ufs     1       no      logging#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/md/dsk/d206        /dev/md/rdsk/d206       /global/.devices/node@2 ufs     2       noglobalto this (unencapsulate disk from SDS/SVM):/dev/dsk/c1t0d0s1        -       -       swap    -       no      -/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0       /       ufs     1       no      -/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5       /usr    ufs     1       no      logging/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4       /var    ufs     1       no      logging#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6       /global/.devices/node@1 ufs     2       no globalIt is important that global device partition (slice 6) in the new vfstab will point to the physical partition of the disk (in our case slice 6).Be careful with the name you use for the new disk. In this case we define it as c1t0d0 because we will insert it as target 0 in node 0a.But this could be different based on the configuration you are working on.5) Remove following entry from /mnt/etc/system (part of unencapsulation procedure):rootdev:/pseudo/md@0:0,200,blk6) Correct the link shared -> ../../global/.devices/node@2/dev/md/shared in order to point to the nodeid of node 0a (in our case nodeid 1):# cd /mnt/dev/mdhow it is now.... node 0b has nodeid 2lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared ->../../global/.devices/node@2/dev/md/shared# rm shared# ln -s ../../global/.devices/node@1/dev/md/shared sharedhow is going to be... with nodeid 1 for node 0alrwxrwxrwx   1 root     root          42 Mar 10  2005 shared ->../../global/.devices/node@1/dev/md/shared7) Change nodeid (in our case from 2 to 1):# cd /mnt/etc/cluster# vi nodeid8) Change the file /mnt/etc/path_to_inst in order to reflect the correct nodeid for node 0a:# cd /mnt/etc# vi path_to_instChange entries from node@2 to node@1 with the vi command ":%s/node@2/node@1/g"9) Write the bootblock to the disk... just in case:# /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0Now the disk is ready to be inserted in node 0a in order to bootup the node.10) Bootup node 0a with command "boot -sx"... 
this is becasue we need to make some changes in ccr files in order to recreate did environment.11) Modify cluster ccr:# cd /etc/cluster/ccr# rm did_instances# rm did_instances.bak# vi directory - remove the did_instances line.# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/directory # grep ccr_gennum /etc/cluster/ccr/directory ccr_gennum -1 # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure # grep ccr_gennum /etc/cluster/ccr/infrastructure ccr_gennum -112) Bring the node 0a down again to the ok prompt and then issue the command "boot -r"Now the node will join the cluster and from scstat and metaset command you can verify functionality. Next step is to encapsulate the boot disk in SDS/SVM and create the mirrors.In our case node 0b has metadevice name starting from d200. For this reason on node 0a we need to create metadevice starting from d100. This is just an example, you can have different names.The important thing to remember is that metadevice boot disks have different names on each node.13) Remove metadevice pointing to the boot and mirror disks (inherit from node 0b):# metaclear -r -f d200# metaclear -r -f d201# metaclear -r -f d204# metaclear -r -f d205# metaclear -r -f d206verify from metastat that no metadevices are set for boot and mirror disks.14) Encapsulate the boot disk:# metainit -f d110 1 1 c1t0d0s0# metainit d100 -m d110# metaroot d10015) Reboot node 0a.16) Create all the metadevice for slices remaining on boot disk# metainit -f d111 1 1 c1t0d0s1# metainit d101 -m d111# metainit -f d114 1 1 c1t0d0s4# metainit d104 -m d114# metainit -f d115 1 1 c1t0d0s5# metainit d105 -m d115# metainit -f d116 1 1 c1t0d0s6# metainit d106 -m d11617) Edit the vfstab in order to specifiy metadevices created:old:/dev/dsk/c1t0d0s1        -       -       swap    -       no      -/dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5       /usr    ufs     1       no      logging/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4       /var    ufs     1       no      logging#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6       /global/.devices/node@1 ufs      2       no  globalnew:/dev/md/dsk/d101        -       -       swap    -       no      -/dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -/dev/md/dsk/d105        /dev/md/rdsk/d105       /usr    ufs     1       no      logging/dev/md/dsk/d104        /dev/md/rdsk/d104       /var    ufs     1       no      logging#/dev/md/dsk/106       /dev/md/rdsk/d106       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/md/dsk/d106        /dev/md/rdsk/d106       /global/.devices/node@1 ufs     2       noglobal18) Reboot node 0a in order to check new SDS/SVM boot configuration.19) Label the mirror disk c1t1d0 with the VTOC of boot disk c1t0d0:# prtvtoc /dev/dsk/c1t0d0s2 > /var/tmp/VTOC_c1t0d0 # fmthard -s /var/tmp/VTOC_c1t0d0 /dev/rdsk/c1t1d0s220) Put DB replica on slice 7 of disk c1t1d0:# metadb -a -c 3 /dev/dsk/c1t1d0s721) Create metadevice for mirror disk c1t1d0 and attach the new mirror side:# metainit d120 1 1 c1t1d0s0# metattach d100 d120# metainit d121 1 1 c1t1d0s1# metattach d101 d121# metainit d124 1 1 c1t1d0s4# metattach d104 d124# metainit d125 1 1 c1t1d0s5# metattach d105 d125# metainit d126 1 1 c1t1d0s6# metattach d106 d126

    Read the article

  • Set the Windows Explorer Startup Folder in Windows 7

    - by Mysticgeek
    When you open Windows Explorer from the Taskbar in Windows 7, it defaults to the Libraries view. Today we take a look at changing the target path to allow you to customize which location opens by default. When you click on the Windows Explorer icon on the Windows 7 Taskbar, it’s set to open to the Libraries view by default. You might not use the Libraries feature, or want to set it to a different location that is more commonly used. Set Windows Explorer Startup Location To change the default startup location for the Windows Explorer Taskbar icon, if you have no Explorer screens open, hold down the Shift key, right-click the Explorer icon, and select Properties. Or if you have Windows open, right-click on the Explorer icon to bring up the Jumplist, then right-click on Windows Explorer and select Properties. Windows Explorer Properties opens up and you’ll want to click on the Shortcut tab so we can change the Target. A common place you might want it to default to is your Documents folder. So to do that we need to enter the following into the Target field. %SystemRoot%\explorer.exe /n,::{450D8FBA-AD25-11D0-98A8-0800361B1103} Now when you open Windows Explorer from the Taskbar it defaults to My Documents… If you use the Start Menu to access Windows Explorer, open the Start Menu and go to All Programs \ Accessories and right-click on Windows Explorer then select Properties. Change the target path to where you want it to go. In this example we want Windows Explorer to open up to My Computer so we entered the following in the Target field. %SystemRoot%\explorer.exe /E,::{20D04FE0-3AEA-1069-A2D8-08002B30309D} When you click on the Explorer icon in the Start Menu it defaults to My Computer… You can set it to open to various locations. For instance if you wanted to mess with someone at work, you could enter the following and Explorer will always open to the Recycle Bin. %SystemRoot%\explorer.exe /E,::{645FF040-5081-101B-9F08-00AA002F954E} Conclusion Here we showed you a couple of commonly used locations that you might want Windows Explorer to open to instead of Libraries. You can set it to other locations if you know the GUID (Globally Unique Identifier) for the object or location you want it to default to. For more on using GUIDs check out The Geek’s article on how to enable the secret “How-To Geek” mode in Windows 7. Actually it’s just a play on the so-called “God Mode” for Windows, but there is some good information, and a list of some locations you might want to have Windows Explorer open to.

    Read the article

  • Stream Media from Windows 7 to XP with VLC Media Player

    - by DigitalGeekery
    So you’ve got yourself a new computer with Windows 7 and you’re itching to take advantage of its ability to stream media across your home network. But, the rest of the family is still on Windows XP and you’re not quite ready to shell out the cash for the upgrades. Well, today we’ll show you how to easily stream media from Windows 7 to Windows XP with VLC Media Player. On the host computer running Windows 7, you’ll need to have an account set up with both a username and password. A blank password will not work. The media files will need to be located in a shared folder. Note: If the media files are located within the Public directory, or within the profile of the user account you use to log into the Windows 7 computer, they will be shared automatically. Sharing your Media Folders On your Windows 7 computer, right-click on the folder containing the files you’d like to stream and choose Properties. On the Sharing Tab of the folder properties, click the Share button. Click OK. Type or select from the drop down the user account you’ll use to log in, or select “Everyone” to share with all users. Then click Add. You may change the permission level, but only Read permission is required to play the media. Repeat this process for any additional folders you wish to share. The Windows XP Client Computer Now that we’ve shared our media folders from the Windows 7 computer, we’re ready to play our files on the Windows XP computer. Download and install the VLC Media Player. (See link below) Then open VLC. Click on Media from the menu and select Open File… Browse your network for the shared folder that contains your media. You’ll be prompted to log in to the host computer. Provide the credentials for a user on the Windows 7 computer. Click OK. Select your media file and click Open. Your media playback will begin momentarily. This is a nice and easy way to stream media across your home network without upgrading multiple computers to Windows 7. Plus, VLC is certainly no slouch as a Media Player. It’ll play virtually any video or audio file you can throw at it. Have you already upgraded all your home PCs to Windows 7? Check out our previous article on streaming media between Windows 7 computers on your home network. Download VLC Media Player
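
    As a side note, the same thing can be done without the GUI once the share is reachable - a sketch, with the install path, host name, share and file name being placeholders for your own setup:

      REM open a file on the Windows 7 share directly from the XP command line
      "C:\Program Files\VideoLAN\VLC\vlc.exe" "\\WIN7-HOST\Media\example.avi"

    Windows handles the share credentials at the OS level, so connect to the share (or map it) first if you have not already logged in to it.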

    Read the article

  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Nameservices Cache Daemon) to cache dns locally so I can stop using bind to do it. I've gotten it started and ntpd seems to attempt to use it. But everything else for hosts seems to ignore it. e.g if I do dig apache.org 3 times none of them will hit the cache. I'm viewing the cache stats using nscd -g to determine whether it's been used. I've also turned the debug log level up to see if I can see it hitting and the queries don't even hit nscd. nsswitch.conf # Begin /etc/nsswitch.conf passwd: files group: files shadow: files publickey: files hosts: cache files dns networks: files protocols: files services: files ethers: files rpc: files netgroup: files # End /etc/nsswitch.confenter code here nscd.conf # # /etc/nscd.conf # # An example Name Service Cache config file. This file is needed by nscd. # # Legal entries are: # # logfile <file> # debug-level <level> # threads <initial #threads to use> # max-threads <maximum #threads to use> # server-user <user to run server as instead of root> # server-user is ignored if nscd is started with -S parameters # stat-user <user who is allowed to request statistics> # reload-count unlimited|<number> # paranoia <yes|no> # restart-interval <time in seconds> # # enable-cache <service> <yes|no> # positive-time-to-live <service> <time in seconds> # negative-time-to-live <service> <time in seconds> # suggested-size <service> <prime number> # check-files <service> <yes|no> # persistent <service> <yes|no> # shared <service> <yes|no> # max-db-size <service> <number bytes> # auto-propagate <service> <yes|no> # # Currently supported cache names (services): passwd, group, hosts, services # logfile /var/log/nscd.log threads 4 max-threads 32 server-user nobody # stat-user somebody debug-level 9 # reload-count 5 paranoia no # restart-interval 3600 enable-cache passwd yes positive-time-to-live passwd 600 negative-time-to-live passwd 20 suggested-size passwd 211 check-files passwd yes persistent passwd yes shared passwd yes max-db-size passwd 33554432 auto-propagate passwd yes enable-cache group yes positive-time-to-live group 3600 negative-time-to-live group 60 suggested-size group 211 check-files group yes persistent group yes shared group yes max-db-size group 33554432 auto-propagate group yes enable-cache hosts yes positive-time-to-live hosts 3600 negative-time-to-live hosts 20 suggested-size hosts 211 check-files hosts yes persistent hosts yes shared hosts yes max-db-size hosts 33554432 enable-cache services yes positive-time-to-live services 28800 negative-time-to-live services 20 suggested-size services 211 check-files services yes persistent services yes shared services yes max-db-size services 33554432 resolv.conf # Generated by dhcpcd from eth0 nameserver 127.0.0.1 domain westell.com nameserver 192.168.1.1 nameserver 208.67.222.222 nameserver 208.67.220.220 as kind of a side note I'm using archlinux.
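
    One detail worth noting when testing (an observation about the tools rather than about this particular setup): dig talks to the DNS servers in resolv.conf directly and never goes through nsswitch/nscd, so it will never touch the hosts cache. To exercise the cache, the lookup has to go through the libc resolver, for example:

      # this resolves via nsswitch (and therefore nscd) rather than querying DNS directly
      getent hosts apache.org
      # then check the hosts cache counters again
      nscd -g | grep -A 3 'hosts cache'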

    Read the article

  • Share Files and Folders and Internet between Guest OS and the Host in Hyper-V

    - by Manesh Karunakaran
    Those who are familiar with the Virtual PC, VMware and VirtualBox environments will be quite irritated to find out that there is no direct way to share files from the host machine to the virtualized guest environment. This is a good thing from a CIO perspective because there’s excellent isolation for the virtualized environments this way, but for developer junkies like us, this is an irritant. Here’s a quick 5-minute how-to on enabling shared folders and Internet access for Hyper-V images, for those who are still struggling with this. Step 1: Add a Virtual Network Adapter to your Guest OS. For this, shut down the guest machine, go to its settings and add a Virtual Network Adapter as shown in the images below. Step 2: Enable Virtual Networking in Hyper-V. Setting this up is very easy. In the Hyper-V Manager, under Actions (right panel), click the Virtual Network Manager. In the Virtual Network Manager, in the Create virtual network panel, select Internal and click the Add button. At this point, if you open Control Panel\Network and Internet\Network Connections you will be able to see the new Network Adapter. Now name it something meaningful other than Network Adapter X. Now you can add this network to each of your virtual machines, but at this point, unless you assign an IP address in each connection, you won't be able to do much. Step 3: Enable Internet Connection Sharing so that guest OSes can also connect to the internet. To enable ICS follow these steps: Click on the network icon in the tray of your host machine and select Network and Sharing Center. From there click Manage network connections. Select the network adapter that you use to access the Internet. Right-click it and select Properties. In the properties dialog select the Sharing tab. On this tab check the box that says "Allow other network users..." and then set the Home networking connection to be the network adapter that was created above (now you see why I said to rename it to something useful). Now your virtual machines that have this network connection will automatically get an IP address and will be able to connect to the Internet (provided your internet connection is working). Because each adapter also gets an automatic address, you can now share files and folders between your host and your virtual machines, which is important since you can't just drag-and-drop files like you can with Virtual PC. Step 4: Create a Shared Folder on the Host Machine and use it in the Guest machine. Right-click on the folder that you want to share and select ‘Share with\Specific People’ and specify who can access the share. Open the Guest OS from Hyper-V, navigate to Start > Run and type in the address of the share (or map a drive to the share). Bingo! The share opens!! :) Now you can share as many files and folders as you want between the host and the guest, and you also have internet access inside the virtual machines. Hope that helps.
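
    As a quick illustration of the last step from the guest side, mapping the share to a drive letter can also be scripted (the host name, share name and account below are placeholders for whatever you configured):

      REM map the host's shared folder to drive Z: from inside the guest
      net use Z: \\HOSTMACHINE\SharedFolder /user:HOSTMACHINE\YourUser /persistent:yes

    Windows will prompt for the password if it is not supplied on the command line.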

    Read the article

  • Ubuntu 12.04 // Likewise Open // Unable to ever authenticate AD users

    - by Rob
    So: Ubuntu 12.04, with the latest Likewise from the BeyondTrust website. It joins the domain fine, gets proper information from lw-get-status, and I can use lw-find-user-by-name to locate users and lw-enum-users to list all users. Attempting to log in with an AD user via SSH generates the following errors in auth.log:

        Nov 28 19:15:45 hostname sshd[2745]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
        Nov 28 19:15:45 hostname sshd[2745]: PAM adding faulty module: pam_winbind.so
        Nov 28 19:15:51 hostname sshd[2745]: error: PAM: Authentication service cannot retrieve authentication info for DOMAIN\\user.name from remote.hostname
        Nov 28 19:16:06 hostname sshd[2745]: Connection closed by 10.1.1.84 [preauth]

    Attempting to log in via LightDM itself generates similar errors in auth.log:

        Nov 28 19:19:29 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
        Nov 28 19:19:29 hostname lightdm: PAM adding faulty module: pam_winbind.so
        Nov 28 19:19:47 hostname lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "DOMAIN\user.name"
        Nov 28 19:19:52 hostname lightdm: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022]
        Nov 28 19:19:54 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
        Nov 28 19:19:54 hostname lightdm: PAM adding faulty module: pam_winbind.so

    Attempting to log in on a console on the system itself generates slightly different errors:

        Nov 28 19:31:09 hostname login[997]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
        Nov 28 19:31:09 hostname login[997]: PAM adding faulty module: pam_winbind.so
        Nov 28 19:31:11 hostname login[997]: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022]
        Nov 28 19:31:14 hostname login[997]: FAILED LOGIN (1) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info
        Nov 28 19:31:31 hostname login[997]: FAILED LOGIN (2) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info

    I am baffled. The errors are accurate as far as they go: the file /lib/security/pam_winbind.so does not exist. If it is required as a dependency, surely it should be part of the package? I have installed and reinstalled, used the downloaded package from the BeyondTrust website, and used the repository; nothing works, and every method of installing the application produces the same errors.

    UPDATE: Hmm, I thought Likewise didn't use native winbind but its own modules. Installing winbind from apt-get uninstalls pbis-open (Likewise), and installing winbind fails if pbis-open is installed first. I uninstalled winbind and reinstalled pbis-open; same issue as above, and pam_winbind.so still does not exist in that location. The installer reports:

        Setting up pbis-open-legacy (7.0.1.918) ...
        Installing Packages was successful
        This computer is joined to DOMAIN.LOCAL
        New libraries and configurations have been installed for PAM and NSS.

    Clearly it thinks everything is installed, but it isn't. It may be a legacy issue from a previous attempt to configure domain integration manually with winbind.

    Does anyone have a working likewise-open installation? Does your /etc/nsswitch.conf include references to winbind, or do /etc/pam.d/common-account or /etc/pam.d/common-password reference pam_winbind.so? I'm unsure whether those entries are leftovers or are set up by Likewise.

    UPDATE 2: A complete reinstall of the OS fixed it, and it then worked seamlessly, like it was meant to. On the fresh install those two PAM files did NOT include entries for pam_winbind.so, so the stale pam_winbind.so references were the underlying problem. Thanks for the assist.
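    For anyone who lands here with the same symptoms, a quick check along the lines below would have saved me the reinstall. On a clean pbis-open install, authentication goes through pam_lsass.so and the 'lsass' NSS module, and nothing should reference winbind at all. This is only an illustrative sketch - the lsass module names are the usual pbis-open ones, not copied from a broken box:

        # Any hits here are leftovers from an earlier manual winbind setup
        # and produce exactly the dlopen(pam_winbind.so) errors shown above:
        grep -R pam_winbind /etc/pam.d/
        grep winbind /etc/nsswitch.conf

        # A working pbis-open box typically shows something like
        #   passwd: compat lsass            (in /etc/nsswitch.conf)
        #   auth sufficient pam_lsass.so    (in /etc/pam.d/common-auth)
        grep lsass /etc/nsswitch.conf /etc/pam.d/common-*

    If the first two greps return anything, commenting those lines out (or rerunning the pbis-open PAM/NSS configuration) is likely a far cheaper fix than reinstalling the OS.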

    Read the article

  • maintaining a growing, diverse codebase with continuous integration

    - by Nate
    I am in need of some help with the philosophy and design of a continuous integration setup. Our current CI setup uses buildbot. When I started designing it, I inherited (well, not strictly, as I was involved in its design a year earlier) a bespoke CI builder that was tailored to run the entire build at once, overnight. After a while we decided this was insufficient and started exploring different CI frameworks, eventually choosing buildbot. One of my goals in transitioning to buildbot (besides getting to enjoy all the whiz-bang extras) was to overcome some of the inadequacies of our bespoke nightly builder.

    Humor me for a moment and let me explain what I have inherited. The codebase for my company is almost 150 unique C++ Windows applications, each of which depends on one or more of a dozen internal libraries (and many on 3rd-party libraries as well). Some of these libraries are interdependent, and applications that depend on them (while they have nothing to do with each other) have to be built against the same build of that library. Half of these applications and libraries are considered "legacy" and unportable, and must be built with several distinct configurations of the IBM compiler (for which I have written unique subclasses of Compile); the other half are built with Visual Studio. The code for each compiler is stored in two separate Visual SourceSafe repositories (which I am simply handling with a bunch of ShellCommands, as there is no support for VSS).

    Our original nightly builder simply took down the source for everything and built it in a certain order. There was no way to build only a single application, pick a revision, or group things. It would launch virtual machines to build a number of the applications. It wasn't very robust, it wasn't distributable, and it wasn't terribly extensible. I wanted to overcome all of these limitations in buildbot.

    The way I did this originally was to create entries for each of the applications we wanted to build (all 150-ish of them), then create triggered schedulers that could build various applications as groups, and then subsume those groups under an overall nightly build scheduler. These could run on dedicated slaves (no more virtual machine chicanery), and if I wanted I could simply add new slaves. Now, if we want to do a full build out of schedule, it's one click, but we can also build just one application should we so desire.

    There are four weaknesses to this approach, however. The first is our source tree's complex web of dependencies: to simplify config maintenance, all builders are generated from a large dictionary, and the dependencies are retrieved and built in a not-terribly-robust fashion (namely, keying off certain entries in my build-target dictionary). The second is that each build has between 15 and 21 build steps, which is hard to browse in the web interface, and since there are around 150 columns the page takes forever to load (anywhere from 30 seconds to several minutes). Third, we no longer have autodiscovery of build targets (although, as much as one of my coworkers harps on me about this, I don't see what it got us in the first place). Finally, the aforementioned coworker likes to constantly bring up the fact that we can no longer perform a full build on our local machine (though I never saw what that got us either, considering that it took three times as long as the distributed build; I think he is just paranoically phobic of ever breaking the build).

    Now, moving to new development, we are starting to use g++ and Subversion (not porting the old repository, mind you - just for the new stuff). We are also starting to do more unit testing ("more" might give the wrong picture... it's more like any) and integration testing (using Python). I'm having a hard time figuring out how to fit these into my existing configuration.

    So, where have I gone wrong philosophically here? How can I best proceed (with buildbot - it's the only piece of the puzzle I have license to work on) so that my configuration is actually maintainable? How do I address some of my design's weaknesses? What really works in terms of CI strategies for large, (possibly over-)complex codebases? A sketch of my current master.cfg structure follows, for concreteness.
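    The fragment below is a minimal sketch of the shape the configuration takes: builders generated from one dependency dictionary, a Triggerable scheduler per group, and a Nightly that fires the groups in order. It assumes buildbot 0.8-era APIs, and every target, slave, and command name is illustrative rather than copied from the real configuration (which also needs slaves, ports, and status targets that are omitted here):

        # Illustrative buildbot 0.8-style master.cfg fragment (all names are made up).
        from buildbot.process.factory import BuildFactory
        from buildbot.steps.shell import ShellCommand
        from buildbot.steps.trigger import Trigger
        from buildbot.config import BuilderConfig
        from buildbot.schedulers.triggerable import Triggerable
        from buildbot.schedulers.timed import Nightly

        # One place that knows each target's dependencies, compiler, and group.
        TARGETS = {
            'libcore': dict(deps=[],          compiler='ibm',  group='libs'),
            'libgui':  dict(deps=['libcore'], compiler='msvc', group='libs'),
            'app_foo': dict(deps=['libgui'],  compiler='msvc', group='apps'),
        }

        c = BuildmasterConfig = {}
        c['builders'], c['schedulers'] = [], []

        def make_factory(name, spec):
            f = BuildFactory()
            # Fetch sources; VSS has no native support, so a plain shell step.
            f.addStep(ShellCommand(name='checkout',
                                   command=['get_source.bat', name]))
            # Build dependencies first, then the target itself.
            for dep in spec['deps']:
                f.addStep(ShellCommand(name='build-' + dep,
                                       command=['build.bat', dep, spec['compiler']]))
            f.addStep(ShellCommand(name='build-' + name,
                                   command=['build.bat', name, spec['compiler']]))
            return f

        groups = {}
        for name, spec in TARGETS.items():
            c['builders'].append(BuilderConfig(name=name,
                                               slavenames=['slave-' + spec['compiler']],
                                               factory=make_factory(name, spec)))
            groups.setdefault(spec['group'], []).append(name)

        # One Triggerable per group, so a group can be rebuilt with one click.
        for group, builder_names in groups.items():
            c['schedulers'].append(Triggerable(name='trigger-' + group,
                                               builderNames=builder_names))

        # The nightly build is a tiny builder that fires the group triggers in order.
        nightly_factory = BuildFactory()
        for group in ('libs', 'apps'):  # libraries first, then the apps that use them
            nightly_factory.addStep(Trigger(schedulerNames=['trigger-' + group],
                                            waitForFinish=True))
        c['builders'].append(BuilderConfig(name='full-build',
                                           slavenames=['slave-msvc'],
                                           factory=nightly_factory))
        c['schedulers'].append(Nightly(name='nightly', branch=None,
                                       builderNames=['full-build'],
                                       hour=3, minute=0))

    The part I am least happy with is the dependency handling: everything keys off the TARGETS dictionary, and the new g++/Subversion/pytest work does not fit that shape cleanly.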

    Read the article
