Search Results

Search found 5286 results on 212 pages for 'the third'.


  • OSGI Declarative Services (DS): What is a good way of using service component instances

    - by Christoph
    I am just getting started with OSGi and Declarative Services (DS) using Equinox and Eclipse PDE. I have two bundles, A and B. Bundle A exposes a component which is consumed by Bundle B, and both bundles also expose their components as services in the OSGi service registry. Everything works fine so far: Equinox wires the components together, which means Bundle A and Bundle B are instantiated by Equinox (by calling the default constructor) and the wiring then happens through the bind / unbind methods.

    Now, as Equinox is creating the instances of those components / services, I would like to know: what is the best way of getting hold of such an instance? So assume there is a third class which is NOT instantiated by OSGi:

        class WantsToUseComponentB {
            public void doSomethingWithComponentB() {
                // how do I get componentB??? Something like this maybe?
                ComponentB component = (ComponentB) someComponentRegistry.getComponent(ComponentB.class.getName());
            }
        }

    I see the following options right now:

    1. Use a ServiceTracker in the Activator to get the service of ComponentBundleA.class.getName() (I have tried that already and it works, but it seems like too much overhead to me) and make it available via a static factory method:

        public class Activator {
            private static ServiceTracker componentBServiceTracker;

            public void start(BundleContext context) {
                componentBServiceTracker = new ServiceTracker(context, ComponentB.class.getName(), null);
                componentBServiceTracker.open();   // the tracker must be opened before getService() returns anything
            }

            public static ComponentB getComponentB() {
                return (ComponentB) componentBServiceTracker.getService();
            }
        }

    2. Create some kind of registry where each component registers itself as soon as its bind() or activate() method is called:

        public class ComponentB {
            public void bind(ComponentA componentA) {
                someRegistry.registerComponent(this);
            }
        }

    or

        public class ComponentB {
            public void activate(ComponentContext context) {
                someRegistry.registerComponent(this);
            }
        }

    3. Use an existing registry inside OSGi / Equinox which already holds those instances. OSGi is already creating the instances and wiring them together, so it has the objects somewhere. But where? How can I get them?

    Conclusion: where does the class WantsToUseComponentB (which is NOT a component and NOT instantiated by OSGi) get an instance of ComponentB from? Are there any patterns or best practices? As I said, I managed to use a ServiceTracker in the Activator, but I thought this would be possible without it. What I am looking for is actually something like the bean container of the Spring Framework, where I can just say something like Container.getBean(ComponentA.BEAN_NAME). But I don't want to use Spring DS. I hope that was clear enough; otherwise I can also post some source code to explain in more detail. Thanks, Christoph

    UPDATE, in answer to Neil's comment ("Thanks for clarifying this question against the original version, but I think you still need to state why the third class cannot be created via something like DS."): Hmm, I don't know. Maybe there is a way, but I would need to refactor my whole framework to be based on DS, so that there are no "new MyThirdClass(arg1, arg2)" statements anymore. I don't really know how to do that, but I read something about ComponentFactories in DS. So instead of doing

        MyThirdClass object = new MyThirdClass(arg1, arg2);

    I might do something like this:

        ComponentFactory myThirdClassFactory = (ComponentFactory) myThirdClassServiceTracker.getService();
        if (myThirdClassFactory != null) {
            ComponentInstance instance = myThirdClassFactory.newInstance(null);
            MyThirdClass object = (MyThirdClass) instance.getInstance();
            object.setArg1("arg1");
            object.setArg2("arg2");
        } else {
            // here I can assume that some service of ComponentA or B went away,
            // so the MyThirdClass component cannot be created as there are missing dependencies
        }

    At the time of writing I don't know exactly how to use ComponentFactories, so this is only meant as pseudo code :) Thanks, Christoph

    Read the article

  • Context Menu (Right Click) keyboard shortcut in Mac OS X

    - by czerwin
    Is it really possible to invoke a context menu using a keyboard shortcut instead of clicking the right/alt mouse button in OS X? In particular, I would like a menu-key-like feature in OS X. I am wondering whether there is third-party software that provides such a feature. Please note that the Mouse Keys feature is not an option, as I don't want to depend on the position of the mouse cursor.

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods:

        public void AddItemSecurity(int itemId, int[] userIds)
        public int[] GetValidItemIds(int userId)

    Initially I'm thinking of storage on the form:

        itemId -> userId, userId, userId
        userId -> itemId, itemId, itemId

    AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item IDs are on the form 2007123456, 2010001234 (10 digits, where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidItemIds needs to be sub-second. Also, if there is an update on an existing itemId, I need to remove that itemId for users no longer in the list.

    I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item IDs had started at 0, I thought about creating a byte array of length MaxItemId / 8 for each user, and setting a true/false bit if the item is present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as memory-mapped files with the .NET 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the ID, stripping out the year, and storing an array per year could be a solution. The itemId -> userId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly.

    Question: Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)

    [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions:

        - Table with two columns (userid, itemid), both Int
        - Clustered index on the two columns
        - Added ~800,000 items for 180 users - a total of 144 million rows
        - Allocated 4 GB RAM for SQL Server
        - Dual Core 2.66 GHz laptop
        - SSD disk
        - Use a SqlDataReader to read all itemids into a List
        - Loop over all users

    If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results get worse: adding a third thread brings a lot of the queries up to 2 seconds, a fourth thread up to 4 seconds, and a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even with one thread. My test app takes some of the CPU time due to the tight loop, and SQL takes the rest. Which leads me to the conclusion that it won't scale very well, at least not on my tested hardware. Are there ways to optimize the database, say by storing an array of ints per user instead of one record per item? But this makes it harder to remove items.
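    For illustration, here is a minimal C# sketch of the per-user bitmap idea described above, backed by a memory-mapped file so the OS handles caching. The class name, file layout, and the assumption that item IDs have already been normalized to a 0-based index are all illustrative, not part of the original design.

        using System.IO;
        using System.IO.MemoryMappedFiles;

        // One bitmap file per user: bit N is set when the user may see item N.
        class UserItemBitmap
        {
            readonly MemoryMappedViewAccessor view;

            public UserItemBitmap(string path, long maxItemIndex)
            {
                var file = MemoryMappedFile.CreateFromFile(
                    path, FileMode.OpenOrCreate, null, (maxItemIndex / 8) + 1);
                view = file.CreateViewAccessor();
            }

            public void Set(long itemIndex, bool allowed)
            {
                byte current = view.ReadByte(itemIndex / 8);
                byte mask = (byte)(1 << (int)(itemIndex % 8));
                view.Write(itemIndex / 8, allowed ? (byte)(current | mask) : (byte)(current & ~mask));
            }

            public bool Get(long itemIndex)
            {
                return (view.ReadByte(itemIndex / 8) & (1 << (int)(itemIndex % 8))) != 0;
            }
        }

    A real implementation would keep and dispose the MemoryMappedFile, and would map the 10-digit IDs (year prefix stripped, as suggested above) onto the 0-based index.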

    Read the article

  • Triple Monitor Stand Recommendations

    - by Josh W.
    I've got two Acer X233Hbid 23" widescreen LCD monitors from Newegg back last summer, which weigh 10.5 lbs apiece. I want to buy a third Acer 23" (the closest I've found is the X235 on Newegg, which weighs in at 11.5 lbs), one of the new ATI video cards that will output to 3 displays, and then a monitor stand that will let me use them in portrait mode like the image below. I found the following: $260 - ERGOTRON 33-323-200 DS100 Triple-Monitor Desk Stand. I was wondering if anyone has any experience with this kind of setup and whether it would work for me or not. Thanks!

    Read the article

  • Virtual PC and hardware-assisted virtualization (VT-x) problem

    - by Vesa Huovi
    I've installed Microsoft's Virtual PC on Windows 7, but when I try to start a virtual machine I get the following error message:

        '<Virtual machine name>' could not be started because hardware-assisted virtualization is disabled. Please enable hardware virtualization in the BIOS settings and try again. If hardware virtualization is already enabled, you may have to disable the Trusted Execution Technology (TXT) setting in the BIOS or update the system BIOS.

    However, if I download and run the Hardware-Assisted Virtualization Detection Tool, it gives the following positive message:

        This computer is configured with hardware-assisted virtualization. This computer meets the processor requirements to run Windows Virtual PC. If this computer runs a supported edition of Windows® 7, you can install Windows Virtual PC.

    I've also used the MSR Walker in the third-party utility CrystalCPUID to examine MSR 0x3a on both processors in my system, and it reads 0x5 (0x4 = VT enabled, 0x1 = VT lock), as expected. Does anyone have any ideas of what else to check? Thanks.

    Read the article

  • 3 or 4 monitors with Nvidia and Ubuntu

    - by Jason
    I saw that you are (were?) running 4 monitors with Ubuntu 8.10 and two Nvidia cards (http://stackoverflow.com/questions/27113/how-to-use-3-monitors). I was curious whether you were doing this with Xinerama, a hacked-up TwinView config, multiple X screens, or some other method? Does it work with Compiz? I intend to run my Dell 30" in the middle with two 1280x1024 monitors on the sides, continue to use one X screen, and run Compiz, on Ubuntu 9.04. Currently I am using 2 monitors with TwinView and Compiz, which runs fantastically. I just can't get the third monitor running (unless I enable it on its own X screen and then enable Xinerama so windows can be dragged as if it were all one X screen, but this breaks Compiz, and I don't care much for having separate X screens). I am very interested in knowing how you set up 4 monitors with 2 GPUs. Thanks!
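    For context, a rough sketch of the Xinerama variant discussed above, as it could appear in /etc/X11/xorg.conf. The identifiers are placeholders, each screen would still need its own Device/Monitor/Screen sections, and, as noted above, enabling Xinerama disables Compiz.

        Section "ServerLayout"
            Identifier "ThreeHeads"
            Screen     0 "CenterScreen" 0 0
            Screen     1 "LeftScreen"  LeftOf  "CenterScreen"
            Screen     2 "RightScreen" RightOf "CenterScreen"
            Option     "Xinerama" "on"
        EndSection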

    Read the article

  • Re-using port 443 for another service - is it possible?

    - by Donald Matheson
    The ultimate goal is to allow a remote data connection service to operate on port 443 on the SQL Server machine. The application accessing the connection is behind a firewall, and it is because of the client's reluctance to open another port that I have been asked to try to get this working. The current environment is Windows 2003 R2 (SP2) and SQL Server 2005. IIS is not installed, but when I try to install the third-party connection software (SequeLink) it won't install, as it reports that something is still configured on/using port 443. Netstat does not show anything listening on the port, and I've tried editing the system32\drivers\etc\services file to remove any reference to port 443, and also using sc delete to remove the HTTP and HTTPFilter ("HTTP SSL" in the services console) services to see if this would help, rebooting after each change. What could still be using the port? Is what I'm trying even possible (I have my doubts, but have to investigate every avenue)? Any help/pointers would be greatly appreciated.
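    For diagnosing what might still be holding the port, a couple of built-in commands can help; this is only a sketch, and the PID value is a placeholder taken from the netstat output.

        rem list listeners with owning PID and executable name, filtered to port 443
        netstat -anob | findstr ":443"

        rem map a PID from the netstat output to its process and hosted services
        tasklist /svc /FI "PID eq 1234"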

    Read the article

  • Increase text size in Ubuntu 10.04 due to having large resolution/monitors

    - by Sridhar Ratnakumar
    I have 24" dual monitors with 1920x1080 resolution on both of them; consequently the text appears very small. I use the following text-intensive applications frequently:

        - Web browser (Google Chrome)
        - IDE (Komodo)
        - Terminal (Gnome Terminal)
        - Email (Thunderbird)

    I can configure the text size in the IDE, terminal and email client. But for Chrome it is not a good idea to set a proportional font size, because often one wants the entire site to be zoomed, not just the proportional fonts. So I am asking:

        - Is it possible to increase the DPI in Ubuntu (much like on Windows) so as to increase the text size across all apps?
        - OR is it possible to set a permanent 'zoom' in Google Chrome, using a third-party extension maybe?

    I am using Ubuntu 10.04 (Lucid Lynx).
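    A minimal sketch of the DPI approach on 10.04 (GNOME 2): the font-rendering DPI can be raised per user from the command line. The value 120 is just an example, and applications need to be restarted to pick it up.

        # raise the font-rendering DPI for the current user (Lucid / GNOME 2)
        gconftool-2 --type float --set /desktop/gnome/font_rendering/dpi 120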

    Read the article

  • bind9 dns proxy

    - by Zulakis
    We are offering multiple SSL-enabled services on our local network. To avoid certificate warnings, we bought certificates for server.ourdomain.tld and firewall.ourdomain.tld. We then created a zone on our local DNS server in which we pointed those hosts to the corresponding private IPs. Now, each time another record for ourdomain.tld, for example www.ourdomain.tld, is changed, we need to update it on both our public DNS server AND the local DNS server. I would like our local BIND DNS to serve all the information from our public DNS, but serve different information for these 2 hosts. I know I could simply put our private IPs into our public DNS, but I don't want that for security reasons. The Internet DNS server is managed by a third party, while we have full control of the intranet one. Because of this, I am looking for a solution which lets the intranet server retrieve the records from the Internet one.
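    A minimal sketch of one common way to do this with BIND: make the internal server authoritative for just the two hostnames and let every other name under ourdomain.tld resolve against the public DNS as usual. The address and file paths below are placeholders.

        // named.conf on the internal DNS server (sketch)
        options {
            recursion yes;                  // all other names resolve normally
            forwarders { 203.0.113.53; };   // optional: the third-party public server
        };

        // single-host zones that override only these two names
        zone "server.ourdomain.tld"   { type master; file "/etc/bind/db.server.ourdomain.tld"; };
        zone "firewall.ourdomain.tld" { type master; file "/etc/bind/db.firewall.ourdomain.tld"; };

    Each of the two zone files then needs only an SOA/NS record plus an A record pointing at the private IP, so no other ourdomain.tld records have to be duplicated locally.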

    Read the article

  • Can't connect to Apple Time Capsule in home network using Home Plugs from Win 7 Machine

    - by Eugene
    I have the following home network setup with subnet 255.255.255.0, but I recently moved my Time Capsule to a different location when I added a third Home Plug, and I can no longer ping it or map a network drive to it from the Windows 7 machine. However, using AirPort Utility on the Windows 7 machine I can manually configure the Time Capsule. Using a MacBook on WIFI network 1 or 2, I can back up to the Time Capsule, so it is accessible via both the router's WIFI network and the Time Capsule's WIFI network. The Time Capsule is set to BRIDGE mode, i.e. no NAT or DHCP server enabled. Any bright sparks out there who can help diagnose the problem?

        Router (192.168.1.254)  WIFI Network 1
        |
        |---- Home Plug One
        |
        |---- Home Plug Two
        |       |---- Computer A, Windows 7 (192.168.1.160)
        |       |---- Printer (192.168.1.69)
        |
        |---- Home Plug Three
                |---- Apple Time Capsule (192.168.1.150)  WIFI Network 2
                        |---- Smart TV (192.168.1.70)
                        |---- Apple TV (192.168.1.4)

    Read the article

  • In Visio 2010, how can I create a mandatory, non-identifying relationship between two database tables

    - by Cam Jackson
    I'm working in MS Visio 2010. This is the relevant part of my ERD: the relationship between Event and Adventure is correct: there's a foreign key from Event to Adventure, and that FK is part of Event's primary key. However, what I can't figure out is how to make the relationship line from Adventure to AccomodationType be the same, without making that relationship part of the PK of Adventure. When I look at the 'Miscellaneous' properties of that relationship line, I want it to be:

        - Cardinality: Zero or more
        - Relationship type: Non-identifying
        - Child has parent: Not optional (mandatory)

    But the checkbox for the third property is greyed out, and toggles between True/False as I make the relationship non-identifying/identifying. The only way I could figure out was to disconnect the two columns from the 'Definition' tab, which then un-greys the 'Optional' checkbox, but then I lose the foreign key property on the accomType column, and while the relationship symbols are correct, the line remains dotted. Any ideas, anyone?

    Read the article

  • Make Windows Vista file explorer act normally

    - by user25866
    Is there some file I can remove or something I can do to globally ensure that Windows Vista/XP/etc. doesn't do annoying things? Annoying things:

        1) Hide the file extension.
        2) All these "meta" columns I couldn't care less about in "details" view (Rating, Album, Date taken, Assistant's name, Artist, 35mm focal length, City, Other City, etc.). All I want are Name, Size, Date created, Date modified, and file extension - MAYBE file chmod-style permission settings.
        3) That garbage in the left pane known as "Favorite Links" (Documents, Desktop, Photos, Music, etc.).
        4) Switching between detail view, large icon view, thumbnail view, list view, and tiles when I go to different folders. All I want is detail view, with the same columns, every time.

    That's it. I shouldn't have to get third-party software to make my file system browsable, but if I need to, so be it... Why are all these settings buried away? It feels like I have to apply them to each folder, every time.

    Read the article

  • Why does my $LD_LIBRARY_PATH get unset when using screen with bash?

    - by UltraNurd
    This is related to http://superuser.com/questions/27376/why-does-my-ld-library-path-get-unset-launching-terminal, but with a different set of symptoms. First, /usr/bin/screen is setuid, as per the other question. Second, the default shell on this system is /bin/tcsh for various historical reasons, and we're not allowed to chsh to /bin/bash, so I typically run bash manually immediately after login. Third, I almost always use screen, but I want ctrl-a ctrl-c in screen to create a new bash "tab", so I always invoke bash first. That is:

        {~} $ echo $SHELL
        /bin/tcsh
        {~} $ bash
        [~] echo $SHELL
        /bin/bash
        [~] screen -U
        [~]

    ...and when reconnecting:

        {~} $ echo $SHELL
        /bin/tcsh
        {~} $ screen -dUr
        [~] echo $SHELL
        /bin/bash
        [~]

    However, my $LD_LIBRARY_PATH is there in tcsh, there in bash, but empty once I run screen; it is still present if I just run screen from tcsh, but then I get new tcsh "tabs" when I use ctrl-a ctrl-c in screen. Any ideas?
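    This is consistent with screen being setuid: the dynamic linker drops LD_LIBRARY_PATH from the environment of setuid binaries, so the variable has to be re-exported by the shells screen spawns. A minimal sketch of that workaround (the library path is a placeholder), which also makes ctrl-a ctrl-c open bash windows without having to run bash first:

        # ~/.screenrc -- have screen create bash windows instead of the login shell
        shell /bin/bash

        # ~/.bashrc -- re-export the variable inside every new screen window
        export LD_LIBRARY_PATH=/opt/mylibs/lib:$LD_LIBRARY_PATH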

    Read the article

  • SBS 08 Backup fail

    - by Bastien974
    Hi, I'm trying to back up my SBS 2008 server (only C:) with Windows Server Backup. It fails a few minutes after it starts:

        Backup started at '08/12/2009 1:27:23 PM' failed as Volume Shadow copy operation failed for backup volumes with following error code '2155348022'. Please rerun backup once issue is resolved.

    In the Event Viewer I have lots of errors:

        VSS: 12289
        SQLVDI: 1
        MSSQL$MICROSOFT##SSEE: 18210
        MSSQL$MICROSOFT##SSEE: 3041
        SQLWRITER: 24583

    All VSS and SQL services are started. I have WSUS 3.0 and Exchange 2007. I don't have any third-party backup software running at the same time. Thanks for your help!
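    A quick way to narrow down which VSS writer is failing is the built-in vssadmin tool; this is only a diagnostic sketch, best run from an elevated command prompt right after a failed backup.

        rem show all VSS writers and their current state / last error
        vssadmin list writers

        rem show the registered shadow copy providers as well
        vssadmin list providers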

    Read the article

  • Tips for teaching Linux to beginners?

    - by chiborg
    I will teach Linux to people between the ages of 20 and 75 with no prior Linux knowledge. I want to teach some basic concepts (what's an OS, what's a file system) and some practical knowledge: how to install it, network configuration, setting up an email client, installing software with a package manager, etc. I have held a system administrator course in the past, but was under the impression that my method of teaching was not adequate. I explained what I was about to show, showed the students on the projector, told them to repeat it on their computers, and summarized what they should have learned. They could ask questions all the time. But I fear they remembered only one-third of the knowledge I taught them. I have two questions here: Are there better methods to teach this particular subject in a classroom equipped with computers? Are there some tricks to "slow me down" when I teach stuff that I know inside out?

    Read the article

  • Windows server's HDD Spin down daily/nightly - Does it makes sense?

    - by Riccardo
    A Windows Server 2003 R2 machine has the following hard disk configuration:

        - 3 internal hard disks attached to a 3ware unit, configured as RAID 1 + a spare unit
        - 3 external USB backup disks: 2 Verbatim 1TB (Samsung HD103SI) + 1 Western Digital 1TB (WD10EADS)

    The server runs 365 days per year, 24 hours a day. However:

        - during the daytime, server/user usage is limited to the internal hard disks
        - at nighttime there is no user usage apart from scheduled maintenance tasks; basically the server is idle from 7 PM to 8 AM, apart from nightly backups (a few hours)

    I was wondering whether:

        (a) it makes sense to let Windows manage power savings, allowing disks to spin down accordingly, OR to leave the disks always on, to avoid premature wear due to continuous spin up/down
        (b) it makes sense to leave the internal disks always on and force the external disks to power down while idle (this requires third-party tools, such as Verbatim's Green Button utility)

    Your thoughts?
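    For reference, a sketch of how option (a) can be driven from the built-in power scheme on Server 2003 from the command line; the scheme name and timeout are examples only.

        rem spin disks down after 20 minutes of idle time on AC power
        powercfg /change "Always On" /disk-timeout-ac 20

        rem a value of 0 disables spin-down entirely (disks stay always on)
        powercfg /change "Always On" /disk-timeout-ac 0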

    Read the article

  • How do the routers communicate with each other ?

    - by Berkay
    Let's say that I want to make a request to a web page which is hosted in Europe (I live in the USA). My packets only contain the IP address of the web page; first the domain-name-to-IP-address translation is done, then my packets start their journey to Europe. I assume that MAC addresses are never used in this situation, or are they? First, my packets deal with many routers on the way; how do these routers communicate with each other? Are router addresses added to my packet headers? Second, is there a specific path for router-to-router communication, or which conditions affect this route? Third, to cross the Atlantic Ocean, are cables used or... ?
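    As an aside, a traceroute makes this hop-by-hop path visible: each line of output is one of the routers the packet passed through on its way to the destination. The hostname below is a placeholder.

        # list the routers ("hops") between this machine and a host in Europe
        traceroute www.example.eu   # the Windows equivalent is: tracert www.example.eu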

    Read the article

  • Recovering OST file without profile

    - by Philippe
    We have a Microsoft Exchange Server 2007 and offer this solution to many customers. Recently, a customer had his personal Exchange server crash (which is what made him our customer). He called in a technician to see if he could repair the server before calling us, but said tech wasn't able to do anything for them. Now that all his mailboxes are on our server, he would like to transfer his old emails over to the new profile, but the tech deleted all the profiles on the client machines while trying to repair his Exchange server. So my customer still has the OST files, but they are not associated with any profile. Is there any way to re-attach them to a profile, or to convert them into PST files that he could then import into his new profile? The only things I found were third-party programs that could do the conversion, but I am wondering if Microsoft has any tool that could re-attach the OST to a new profile. I have also tried scanpst.exe and scanost.exe to no avail. Thank you

    Read the article

  • Cheap windows shared hosting service

    - by Elangovan
    Hi, I have recently purchased a domain from GoDaddy, and now I am looking for cheap Windows hosting. Following are my requirements:

        - Shared Windows hosting
        - Should be cheap
        - Should have at least one SQL Server database and one MySQL database
        - Should support at least ASP.NET 3.5 and PHP
        - Support for ASP.NET MVC would be good (not a problem if it is unavailable)
        - Should be able to install third-party blog software
        - Bandwidth, total space and performance are not very important
        - Silverlight is also an added advantage (not a problem if it is unavailable)
        - There should be no advertisement or banner added by the hosting company to the site
        - Should have support for subdomains

    Read the article

  • Worth it to move /var to physical disk vs logical?

    - by Tammer Ibrahim
    Brief question about partition layout. I use an SSD for the /, /boot, /usr, and /home partitions. I'd like to move /var to a mechanical disk to minimize writes to the SSD. I'm mainly concerned about maximizing drive life rather than maximizing performance (although I obviously wouldn't want to cripple my server). My mechanical disks consist of two drives sharing LVM, and a third used for nightly rsync backups. I also have a bunch of old 2.5in hard disks lying around. My question is: should I simply create a new LVM volume '/var' on my primary data store, or would it be worth the increased energy consumption (in terms of maximizing the lifetime of the LVMed drives) to install a low-volume 2.5in disk to use just for /var? On a more general level, my question is about the trade-offs of placing OS mounts on the same physical volumes as my data. Thanks for any help!
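    A minimal sketch of the "new LVM volume for /var" option; the volume group name, size and filesystem are placeholders, and the data would be copied over with services stopped or from single-user mode.

        # carve a dedicated logical volume for /var out of the existing volume group
        lvcreate -L 20G -n var vg_data
        mkfs.ext4 /dev/vg_data/var

        # after copying the data, mount it permanently via /etc/fstab:
        #   /dev/vg_data/var  /var  ext4  defaults,noatime  0  2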

    Read the article

  • Best 2x 4GB RAM for a Thinkpad X200s

    - by Tommy Jakobsen
    Hi, I need 2x 4GB RAM modules, a total of 8GB, for my upcoming Thinkpad X200s laptop. Before buying, I would like your advice on which modules to choose. I've been looking at Corsair's Value Select (P/N: CM3X4GSD1066) RAM, because in my experience they produce good RAM modules. However, Corsair lists 7 clock cycles for their modules while Lenovo lists 5. What do you think? Are the Lenovo modules the best choice - are they the fastest/most stable - or is it the Corsair modules? Or modules from a third vendor? Thanks in advance!

    Read the article

  • Grafting LINQ onto C# 2 library

    - by P Daddy
    I'm writing a data access layer. It will have C# 2 and C# 3 clients, so I'm compiling against the 2.0 framework. Although I encourage the use of stored procedures, I'm still trying to provide a fairly complete ability to perform ad-hoc queries, and I have this working fairly well already. For the convenience of C# 3 clients, I'm trying to provide as much compatibility with LINQ query syntax as I can. Jon Skeet noticed that LINQ query expressions are duck typed, so I don't have to have an IQueryable and IQueryProvider (or IEnumerable<T>) to use them; I just have to provide methods with the correct signatures. So I got Select, Where, OrderBy, OrderByDescending, ThenBy, and ThenByDescending working. Where I need help is with Join and GroupJoin. I've got them working, but only for one join. A brief compilable example of what I have is this:

        // .NET 2.0 doesn't define the Func<...> delegates, so let's define some workalikes
        delegate TResult FakeFunc<T, TResult>(T arg);
        delegate TResult FakeFunc<T1, T2, TResult>(T1 arg1, T2 arg2);

        abstract class Projection
        {
            public static Condition operator ==(Projection a, Projection b)
            {
                return new EqualsCondition(a, b);
            }
            public static Condition operator !=(Projection a, Projection b)
            {
                throw new NotImplementedException();
            }
        }

        class ColumnProjection : Projection
        {
            readonly Table table;
            readonly string columnName;

            public ColumnProjection(Table table, string columnName)
            {
                this.table = table;
                this.columnName = columnName;
            }
        }

        abstract class Condition { }

        class EqualsCondition : Condition
        {
            readonly Projection a;
            readonly Projection b;

            public EqualsCondition(Projection a, Projection b)
            {
                this.a = a;
                this.b = b;
            }
        }

        class TableView
        {
            readonly Table table;
            readonly Projection[] projections;

            public TableView(Table table, Projection[] projections)
            {
                this.table = table;
                this.projections = projections;
            }
        }

        class Table
        {
            public Projection this[string columnName]
            {
                get { return new ColumnProjection(this, columnName); }
            }

            public TableView Select(params Projection[] projections)
            {
                return new TableView(this, projections);
            }

            public TableView Select(FakeFunc<Table, Projection[]> projections)
            {
                return new TableView(this, projections(this));
            }

            public Table Join(Table other, Condition condition)
            {
                return new JoinedTable(this, other, condition);
            }

            public TableView Join(Table inner, FakeFunc<Table, Projection> outerKeySelector,
                                  FakeFunc<Table, Projection> innerKeySelector,
                                  FakeFunc<Table, Table, Projection[]> resultSelector)
            {
                Table join = new JoinedTable(this, inner,
                    new EqualsCondition(outerKeySelector(this), innerKeySelector(inner)));
                return join.Select(resultSelector(this, inner));
            }
        }

        class JoinedTable : Table
        {
            readonly Table left;
            readonly Table right;
            readonly Condition condition;

            public JoinedTable(Table left, Table right, Condition condition)
            {
                this.left = left;
                this.right = right;
                this.condition = condition;
            }
        }

    This allows me to use a fairly decent syntax in C# 2:

        Table table1 = new Table();
        Table table2 = new Table();

        TableView result = table1
            .Join(table2, table1["ID"] == table2["ID"])
            .Select(table1["ID"], table2["Description"]);

    But an even nicer syntax in C# 3:

        TableView result = from t1 in table1
                           join t2 in table2 on t1["ID"] equals t2["ID"]
                           select new[] { t1["ID"], t2["Description"] };

    This works well and gives me identical results to the first case. The problem comes when I want to join in a third table:

        TableView result = from t1 in table1
                           join t2 in table2 on t1["ID"] equals t2["ID"]
                           join t3 in table3 on t1["ID"] equals t3["ID"]
                           select new[] { t1["ID"], t2["Description"], t3["Foo"] };

    Now I get an error (Cannot implicitly convert type 'AnonymousType#1' to 'Projection[]'), presumably because the second join is trying to join the third table to an anonymous type containing the first two tables. This anonymous type, of course, doesn't have a Join method. Any hints on how I can do this?

    Read the article

  • Problems with Xen installation

    - by Rodnower
    Hello, I have CentOS 5.4 installed. Now I'm trying to install Xen without connecting to the Internet (I have no driver for my modem, so I can only search the Internet from Windows). All I have are the 7 installation disks. The first thing I did was look for some kind of add/remove-programs wizard, but it needed a connection to the Internet. The second thing I tried was to find the Xen RPM on the disks and install it, but I failed on a dependency of a dependency. The third thing I attempted was to boot from the first disk and do an upgrade, but that was also unsuccessful... So my question is: is there some way to install Xen from the CentOS installation disks without a network? Thank you in advance.
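    A sketch of the usual offline route on CentOS 5: the install media ship with a disabled "c5-media" yum repository that points at the mounted disc, so packages and their dependencies can be resolved from the DVDs without a network. The mount point and package list below assume a stock 5.4 media set.

        # mount the install DVD where the bundled CentOS-Media.repo expects it
        mkdir -p /media/cdrom
        mount /dev/cdrom /media/cdrom

        # install from the media repo only, then reboot into the new kernel-xen entry
        yum --disablerepo=\* --enablerepo=c5-media install xen kernel-xen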

    Read the article

  • Logstash shipper & server on the samebox

    - by keftes
    I'm trying to set up a central logstash configuration. However, I would like to send my logs through syslog-ng and not a third-party shipper. This means that my logstash server accepts, via syslog-ng, all the logs from the agents. I then need to install a logstash process that will read from /var/log/syslog-clients/* and grab all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM. In theory I also need to configure a second logstash process that will read from redis, start indexing the logs and send them to elasticsearch. My questions: Do I have to use two different logstash processes (shipper & server) even if I am on the same box (I want one log server instance)? Is there any way to have just one logstash configuration and have the process read from syslog-ng, write to redis, and also read from redis and output to elasticsearch? Diagram of my setup:

        [client] --syslog-ng--> [log server] --syslog-ng--> logstash-shipper --> redis --> logstash-server --> elasticsearch <-- kibana
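    A sketch of what a single-process logstash configuration could look like; option names differ between logstash versions, and the hosts/keys are placeholders. The tag added by the redis input is what keeps events that were already queued from being written back to redis a second time.

        input {
          file  { path => "/var/log/syslog-clients/*"  type => "syslog" }
          redis { host => "127.0.0.1"  data_type => "list"  key => "logstash"  tags => ["queued"] }
        }
        output {
          if "queued" in [tags] {
            elasticsearch { host => "127.0.0.1" }   # index events pulled back off the queue
          } else {
            redis { host => "127.0.0.1"  data_type => "list"  key => "logstash" }
          }
        }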

    Read the article

  • Server location move and how can I move the files

    - by Bernhard
    Hello everyone, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. Now we have a new root server which, of course, is accessible by SSH :-) Now I need to move all the data from the old space, but there are a lot of GB of files. Is there a way to fetch all the files directly from the old FTP onto the new server's storage, and not via a third station (my local machine)? I've tried it with ftp, but without success; I think I used the wrong commands. Is there a way to set something like this up, including all files and directories? Thank you in advance, Bernhard
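    One way to do this directly on the new server is lftp's mirror command, which copies a whole FTP tree recursively; the host, credentials and paths below are placeholders.

        # run on the new root server over ssh: pull everything from the old FTP space
        lftp -u olduser,oldpassword ftp.old-host.example \
             -e "mirror --verbose / /var/www/imported-data; quit"

        # alternative: wget -m ftp://olduser:oldpassword@ftp.old-host.example/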

    Read the article
