Search Results

Search found 2484 results on 100 pages for 'maintain'.

  • freebsd dev server on virtualbox over windows

    - by g_kaya
    I need a unixy environment for development purposes. I hate doing things on Windows, but it is more stable for daily use and I don't have a Mac, so I'm having to use Windows 7. I want to run FreeBSD in a virtual machine, configure it to be the localhost server, be able to connect using ssh (within my home network), and be able to install the VirtualBox guest additions. If the guest additions aren't the best option, I can use Solaris or Linux flavours. I need no GUI. I don't know anything about network stuff, so I need a detailed explanation from wise people here, or a nice doc to read. Edit: To be more specific, as requested, I use the following on unices: django 1.4, apache, python (2.7), emacs, mysql, probably node.js, and bash scripting. I use Windows to be able to do daily things easily, like connecting to my tablet, browsing, and learning Java. And I don't want to use Linux as my desktop OS, because it gets broken a lot and dealing with WLAN problems and the like is annoying.
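
    A minimal sketch of the VirtualBox side, assuming a VM named "freebsd-dev" (the name and the bridged adapter are placeholders): NAT with a port forward is enough for SSH from the Windows host, while bridged networking makes the VM reachable from the whole home network.

        # SSH from the host only (NAT plus a port forward to the guest's port 22):
        VBoxManage modifyvm "freebsd-dev" --natpf1 "ssh,tcp,,2222,,22"
        ssh -p 2222 user@localhost

        # Reachable from any machine on the home network (bridged networking):
        VBoxManage modifyvm "freebsd-dev" --nic1 bridged --bridgeadapter1 "..."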

  • Git repo to maintain the app configurations on several servers

    - by user62904
    Hi! I need to version, in a Git repository, the configurations of a particular platform spread across multiple servers. Take into account that each of these servers has a completely different configuration, while the application is the same. What is the best way to do this? Create a branch for each server: repository.git:conf -- [branch Server 1] repository.git:conf -- [branch Server 2] repository.git:conf -- [branch Server N] Note: this method seems difficult to maintain, because for each change in the server configurations I need to create sub-branches, which becomes confusing. Create a single repo with a different directory for each server: repository.git:conf/Server 1 repository.git:conf/Server 2 repository.git:conf/Server N Note: this is easy to maintain. Create a repo for each server: repository_1.git:conf repository_2.git:conf repository_N.git:conf Note: this method requires me to create a repository for each new server. Are there other methods? What are the best practices in this case? Should I just use the one I feel most comfortable with? Tks, Gulden PT
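
    A minimal sketch of the single-repo layout (option 2), with hypothetical names; shared settings can live in a common/ directory so the per-server files stay small:

        git init conf && cd conf
        mkdir -p common server1 server2
        touch common/app.conf server1/app.conf server2/app.conf   # placeholder configs
        git add . && git commit -m "per-server app config, shared bits in common/"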

  • Can I use static routing to allow me to use my public IP from my LAN?

    - by jnm2
    I would like to be able to use the same hostname to connect to my computer from my phone whether I'm at home or away. Currently I have to maintain duplicate entries for remote desktop, for instance. My router doesn't seem to have a NAT loopback option. I have two routers in fact, a cable modem which goes straight to my main router which does wireless. I can add to the static routing tables on each. Can I use this to loopback the public IP or do I need different routers?
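
    For what it's worth, the usual workaround when a router lacks NAT loopback is split-horizon naming: on the LAN machines, point the hostname at the private address, e.g. with a hosts-file entry (the address and name below are examples):

        # C:\Windows\System32\drivers\etc\hosts  (or /etc/hosts)
        192.168.1.50    myhost.example.com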

  • Dependency diagramming / mapping tool [closed]

    - by Lars
    I am looking for a tool that allows me to easily create and maintain dependency maps of our mission-critical servers, apps, processes, etc. It needs to be intuitive and easy to work with, and be able to generate diagrams that clearly show the dependencies graphically. What would be some good tools for this? I have looked at videos for AssetGen Sysmap and BluePrint from Pathwaysystems.com, and they both seem to fit my needs, but there have got to be more good systems like them that I can look at. I want to make sure I pick the best system for our needs (and limited budget).

  • Open Source Outlook and Exchange Alternative (for Calendar and Tasks)

    - by Elmar Weber
    We're using an Exchange Server and Outlook in a private context. Since Exchange is getting more and more of a monster to maintain and is basically the only thing running on a Windows VM server, I'm looking for an alternative with the following properties: a server-based back-end that can be installed/deployed on my machine and is not dependent on any third-party services (e.g. Google Calendar); (easy) import of existing Outlook calendars, either via some feature of Exchange or by exporting and importing from Outlook; roughly the basic features of the Outlook calendar (categorizations, series, etc.); synchronization with the iPhone over wireless without a hitch (i.e. no duplicate or missing entries). Any recommendations?

  • Forward e-mail to multiple addresses with conditions

    - by Valera Leontyev
    I need to forward e-mails to different mail accounts based on different conditions. The aim is to create a mail notification scheme for my company. I'd like to set up a server on a dedicated mail domain for it. Is there any software (on Linux) that can do this? Examples: 1) forward all e-mail sent to [email protected] to x@x, y@y, z@z (no conditions) 2) forward e-mail sent to [email protected] where the subject contains '[finance]' to a@b and b@b 3) forward e-mail sent to [email protected] where the subject contains '[fault]' to s@s and s2@s. The receivers' domains are different. P.S. We currently use Gmail filters to get this functionality, but it's unstable and hard to maintain.
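
    One way to express rules like these is a Sieve filter (RFC 5228 plus the "copy" extension from RFC 3894) on a server such as Dovecot, with one script per notification address; a sketch of rule 2, reusing the asker's placeholder addresses:

        require ["copy"];

        # mail whose subject contains [finance] goes to both recipients
        if header :contains "subject" "[finance]" {
            redirect :copy "a@b";
            redirect :copy "b@b";
        }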

  • Best way to partition 1 TB (Linux and Windows 7)

    - by Simon
    Is there an intelligent way to partition 1 TB and be prepared for resizing/adding/deleting partitions? I was thinking about LVM, but as far as I remember, Windows 7 can't be installed on a logical volume, right? For now my plan is: ~150 GB for Windows 7 and other stuff (Visual Studio..., maybe I'll split it 100/50 or something like that) as simple NTFS; 850 GB as LVM for Linux (Ubuntu) and other stuff: virtual machines, etc. I'm mostly interested in how, and with what tools, I can easily maintain partitions for both systems.
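
    A rough sketch of that layout from a live CD (device names are examples and the commands are destructive): Windows 7 on a plain NTFS primary partition, the remainder as an LVM physical volume whose logical volumes can be resized later with lvextend/lvresize.

        parted -s /dev/sda mklabel msdos
        parted -s /dev/sda mkpart primary ntfs 1MiB 150GiB   # Windows 7 (NTFS)
        parted -s /dev/sda mkpart primary 150GiB 100%        # LVM space
        pvcreate /dev/sda2
        vgcreate vg0 /dev/sda2
        lvcreate -L 40G  -n ubuntu vg0                       # Ubuntu root
        lvcreate -L 200G -n vms    vg0                       # virtual machines, etc.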

  • Saving Proxy Settings from different networks

    - by elcool
    I go to different clients and they all have their own networks with different proxy settings. I'm always having to change the proxy settings and save the info in a notebook, which becomes a pain after a while. So, is there any good way to save all those proxies and have the right one recognized and loaded based on the network I'm on? Currently I maintain two: one in IE for clients, and none in Firefox for when I'm at home. And I think the last network I was on asked me to save the proxy, and I think it went to the system settings. But I don't know much about networking.

  • nginx redirects and rewrites

    - by ptheofan
    I'm closing a website but want to keep a couple of URLs working, plus a static HTML file to serve as the index. All old URLs should redirect to the root (/) except a couple of chosen locations. Here's an example of what I need to do. All of these should get a 301 permanent redirect to /: http://www.domain.tld/whatever/anything/really ==301==> http://www.domain.tld http://www.domain.tld/blabla ==301==> http://www.domain.tld http://www.domain.tld/ ==301==> http://www.domain.tld except for: http://www.domain.tld/special.html ==serve==> special.html The root should serve the default file (as specified in index): http://www.domain.tld ==serve==> somefile.html
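
    A sketch of one way to express this in nginx (0.8.42 or later for return with a URL; the names follow the question):

        server {
            listen 80;
            server_name www.domain.tld domain.tld;
            root /var/www/site;
            index somefile.html;

            location = /              { }   # "/" serves somefile.html via index
            location = /somefile.html { }   # target of the index internal redirect
            location = /special.html  { }   # the one page kept alive
            location / { return 301 http://www.domain.tld/; }
        }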

  • Software to automate event notifications

    - by user6781
    I maintain a small web site and people can send an email to request information about new events. I would like to automate the process of updating people about new events via email. Is there some easy existing software package to do this, where I can configure events with a date/description/etc. and have emails sent automatically? The alternative is to build a MySQL database and have some Python script in a cron job that does it. I'd like to know if this is a solved problem, though. I'm running on a shared host (Dreamhost, if it matters).
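
    The cron alternative really is only a few lines; a hypothetical sketch (the schema, addresses, and SMTP details are invented) that emails subscribers any events not yet announced:

        #!/usr/bin/env python3
        import sqlite3, smtplib
        from email.message import EmailMessage

        db = sqlite3.connect("events.db")
        rows = db.execute("SELECT date, description FROM events WHERE notified = 0").fetchall()
        if rows:
            msg = EmailMessage()
            msg["Subject"] = "New events"
            msg["From"] = "announce@example.com"
            msg["To"] = "subscribers@example.com"
            msg.set_content("\n".join("%s: %s" % row for row in rows))
            smtplib.SMTP("localhost").send_message(msg)   # hand off to the local MTA
            db.execute("UPDATE events SET notified = 1")
            db.commit()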

  • Manage Kickstart library with Puppet

    - by Tim Brigham
    I maintain a library of different kickstart configurations, mostly for CentOS 5 and 6. It has recently gotten to the point that I want to deduplicate as much of this information as possible. I am aware of a couple of options out there which can dynamically generate kickstart files; I'm not interested at this point unless I really need to go that route. I would like to create my kickstart files using a template along the following lines: deploy1-centos5.erb .... install=http://.../$arch/... repo=http://.../$arch/... .... My output naming scheme is "deploy1-centos5-x86_64". I'd like to be able to create several kickstart files from a given template: one for 32-bit, one for 64-bit, PPC, etc. This would work perfectly if I could readily set the value of arch each time the template is called to create a file. What is the most straightforward way to address this?
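
    One way to get a per-arch value into a single ERB template is a Puppet define whose parameters are visible to the template; a sketch with hypothetical paths and names:

        # Renders one kickstart file per resource title, e.g. "deploy1-centos5-x86_64".
        define kickstart::render ($arch, $template) {
          file { "/srv/kickstarts/${title}.cfg":
            content => template($template),   # the ERB can read <%= @arch %>
          }
        }

        kickstart::render { 'deploy1-centos5-x86_64':
          arch     => 'x86_64',
          template => 'kickstart/deploy1-centos5.erb',
        }
        kickstart::render { 'deploy1-centos5-i386':
          arch     => 'i386',
          template => 'kickstart/deploy1-centos5.erb',
        }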

  • In a virtual machine monitor such as VMware’s ESXi Server, how are shadow page tables implemented?

    - by ali01
    My understanding is that VMMs such as VMware's ESXi Server maintain shadow page tables to map virtual page addresses of guest operating systems directly to machine (hardware) addresses. I've been told that shadow page tables are then used directly by the processor's paging hardware to allow memory access in the VM to execute without translation overhead. I would like to understand a bit more about how the shadow page table mechanism works in a VMM. Is my high level understanding above correct? What kind of data structures are used in the implementation of shadow page tables? What is the flow of control from the guest operating system all the way to the hardware? How are memory access translations made for a guest operating system before its shadow page table is populated? How is page sharing supported? Short of straight up reading the source code of an open source VMM, what resources can I look into to learn more about hardware virtualization?

  • How do I stop the natd log spam on Mac OS X with Internet Sharing?

    - by pukku
    Hi! I have InternetSharing enabled on my Mac (Leopard), so that my iPhone can get access to the internet in a wireless environment. Every second or so, I get the following error sent to system.log: 7/2/09 2:12:33 PM natd[20861] failed to write packet back (No route to host) Sometimes, the error is 7/2/09 2:12:33 PM natd[20861] failed to write packet back (Host is down) Is there some way to either fix the problem that is causing these errors (which I'm guessing is because the iPhone doesn't maintain a wireless connection when not in use) or to prevent them from being logged? Thanks, Ricky

  • How do I get a static IP address for my teapot?

    - by Joe
    I maintain this teapot. It returns a 418 error when it is pinged, and it is pinged regularly by people arriving from the relevant Wikipedia page. (For those interested, and there appear to be a few of you, the relevant story is here.) It sits on a shelf in my office in the Computer Science department of a university, and the support guys were kind enough to give it a dedicated IP address some years ago. My contract is coming to an end in the next few weeks, and it's occurred to me that I'm going to have to do something with the teapot. I'd like to take it home, but I have no clue how to explain to my home broadband supplier that I want a dedicated IP address coming to my house so that people can ping a teapot. Is there a reasonable way of having a server on a shelf in my house that people can ping via an IP address? What search terms can I use to find a solution to this problem?

  • How to use filegroups for DB split?

    - by Robin Jain
    In my project I have one DB used for everything. I want to break it into two databases: static tables holding lookup values in one DB, and tables with dynamic data in the other. My problem is how I would use a foreign key constraint between those two DBs. Can someone help me out and suggest a way to proceed, ideally with an example? I thought of using synonyms for tables and then constraints on the synonyms, but later I learned that synonyms can't be used in constraints. I need to maintain relationships among the tables from both DBs, because the issue is with updates: with a new release I just want to update the lookup tables, and that is why I want to split my DB. I want to know how filegroups could be used for this.
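
    On the filegroup part: filegroups separate storage inside a single database, so foreign keys keep working while the lookup tables get their own files that can be managed per release; a sketch (names and path are examples):

        ALTER DATABASE MyDb ADD FILEGROUP StaticData;
        ALTER DATABASE MyDb ADD FILE
            (NAME = StaticData1, FILENAME = 'D:\Data\StaticData1.ndf')
            TO FILEGROUP StaticData;

        CREATE TABLE dbo.CountryLookup (
            CountryId INT PRIMARY KEY,
            Name      NVARCHAR(100) NOT NULL
        ) ON StaticData;  -- dynamic tables stay on [PRIMARY] and can reference this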

  • Apache logs other user read permissions

    - by user2344668
    We have several developers who maintain the system, and I want them to easily read the log files in /var/log/httpd without needing root access. I set the read permission for 'other' users, but when I run tail on the log files I get permission denied:

        [root@ourserver httpd]# chmod -R go+r /var/log/httpd
        [root@ourserver httpd]# ls -la
        drwxr--r--  13 root root 4096 Oct 25 03:31 .
        drwxr-xr-x.  6 root root 4096 Oct 20 03:24 ..
        drwxr-xr-x   2 root root 4096 Oct 20 03:24 oursite.com
        drwxr-xr-x   2 root root 4096 Oct 20 03:24 oursite2.com
        -rw-r--r--   1 root root    0 May  7 03:46 access_log
        -rw-r--r--   1 root root 3446 Oct 24 22:05 error_log

        [me@ourserver ~]$ tail -f /var/log/httpd/oursite.com/error.log
        tail: cannot open `/var/log/httpd/oursite/error.log' for reading: Permission denied

    Maybe I'm missing something about how permissions work, but I'm not finding any easy answers on it.
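
    The read bit alone is not enough here: entering a directory (and opening files inside it) requires the execute/search bit, and the listing shows "." as drwxr--r--. A sketch of the usual fix (capital X sets execute on directories only):

        chmod -R o+rX /var/log/httpd

    A tidier variant is a dedicated group, e.g. chgrp -R developers /var/log/httpd && chmod -R g+rX /var/log/httpd.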

  • How to redirect a specific url through a proxy for multiple services?

    - by CrystalFire
    I have a website hosted on 000webhost.com for free. I am unable to connect directly to the site because Comcast has blocked a portion of 000webhost's servers for free accounts due to other people hosting malicious content. In order to maintain my website, I cannot use my computer to directly connect to the server. I am wondering if there is a way by which I can specifically forward attempts to access the server through a proxy, transparently. The current system that I am on is Windows, but I also have systems running Mac OSX and Linux, so solutions for any system could be fine. I've found answers which work for http, but I'm looking for a solution which will let me use all the other functions as well, such as ftp and ssh.
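
    One tool-agnostic option (my assumption, not something from the question) is proxychains, which wraps arbitrary TCP clients such as ssh and ftp; a sketch assuming a SOCKS proxy at 127.0.0.1:1080, e.g. an "ssh -D 1080" tunnel through some unblocked host:

        # /etc/proxychains.conf (or ~/.proxychains/proxychains.conf)
        strict_chain
        [ProxyList]
        socks5 127.0.0.1 1080

        # then wrap each command:
        proxychains ssh user@your-000webhost-server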

  • Setting up a reverse proxy [on hold]

    - by mrwooster
    I am looking for the best solution for setting up a very low maintenance reverse proxy for a production website (example.com). The setup is as follows: A blog will be hosted on Heroku and will reside at example.com/blog. A static info page, hosted on S3, will reside at example.com/signup. A dynamic content management system, provided and hosted by an external vendor, will respond to requests for any other pages. The two solutions which come to mind are: use HAProxy, or ask the external vendor to reverse proxy requests for /blog and /signup. The obvious solution would be to use HAProxy but, if at all possible, I would like to avoid having to set up and maintain another server (especially such a critical one). I came across a company called Snapt which offers hosted HAProxy solutions, but it's more geared towards load balancing than reverse proxying. Option 2 is a possibility, but gives us very little control over changes and configuration. I see a lot of sites hosting blogs on /blog, so this must be fairly common practice. Am I missing an obvious solution?
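
    For scale, the HAProxy part of option 1 is small; a sketch (backend addresses are placeholders, and the Host header usually needs rewriting for Heroku and S3, which needs HAProxy 1.5 or later for http-request set-header):

        frontend www
            bind :80
            acl is_blog   path_beg /blog
            acl is_signup path_beg /signup
            use_backend blog   if is_blog
            use_backend signup if is_signup
            default_backend cms

        backend blog
            http-request set-header Host example-blog.herokuapp.com
            server heroku example-blog.herokuapp.com:80

        backend signup
            http-request set-header Host signup-bucket.s3-website-us-east-1.amazonaws.com
            server s3 signup-bucket.s3-website-us-east-1.amazonaws.com:80

        backend cms
            server vendor cms-vendor.example.com:80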

  • SCCM? Overkill?

    - by Le_Quack
    I.T. technician in a high school with around 1,600 students, 250 staff, and 800+ client computers, mostly running Windows 7. I'm looking for a better way to manage clients (deploy software, track changes, inventory, etc.). I like the look of the SCCM 2012 features, but the case studies seem to be aimed at large multi-site infrastructures rather than a single mid-sized site. Is SCCM suitable for a mid-sized single site, or is it aimed at much larger corporations? If so, what would be more suitable? Just a note about me and my situation: I work as a technician in a school as part of a team of 3. My boss seems content with a network that works (just about), not a productive, well-maintained network that is easy to run and maintain. I'm still fairly early on in my I.T. career, so sorry if I'm not up to speed on all products. EDIT: Thanks for all the help. I'll take a look at SCE and SCCM and get some proposals drawn up to take to my boss/deputy head.

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it, but I would like your opinion. The first approach I considered was: class Customer : EntityBase<Customer> { public void ChangeEmail(string email) { if(string.IsNullOrWhitespace(email)) throw new DomainException("..."); if(!email.IsEmail()) throw new DomainException(); if(email.Contains("@mailinator.com")) throw new DomainException(); } } I actually do not like this validation because even though I am encapsulating the validation logic in the correct entity, it violates the Open/Closed principle (open for extension but closed for modification), and I have found that when this principle is violated, code maintenance becomes a real pain as the application grows in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, and hard to maintain. But the real reason I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example, but in real life the validation could be more complex. So, following the philosophy of Udi Dahan (making roles explicit) and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this: class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer> { private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver; public bool IsSatisfiedBy(Customer customer) { return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email); } } But then I realized that in order to follow this approach I had to mutate my entities first in order to pass the value being validated, in this case the email, and mutating them would cause my domain events to be fired, which I wouldn't like to happen until the new email is valid. So after considering these approaches, I came up with this one, since I am going to implement a CQRS architecture: class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand> { public void IsValid(Customer entity, ChangeEmailCommand command) { if(!command.Email.HasValidDomain()) throw new DomainException("..."); } } Well, that's the main idea: the entity is passed to the validator in case we need some value from it to perform the validation, the command contains the data coming from the user, and since the validators are considered injectable objects they can have external dependencies injected if the validation requires it. Now the dilemma: I am happy with a design like this because my validation is encapsulated in individual objects, which brings many advantages: easy unit testing, easy maintenance, domain invariants explicitly expressed using the Ubiquitous Language, easy extension, centralized validation logic, and validators that can be used together to enforce complex domain rules. And even though I know I am placing the validation of my entities outside of them (you could argue that is a code smell: Anemic Domain), I think the trade-off is acceptable. But there is one thing that I have not figured out how to implement in a clean way: how should I use these components?
    Since they will be injected, they won't fit naturally inside my domain entities, so basically I see two options: pass the validators to each method of my entity, or validate my objects externally (from the command handler). I am not happy with option 1, so I will explain how I would do it with option 2: class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand> { private IEnumerable<IDomainInvariantValidator> validators; // the validators required for this command would be injected here public void Execute(ChangeEmailCommand command) { using (var t = this.unitOfWork.BeginTransaction()) { var customer = this.unitOfWork.Get<Customer>(command.CustomerId); this.validators.ForEach(x => x.IsValid(customer, command)); // at this point I know the command is valid // the call to ChangeEmail will fire domain events as needed customer.ChangeEmail(command.Email); t.Commit(); } } } Well, this is it. Can you give me your thoughts about this, or share your experiences with domain entity validation? EDIT: I think it is not clear from my question, but the real problem is: hiding the domain rules has serious implications for the future maintainability of the application, and domain rules change often during the life-cycle of the app. Hence, implementing them with this in mind lets us extend them easily. Now imagine a rules engine being implemented in the future: if the rules are encapsulated outside of the domain entities, this change would be easier to implement.

  • How to design a C / C++ library to be usable in many client languages?

    - by Brian Schimmel
    I'm planning to code a library that should be usable by a large number of people on a wide spectrum of platforms. What do I have to consider to design it right? To make this question more specific, there are four "subquestions" at the end. Choice of language: considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective-C, Python, Ruby, bash scripts, etc... Maybe I cannot target all of them, but if it's possible, I'll do it. Requirements: it would be too much to describe the full purpose of my library here, but there are some aspects that might be important to this question: The library itself will start out small but will definitely grow to enormous complexity, so it is not an option to maintain several versions in parallel. Most of the complexity will be hidden inside the library, though. The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way. Clients may change the objects, and the library must be notified thereof. The library may change the objects, and the client must be notified thereof, if it already has a handle to that object. The library must be multi-threaded, because it will maintain network connections to several other hosts. While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, notifying the client on success (or failure). Of course, answers are welcome whether they address my specific requirements or answer the question in a general way that matters to a wider audience! My assumptions, so far: here are some of the assumptions and conclusions I have gathered in the past months: Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template metaprogramming... as long as there is a portable compiler which handles it (think of gcc/g++). But my interface has to be a clean C interface that does not involve name mangling. Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values. If I use pointers, I think I should only use them to pass them back to the library, not to operate directly on the referenced memory. For usage in a C++ application, I might also offer an object-oriented interface (which is also prone to name mangling, so the app must either use the same compiler or include the library in source form). Is this also true for usage in C#? For usage in Java SE / Java EE, the Java Native Interface (JNI) applies. I have some basic knowledge about it, but I should definitely double-check it. Not all client languages handle multithreading well, so there should be a single thread talking to the client. For usage on Java ME, there is no such thing as JNI, but I might go with NestedVM. For usage in bash scripts, there must be an executable with a command-line interface. For the other client languages, I have no idea. For most client languages, it would be nice to have some kind of adapter interface written in that language.
    I think there are tools to automatically generate this for Java and some others. For object-oriented languages, it might be possible to create an object-oriented adapter which hides the fact that the interface to the library is function-based, but I don't know if it's worth the effort. Possible subquestions: Is this possible with manageable effort, or is it just too much portability? Are there any good books / websites about this kind of design criteria? Are any of my assumptions wrong? Which open source libraries are worth studying to learn from their design / interface / source? meta: This question is rather long; do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer.)
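
    To make the "clean C interface" idea concrete, a sketch of what such a header tends to look like (all names illustrative): opaque integer handles rather than raw pointers, plain functions only, and a callback for the notification requirements.

        /* mylib.h -- illustrative sketch only */
        #ifndef MYLIB_H
        #define MYLIB_H

        #include <stddef.h>
        #include <stdint.h>

        #ifdef __cplusplus
        extern "C" {
        #endif

        typedef uint64_t mylib_handle;                       /* opaque object id */
        typedef void (*mylib_on_change)(mylib_handle obj, void *user_data);

        int  mylib_init(void);
        int  mylib_get_name(mylib_handle obj, char *buf, size_t buflen);
        int  mylib_subscribe(mylib_handle obj, mylib_on_change cb, void *user_data);
        void mylib_shutdown(void);

        #ifdef __cplusplus
        }
        #endif
        #endif /* MYLIB_H */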

  • How to design a high-level application protocol for metadata syncing between devices and server?

    - by Jaanus
    I am looking for guidance on how to best think about designing a high-level application protocol to sync metadata between end-user devices and a server. My goal: the user can interact with the application data on any device, or on the web. The purpose of this protocol is to communicate changes made on one endpoint to other endpoints through the server, and to ensure all devices maintain a consistent picture of the application data. If the user makes changes on one device or on the web, the protocol will push data to the central repository, from where other devices can pull it. Some other design thoughts: I call it "metadata syncing" because the payloads will be quite small, in the form of object IDs and small metadata about those IDs. When client endpoints retrieve new metadata over this protocol, they will fetch actual object data from an external source based on this metadata. Fetching the "real" object data is out of scope; I'm only talking about metadata syncing here. Using HTTP for transport and JSON as the payload container. The question is basically about how to best design the JSON payload schema. I want this to be easy to implement and maintain on the web and across desktop and mobile devices. The best approach feels like simple timer- or event-based HTTP request/response without any persistent channels. Also, you should not need a PhD to read it, and I want my spec to fit on 2 pages, not 200. Authentication and security are out of scope for this question: assume that the requests are secure and authenticated. The goal is eventual consistency of data on devices; it is not entirely realtime. For example, the user can make changes on one device while being offline. When going online again, the user would perform a "sync" operation to push local changes and retrieve remote changes. Having said that, the protocol should support both of these modes of operation: starting from scratch on a device, it should be able to pull the whole metadata picture; and "sync as you go": when looking at the data on two devices side by side and making changes, it should be easy to push those changes as short individual messages which the other device can receive near-realtime (subject to when it decides to contact the server for sync). As a concrete example, you can think of Dropbox (it is not what I'm working on, but it helps to understand the model): on a range of devices, the user can manage files and folders: move them around, create new ones, remove old ones, etc. In my context the "metadata" would be the file and folder structure, but not the actual file contents. And the metadata fields would be something like file/folder name and time of modification (all devices should see the same time of modification). Another example is IMAP. I have not read the protocol, but my goals (minus actual message bodies) are the same. It feels like there are two grand approaches to how this is done: transactional messages, where each change in the system is expressed as a delta and endpoints communicate with those deltas (example: DVCS changesets); and REST, communicating the object graph as a whole or in part, without worrying so much about the individual atomic changes. What I would like in the answers: Is there anything important I left out above? Constraints, goals? What is some good background reading on this? (I realize this is what many computer science courses talk about at great length and detail... I am hoping to short-circuit it by looking at some crash course or nuggets.)
    What are some good examples of such protocols that I could model mine after, or even use out of the box? (I mention Dropbox and IMAP above... I should probably read the IMAP RFC.)
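
    For illustration, a sketch of what one pull response might look like under the delta approach (all field names invented): the client stores the opaque cursor and presents it on the next sync, and each change carries only the small metadata discussed above.

        {
          "cursor": "opaque-server-token-1234",
          "changes": [
            { "id": "f81d4fae", "op": "update",
              "meta": { "name": "report.pdf", "parent": "root",
                        "modified": "2011-06-01T12:03:09Z" } },
            { "id": "a3c9e210", "op": "delete" }
          ]
        }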

  • Same source, multiple targets with different resources (Visual Studio .Net 2008)

    - by Mike Bell
    A set of software products differ only in their resource strings, binary resources, and the strings / graphics / product keys used by their Visual Studio Setup projects. What is the best way to create, organize, and maintain them? i.e., all the products essentially consist of the same core functionality, customized by graphics, strings, and other resource data to form each product. Imagine you are creating a set of products like "Excel for Bankers", "Excel for Gardeners", "Excel for CEOs", etc. Each product has the same functionality but differs in name, graphics, help files, included templates, etc. The environment in which these are being built is vanilla Windows.Forms / Visual Studio 2008 / C# / .Net. The ideal solution would be easy to maintain, e.g. if I introduce a new string or resource, projects I haven't added the resource to should fail at compile time, not run time. (And subsequent localization of the products should also be feasible.) Hopefully I've missed the blindingly obvious and easy way of doing all this. What is it? ============ Clarification(s) ================ By "product" I mean the package of software that gets installed by the installer and sold to the end user. Currently I have one solution, consisting of multiple projects (including a Setup project), which builds a set of assemblies and creates a single installer. What I need to produce is multiple products/installers, all with similar functionality, which are built from the same set of assemblies but differ in the set of resources used by one of the assemblies. What's the best way of doing this? ------------ The 95% Solution ----------------- Based upon Daminen_the_unbeliever's answer, a resource file per configuration can be achieved as follows: Create a class library project ("Satellite"). Delete the default .cs file and add a folder ("Default"). Create a resource file in the folder ("MyResources"). In Properties, set CustomToolNamespace to something appropriate (e.g. "XXX"). Make sure the access modifier for the resources is "Public". Add the resources. Edit the source code, referring to the resources in your code as XXX.MyResources.ResourceName. Create configurations for each product variant ("ConfigN"). For each product variant, create a folder ("VariantN"). Copy and paste the MyResources file into each VariantN folder. Unload the "Satellite" project and edit the .csproj file. For each "VariantN/MyResources" <Compile> or <EmbeddedResource> tag, add a Condition="'$(Configuration)' == 'ConfigN'" attribute. Save, reload the .csproj, and you're done. This creates a per-configuration resource file, which can (presumably) be further localized. Compile error messages are produced for any configuration where a resource is missing. The resource files can be localized using the standard method (create a second resource file (MyResources.fr.resx) and edit the .csproj as before). The reason this is a 95% solution is that resources used to initialize forms (e.g. form titles, button texts) can't easily be handled in the same manner; the easiest approach seems to be to overwrite these with values from the satellite assembly.
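
    The conditional items from the unload-and-edit steps come out roughly like this inside Satellite.csproj (the configuration and folder names follow the walkthrough above):

        <ItemGroup>
          <EmbeddedResource Include="Variant1\MyResources.resx"
                            Condition="'$(Configuration)' == 'Config1'" />
          <EmbeddedResource Include="Variant2\MyResources.resx"
                            Condition="'$(Configuration)' == 'Config2'" />
        </ItemGroup>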

  • After-meeting Free Pizza Social is back to Fladotnet's West Palm Beach .Net User Group

    - by Sam Abraham
    Sherlock Staffing is bringing back the free Pizza/Soda after-meeting social to Fladotnet's West Palm Beach .Net User Group. Group members will have ample time to network and share experiences while enjoying pizza and soda after each meeting. Alex Funkhouser, Sherlock Staffing's President and Chief Talent Agent, is a continuous supporter of the .Net community with Sherlock Staffing maintaining a strong presence in every user group and quickly stepping-in as sponsors to meet any arising community need. In addition to providing the Free Pizza and Soda, Sherlock Staffing will also maintain on-site presence to bring to members of the West Palm Beach .Net User Group the latest insider view on the Job Market and keep the group posted with available opportunities. Alex can be reached at: [email protected]. Check out Sherlock Staffing's Website at: http://www.sherstaff.com About Sherlock Staffing SherStaff is the premier staffing and consulting source for technical talent in Florida and beyond. The company provides recruiting and consulting services to both Fortune 1000 companies and to job candidates in a wide range of technology areas of expertise including the Microsoft Technologies, Oracle, WebSphere, Java/J2EE, and open source/Linux based technologies.  The primary focus is recruiting application developers, network engineers and database administrators. The company prides itself on the long term relationships established with both employers and employees to ensure placement of the best quality candidates in the top quality jobs.

  • Domain registration and DNS, what am I actually paying for? [on hold]

    - by jozxyqk
    Long story short, I'm quite confused as to exactly what is offered by domain registration and DNS service sites. When I go to the URL "http://google.com", my PC connects to a name server and gets the IP for "google.com", then connects to the IP and says: give me the page for "http://google.com". AFAIK there are many name servers and they all cache these bits of information in some hierarchical network, but ultimately a DNS record must come from a single source (not sure what this is called). There are different kinds of records, which might hold not an IP but an alias/redirect to other records, for example. Let's say I want my own domain name for some server. Maybe it even has a static IP, but I want a nicer thing for people to remember; or my ISP assigns dynamic IPs and I want a URL that always works; or my website is hosted on a shared machine, so the browser needs to send "http://mydnsname.com" to the webserver to distinguish it from other requests to the same IP but for different sites. Registering a domain costs a small amount of money per year. Where does this money go (not that I'm complaining :P)? Is that really all it costs to maintain the entire DNS system of name servers? If I just register the domain and nothing else, what do I get? Is that just reserving a name, or hosting WHOIS information, or have I paid for a DNS record to be hosted? Can a domain alone have a record, such as an IP, or be an alias to another? A bunch of sites out there offer other services in addition to domain registration (I'm assuming they register the domain through another party for me). One example is "dynamic DNS" (DDNS), but isn't this just a regular DNS record that's updated regularly? Does it cost extra to update more often? Without DDNS, can a DNS record still point to an IP? I've also seen the term "managed DNS" and have no idea where that fits in.
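
    To make the record types concrete, here is how two common entries might look in an authoritative zone file (the IP is a documentation address): an A record maps a name to an IP, while a CNAME aliases one name to another record.

        example.com.      3600  IN  A      203.0.113.10
        www.example.com.  3600  IN  CNAME  example.com.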
