Search Results

Search found 11206 results on 449 pages for 'tag cloud'.


  • How to configure a longer version number in Artifactory

    - by claudine
    The version numbers for our jars have to be longer than x.x.x; we need x.x.x.x to integrate an old-fashioned, self-made mechanism. We tag our software with x.x.x, and as soon as we have a delivery to a customer, one specific jar has to be built at exactly that point in time to fit another backend that communicates with our program. For that reason this one jar has the version 2.3.4.1 when generated, and in the next delivery of the same version it is built and named 2.3.4.2. Artifactory cannot handle this and doesn't save more than x.x.x.2 in some cases. So we thought of editing the regular expression in the Maven repository layout (see attached screenshot), because testing the path in the field below shows that it cannot handle the version number. Of course, x.x.x still has to work for the rest of our jars. For example, here is the maven-metadata.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <metadata>
          <groupId>com.firm</groupId>
          <artifactId>someid</artifactId>
          <version>1.5.1</version>
          <versioning>
            <latest>1.5.1</latest>
            <release>1.5.1</release>
            <versions>
              <version>1.4.62</version>
            </versions>
            <lastUpdated>20120926073942</lastUpdated>
          </versioning>
        </metadata>

    The folder structure looks like:

        someid
            1.4.62
            1.4.62.1
            1.4.62.2
            1.4.62.3

    If we deploy a new artifact version (1.4.62.1), the maven-metadata.xml contains the 1.4.62.1 version, but Artifactory overwrites the version number (1.4.62.x) with (1.4.62) after an unspecified time. It seems that Artifactory only supports major, minor, and revision numbers, and deletes the build number. We are now looking for a way to disable this behavior. We use JFrog Artifactory version 2.5.0 (rev. 13086). Any ideas, maybe? Thanks in advance.
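    If it helps to pin down the pattern itself, a version matcher that accepts both three- and four-part numbers could look like the following (a C# sketch purely for illustration; Artifactory's layout field has its own regex/token syntax, so this is not a drop-in value):

        using System;
        using System.Text.RegularExpressions;

        class VersionPatternDemo
        {
            static void Main()
            {
                // Three mandatory numeric segments, optional fourth build segment.
                var versionPattern = new Regex(@"^\d+\.\d+\.\d+(?:\.\d+)?$");

                Console.WriteLine(versionPattern.IsMatch("1.4.62"));   // True
                Console.WriteLine(versionPattern.IsMatch("1.4.62.3")); // True
                Console.WriteLine(versionPattern.IsMatch("1.4"));      // False
            }
        }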

    Read the article

  • Is my work on a developer's test being taken advantage of?

    - by CodeWarrior
    I am looking for a job and have applied to a number of positions. One of them responded; I had a pretty lengthy phone interview (perhaps an hour plus), and they then set me up with the developer's test. I was told that this test was estimated to take between 6 and 8 hours and that, provided it met with their approval, I could be paid for my work on it. That gave me some pause, but I endeavored.

    The dev test took place on a VM accessed via RDP. The task was to implement a search page in a web project that requests data from the server, displays it on the screen in a table, and has a pretty complicated search-filtering scheme (there are about 15 statuses, and when sending the search to the server you can search by these statuses) in addition to the string/field search. They wanted some SVG icons to change color on certain data values, they wanted some data to be represented differently than how it is in the database, etc. Loooong story short, this took one heck of a lot longer than 6-8 hours. Much of it was due to the very poor VM that I was running on (Visual Studio 2013 took 10 minutes to load, and another 15 minutes to open the 3 GB ginormous solution).

    After completing it, I was told to commit my changes to source control... Hmm, OK. I got an email back saying they thought the SVGs could have their color changed differently, they found a bug in this edge case, there was an occasional problem with this other thing that I never experienced, etc. So I am 13-14 hours into this thing now and I have to do bug fixes. I do them, and they come back with some more. This is all apparently going into a production application. I also noticed some anomalies in the code that was already in there, where it looked like other people had coded all of one functionality and nothing else that I could find.

    Am I just being used for cheap labor? Even if they pay me the promised 50 dollars an hour for 6 hours, I have committed something like 18 hours to this thing now. If I bug-fix all of the stuff they keep coming up with, I will have worked at least 16 hours for free.

    I have taken a number of dev tests. I have never taken one where I worked on code that was destined for production. I have never taken one where I implemented a feature that was in the pipeline for development (it was planned for, and I implemented it through the course of the test). And I have never taken one that took 4 rounds and a total of 20+ hours. I get the impression that they are using their dev test to field some of the functionality that they don't have time for in their normal team, on the cheap.

    Also, I wouldn't mind a 'devtest' tag.

    Read the article

  • Azure - Unable To Get SQL Azure Invitation Code!

    - by Goober
    Scenario: I'm trying to convert my Silverlight business application over to the cloud with the help of Azure. I have been following this link from Brad Abrams' blog. Both the links to Windows Azure and SQL Azure crash out in Google Chrome; they work in Internet Explorer, but it's literally one of the worst user experiences I've ever had.

    The problem: I'm asked to sign in to Microsoft Connect with my Live ID. I do so. I'm then asked to register!? - I do so. I'm then sent a verification email, which I verify. I'm then signed out!?!?! When I sign back in, it repeats the process... ANY IDEAS!??!

    Edit/Update: Finally managed to get signed up/in to Connect. From there I was able to get hold of an invitation code for Windows Azure. Now I need an invitation code for SQL Azure. I cannot see ANYWHERE that advertises a way of getting this SQL Azure code; the only thing I have seen is some text saying that there "may be a delay" in receiving codes due to the volume of interest, which quite frankly I find hard to believe... It's been 3 days so far now. This officially sucks! If I have any more news I'll post back here.

    Read the article

  • Windows Azure: StorageClientException unhandled

    - by veda
    I am writing code to upload large files into blob storage using blocks. When I tested it, it gave me a StorageClientException stating: "One of the request inputs is out of range." I got this exception on this line of the code:

        blob.PutBlock(block, ms, null);

    Here is my code:

        protected void ButUploadBlocks_click(object sender, EventArgs e)
        {
            // store uploaded file as a blob
            if (uplFileUpload.HasFile)
            {
                name = uplFileUpload.FileName;
                byte[] byteArray = uplFileUpload.FileBytes;
                Int64 contentLength = byteArray.Length;
                int numBytesPerBlock = 250 * 1024; // 250 KB per block
                int blocksCount = (int)Math.Ceiling((double)contentLength / numBytesPerBlock); // number of blocks
                MemoryStream ms;
                List<string> BlockIds = new List<string>();
                string block;
                int offset = 0;

                // get reference to the cloud blob container
                CloudBlobContainer blobContainer = cloudBlobClient.GetContainerReference("documents");

                // set the name for the uploaded file
                string UploadDocName = name;

                // get the blob reference and set the metadata properties
                CloudBlockBlob blob = blobContainer.GetBlockBlobReference(UploadDocName);
                blob.Properties.ContentType = uplFileUpload.PostedFile.ContentType;

                for (int i = 0; i < blocksCount; i++, offset = offset + numBytesPerBlock)
                {
                    block = Convert.ToBase64String(BitConverter.GetBytes(i));
                    ms = new MemoryStream();
                    ms.Write(byteArray, offset, numBytesPerBlock);
                    blob.PutBlock(block, ms, null);
                    BlockIds.Add(block);
                }

                blob.PutBlockList(BlockIds);
                blob.Metadata["FILETYPE"] = "text";
            }
        }

    Can anyone tell me how to solve it?
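    Two things in that loop look like likely culprits (a guess from reading the code, not verified against a live storage account): the final iteration calls ms.Write with a count that runs past the end of byteArray whenever the file length is not an exact multiple of the block size, and the MemoryStream is handed to PutBlock with its position at the end rather than the start. A minimal sketch of a corrected loop, reusing the variables from the code above:

        for (int i = 0; i < blocksCount; i++, offset += numBytesPerBlock)
        {
            // Clamp the final block so we never read past the end of the array.
            int bytesInBlock = (int)Math.Min(numBytesPerBlock, contentLength - offset);

            // BitConverter.GetBytes(int) is always 4 bytes, so every block ID has equal length.
            string blockId = Convert.ToBase64String(BitConverter.GetBytes(i));

            // Wrapping the array segment starts the stream at position 0,
            // so PutBlock reads the whole block instead of an empty stream.
            using (var blockStream = new MemoryStream(byteArray, offset, bytesInBlock))
            {
                blob.PutBlock(blockId, blockStream, null);
            }
            BlockIds.Add(blockId);
        }
        blob.PutBlockList(BlockIds);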

    Read the article

  • Can make -j with distcc scale more than 5 times?

    - by holmes
    Since distcc cannot keep state (it can only send jobs and headers, and let the servers use nothing but the data just sent to preprocess and compile), I think the latest distcc has a scalability problem. In my local build environment, which has approx. 10,000 C/C++ files to build, I could only build 2 times faster than without distcc (but still using make -j) when I had 20 build servers. What do you think the problem is?

    If anyone has achieved scalability of more than 10-20 times using make -j and distcc, please let me know. The following product claims that it is impossible to scale make -j plus distcc beyond 5 times: http://www.electric-cloud.com/products/electricaccelerator.php

    I think this could be improved by:

    - letting the distccd server maintain sessions
    - tied to those sessions, having the servers cache their own header directories
    - doing preprocessing on demand on the distccd server
    - implementing this through an LD_PRELOADed library, libdistcc.so, which replaces the stat/open syscalls and fetches the header files over the network
    - ...

    Has anyone done this kind of thing?

    Read the article

  • Navigating through a sea of hype

    - by wouldLikeACrystalBall
    This is a vague, open question, so if you have no interest in such questions, please leave now.

    A few years ago it seemed everyone thought the death of desktop software was imminent. Web applications were the future. Everyone would move to cloud-based software-as-a-service systems, and developing applications for specific end-user platforms like Windows would soon become something of a ghetto. Joel's "How Microsoft Lost the API War" was but one of many such pieces sounding the death knell for this way of software development.

    Flash-forward to 2010, and the hype is all around mobile devices, particularly the iPhone. Software-as-a-service vendors (even small ones such as Y Combinator startups) go out of their way to build custom applications for the iPhone and other smartphones; applications that can be quite sophisticated, that run only on specific hardware and software architectures, and that are thus inherently incompatible.

    Now some of you are probably thinking, "Well, only the decline of desktop software was predicted; mobile devices aren't desktops." But those predicting its demise used the term to mean laptops as well, and really any platform capable of running a browser. What was promised was a world where HTML and related standards would supplant native applications and their inherent difficulties. We would all code to the browser, not the OS. But here we are in 2010, with the App Store bulging and development for the iPad just revving up. A few days ago, I saw someone on Hacker News claim that the future of computing was entirely in small, portable devices. Apparently the future is underpowered, requires dexterous thumbs, and induces near-sightedness.

    How do those who so vehemently asserted one thing now assert the opposite with equal vehemence, without making even the slightest admission of error? And further, how are we as developers supposed to sift through all of this? I bought into the whole web-standards utopianism that was in vogue back in '06-'07 and now feel like it was a mistake. Is there some formula one can apply, rather than a mere appeal to experience?

    Read the article

  • Resources related to data-mining and gaming on social networks

    - by darren
    Hi all. I'm interested in the problem of pattern mining among players of social networking games; for example, detecting cheaters in a game, given a company's user database. So far I have been following the usual recipe for a data mining project:

    - construct a data warehouse that aggregates significant information
    - select a classifier and train it with a subsection of records from the warehouse
    - validate the classifier with another test set
    - lather, rinse, repeat

    Surprisingly, I've found very little in this area in terms of literature, best practices, etc. I am hoping to crowdsource the information-gathering problem here. Specifically, what I'm looking for:

    - What classifiers have worked well for this type of pattern mining? (The data seems highly temporal: users playing games, users receiving rewards, users transferring prizes, etc.)
    - Are there any highly agreed-upon attributes specific to social networking / gaming data?
    - What is a practical amount of information that should be considered? One problem I've run into is data overload, where queries and data cleansing may take days to complete.
    - Related to the point above, what hardware resources are required to produce results? I've found it difficult to estimate the amount of computing power I will require for production use. It has become apparent that a white box in the corner does not have enough horsepower for such a project. Are companies generally resorting to cloud solutions? Are they buying clusters?

    Basically, any resources (theoretical, academic, or practical) about implementing a social networking / gaming pattern-mining program would be very much appreciated. Thanks.

    Read the article

  • Best practice for structuring a new large ASP.NET MVC2 plus EF4 VS2010 solution?

    - by Nick
    Hi, we are building a new web application using Microsoft ASP.NET MVC2 and Entity Framework 4. Although I am sure there is not one right answer to my question, we are struggling to agree on a VS2010 solution structure. The application will use SQL Server 2008, with a possible future Azure cloud version. We are using EF4 with T4 POCOs (model-first) and accessing a number of third-party web services. We will also be connecting to a number of external messaging systems. The UI is based on standard ASP.NET (MVC) with jQuery, and in future we may deliver a Silverlight/WPF version as well as mobile.

    So, put simply: we start with a blank VS2010 solution, then what? I have suggested four folders:

    - Data (the EF edmx file, etc.)
    - Domain (entities, repositories)
    - Services (web-service access)
    - Presentation (web UI, etc.)

    However, under Presentation, creating the ASP.NET MVC2 project obviously creates its own Models folder, etc., and it just doesn't seem to fit too well in this proposed structure. I'm also missing a business layer (or does this sit in the domain?). Again, I am sure there is no one right way to do it, but I'd really appreciate your views on this. Thanks.
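    For what it's worth, one conventional layered arrangement (a sketch only; the Company.Product.* names are placeholders, and this is one option rather than the one right answer) keeps domain entities out of the MVC project and reserves its Models folder for view models:

        MySolution.sln
            Company.Product.Domain    (POCO entities, repository interfaces, business rules)
            Company.Product.Data      (EF4 model and T4 output, repository implementations)
            Company.Product.Services  (third-party web-service and messaging adapters)
            Company.Product.Web       (ASP.NET MVC2 project; Models folder holds view models only)
            Company.Product.Tests

    In this shape the business layer lives in the Domain project (entities plus domain services), and a future Silverlight/WPF or mobile client becomes another presentation head over the same Domain/Data/Services stack.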

    Read the article

  • Silverlight 4, Out of browser, Printing, Automatic updates

    - by minal
    I have a very critical business application presently running in WinForms. The application is a very basic UI shell: it accepts input data, calls a web service on my server to do the computation, displays the results in the WinForms app, and finally sends a print stream to the printer. Presently the application is deployed using ClickOnce.

    Moving forward, I am trying to decide whether I should move the application to Silverlight. A couple of reasons I am considering Silverlight:

    - It gives clients the feel that it is a cloud-based solution and can be accessed from any PC. While the ClickOnce app is able to do this as well, they have to install an app, and when updates are available they have to click "Yes" to update.
    - The application presently has a drop-down list of customers, and this list has expanded to over 3000 records; scrolling through the list is very painful. With Silverlight I am thinking of the auto-complete ability.
    - Out of browser - this will be handy for those users who use the app daily.

    I haven't used Silverlight previously, so I am looking for some expert advice on a few things:

    - Printing: does Silverlight allow sending raw print data to the printer? The application prints to a Zebra thermal label printer, and I have to send raw bytes to the printer with the commands. Can this be done with SL, or will it always prompt the "Print" dialog? (See the sketch after this list.)
    - Out of browser: when SL apps are installed as out-of-browser, how do updates come through? Does the app update automatically, or is the user prompted to opt in to the update?
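    On the printing question, my understanding is that Silverlight 4 has no raw-spool path: printing goes through PrintDocument, which rasterizes a visual tree page by page and always starts with the user-facing print dialog; raw ZPL bytes would instead have to be spooled server-side, e.g. via the existing web service. A minimal sketch (assuming a UIElement named labelRoot that holds the rendered label):

        using System.Windows.Printing;

        var doc = new PrintDocument();
        doc.PrintPage += (s, e) =>
        {
            e.PageVisual = labelRoot; // the element rasterized onto the page
            e.HasMorePages = false;   // a single label per job
        };
        doc.Print("Zebra label");     // shows the print dialog, then spools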

    Read the article

  • opacity and zIndex not getting set when hovered

    - by Catfish
    I'm messing around with a jQuery carousel script, and I'm trying to get it so that when you hover over an image, the size is doubled (which I have working) and the opacity goes to 100. The script is here: http://steph.net23.net/ImageCarousel/

    This is the part I've added to double the width and height, but the opacity is not taking effect. The original script came from here: http://www.devirtuoso.com/2009/08/how-to-create-a-3d-tag-cloud-in-jquery/

        $('#list a img').hover(
          function() {
            clearInterval(go);
            $(this).css('height', '200px');
            $(this).css('width', '400px');
            $(this).css('margin-left', '-100px');
            $(this).css('opacity', '100');
            var opac = $(this).css('opacity');
            $(this).css('zIndex', '0');
            var z = $(this).css('zIndex');
            console.log("opacity = " + opac);
            console.log("zindex = " + z);
          },
          function() {
            go = setInterval(render, 20);
            $(this).css('height', '100px');
            $(this).css('width', '200px');
            $(this).css('margin-left', '0');
          });

    Read the article

  • WCF Service in Azure with ClaimsIdentity over SSL

    - by Sunil Ramu
    Hello. I created a WCF service as a web role using Azure, and a client Windows application which references this service. The cloud service refers to a certificate created using the "Hands On Lab" given in Windows Identity Foundation. The web service is hosted in IIS and works perfectly when executed.

    Since WIF claims identity is used, I have a ClaimsAuthorizationManager class and also a policy class with a set of defined policies; the claims are set in the web.config file. When I execute the Windows app as the startup project, the app prompts for authentication, and when the account credentials are given as in the config file, it opens a new "Windows CardSpace" window and says "Incoming policy failed". When I close the window, the system throws an exception: "The incoming policy could not be validated. For more information, please see the event log."

    Event log details:

        Incoming policy failed validation.
        No valid claim elements were found in the policy XML.
        Additional Information:
        at System.Environment.get_StackTrace()
        at Microsoft.InfoCards.Diagnostics.InfoCardTrace.BuildMessage(InfoCardBaseException ie)
        at Microsoft.InfoCards.Diagnostics.InfoCardTrace.TraceAndLogException(Exception e)
        at Microsoft.InfoCards.Diagnostics.InfoCardTrace.ThrowHelperError(Exception e)
        at Microsoft.InfoCards.InfoCardPolicy.Validate()
        at Microsoft.InfoCards.Request.PreProcessRequest()
        at Microsoft.InfoCards.ClientUIRequest.PreProcessRequest()
        at Microsoft.InfoCards.Request.DoProcessRequest(String& extendedMessage)
        at Microsoft.InfoCards.RequestFactory.ProcessNewRequest(Int32 parentRequestHandle, IntPtr rpcHandle, IntPtr inArgs, IntPtr& outArgs)

    Details from the event entry:

        Provider:      CardSpace 3.0.0.0
        EventID:       267 (Qualifiers 49157)
        Level:         2
        Task:          1
        Keywords:      0x80000000000000
        EventRecordID: 6996
        Channel:       Application
        EventData:     No valid claim elements were found in the policy XML.
                       (followed by the same stack trace as above)

    Read the article

  • Error mounting CloudDrive snapshot in Azure

    - by Dave
    Hi, I've been running a cloud drive snapshot in dev for a while now with no problems. I'm now trying to get this working in Azure, and I can't for the life of me get it to work. This is my latest error:

        Microsoft.WindowsAzure.Storage.CloudDriveException: Unknown Error HRESULT=D000000D
        ---> Microsoft.Window.CloudDrive.Interop.InteropCloudDriveException: Exception of type 'Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException' was thrown.
        at ThrowIfFailed(UInt32 hr)
        at Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDrive.Mount(String url, SignatureCallBack sign, String mount, Int32 cacheSize, UInt32 flags)
        at Microsoft.WindowsAzure.StorageClient.CloudDrive.Mount(Int32 cacheSize, DriveMountOptions options)

    Any idea what is causing this? I'm running both the worker role and storage in Azure, so it's nothing to do with the dev-simulation-environment disconnect. This is my code to mount the snapshot:

        CloudDrive.InitializeCache(localPath.TrimEnd('\\'), size);

        var container = _blobStorage.GetContainerReference(containerName);
        var blob = container.GetPageBlobReference(driveName);
        CloudDrive cloudDrive = _cloudStorageAccount.CreateCloudDrive(blob.Uri.AbsoluteUri);

        string snapshotUri;
        try
        {
            snapshotUri = cloudDrive.Snapshot().AbsoluteUri;
            Log.Info("CloudDrive Snapshot = '{0}'", snapshotUri);
        }
        catch (Exception ex)
        {
            throw new InvalidCloudDriveException(string.Format(
                "An exception has been thrown trying to create the CloudDrive '{0}'. This may be because it doesn't exist.",
                cloudDrive.Uri.AbsoluteUri), ex);
        }

        cloudDrive = _cloudStorageAccount.CreateCloudDrive(snapshotUri);
        Log.Info("CloudDrive created: {0}", snapshotUri, cloudDrive);
        string driveLetter = cloudDrive.Mount(size, DriveMountOptions.None);

    The .Mount() method at the end is what's now failing. Please help, as this has me royally stumped! Thanks in advance. Dave
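    One thing worth checking (a guess, not a confirmed diagnosis): in a real role the cache location and size have to come from a LocalResource declared in the service definition, and the size passed to Mount has to fit within the cache that was initialized. A sketch of that shape, assuming a local resource named "DriveCache" is declared in ServiceDefinition.csdef:

        // Assumes: <LocalStorage name="DriveCache" sizeInMB="128" cleanOnRoleRecycle="false" />
        LocalResource cache = RoleEnvironment.GetLocalResource("DriveCache");
        CloudDrive.InitializeCache(cache.RootPath.TrimEnd('\\'), cache.MaximumSizeInMegabytes);

        var blob = container.GetPageBlobReference(driveName);
        CloudDrive drive = _cloudStorageAccount.CreateCloudDrive(blob.Uri.AbsoluteUri);

        string snapshotUri = drive.Snapshot().AbsoluteUri;
        CloudDrive snapshot = _cloudStorageAccount.CreateCloudDrive(snapshotUri);

        // Mount with a cache size no larger than what InitializeCache set up.
        string driveLetter = snapshot.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);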

    Read the article

  • Why does Fabric display the "Disconnecting from server" message for almost 2 minutes?

    - by Matthew Rankin
    Fabric displays "Disconnecting from username@server... done." for almost 2 minutes before showing a new command prompt whenever I issue a fab command. This problem exists with Fabric commands issued both to an internal server and to a Rackspace cloud server. Below I've included the auth.log from the server; I didn't see anything in the logs on my MacBook. Any thoughts as to what the problem is?

    Server's SSH auth.log with LogLevel VERBOSE:

        Apr 21 13:30:52 qsandbox01 sshd[19503]: Accepted password for mrankin from 10.10.100.106 port 52854 ssh2
        Apr 21 13:30:52 qsandbox01 sshd[19503]: pam_unix(sshd:session): session opened for user mrankin by (uid=0)
        Apr 21 13:30:52 qsandbox01 sudo: mrankin : TTY=unknown ; PWD=/home/mrankin ; USER=root ; COMMAND=/bin/bash -l -c apache2ctl graceful
        Apr 21 13:30:53 qsandbox01 sshd[19503]: pam_unix(sshd:session): session closed for user mrankin

    Server configuration:

    - OS: Ubuntu 9.10
    - OpenSSH: Ubuntu package version 1:5.1p1-6ubuntu2

    Client configuration:

    - OS: Mac OS X 10.6.3
    - Fabric 0.9
    - Virtualenv 1.4.7
    - pip 0.7

    Thoughts on the cause of the issue: I don't know how long the problem has existed, but I know that at one point I didn't have it. What has changed since then is that I have recreated my virtualenvs using virtualenv 1.4.7, virtualenvwrapper 2.1, and pip 0.7. Not sure if this is related, but it is a thought, since I run my fabfiles from within a virtualenv.

    Read the article

  • Testing a wide variety of computers with a small company

    - by Tom the Junglist
    Hello everyone. I work for a small dotcom which will soon be launching a reasonably complicated Windows program. We have uncovered a number of "WTF?"-type scenarios, which have turned up as the program was passed around to the various non-technical types, that we've been unable to replicate.

    One of the biggest problems we're facing is testing: there are a total of three programmers -- only one working on this particular project, me -- no testers, and a handful of assorted other staff (sales, etc.). We are also geographically isolated. The "testing lab" consists of a handful of VMware and VPC images running sort-of fresh installs of Windows XP and Vista, which run on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to most effectively report problems, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us their reporting is only so useful, and arranging remote-control sessions to dig into the guts of their computers is time-consuming.

    I am looking for resources that would let us amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look into them, but they are still largely self-service, and I was wondering if there are any better ways. How have you, or other companies in a similar situation, handled this sort of thing?

    EDIT: According to the lingo, our goal here is to expand our systems-testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much, as we are consistently hitting walls at the systems-testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2?

    Tom

    Read the article

  • How can I use PHP Mail() function within PHP-FPM? On Nginx?

    - by TheBlackBenzKid
    I have searched everywhere for this and I really want to resolve it. In the past I just ended up using an SMTP service like SendGrid for PHP and a mailing plugin like SwiftMailer. However, I want to use plain PHP. Basically, my setup (I am new to server setup, and this is my personal setup following a tutorial):

    - Nginx
    - Rackspace Cloud
    - PHP 5.3
    - PHP-FPM
    - Ubuntu 11.04

    My phpinfo() returns this for the mail entries:

        mail.log                      no value
        mail.add_x_header             On
        mail.force_extra_parameters   no value
        sendmail_from                 no value
        sendmail_path                 /usr/sbin/sendmail -t -i
        SMTP                          localhost
        smtp_port                     25

    Can someone help me figure out why mail() will not work? My script works on all my other sites; it is a normal mail command. Do I need to set up logs, or enable some PHP port on the server? Here is my sample script:

        <?
        # FORM VARS
        // $to = $customers_email;
        // $to = $customers_email;
        $to = $_GET["customerEmailFromForm"];
        $subject = "Thank you for contacting Real-Domain.com";
        $message = "
        <html>
          <head>
          </head>
          <body>
            Thanks, your message was sent and our team will be in touch shortly.
            <img src='http://cdn.com/emails/thank_you.jpg' />
          </body>
        </html>
        ";
        $headers = "MIME-Version: 1.0" . "\r\n";
        $headers .= "Content-type:text/html;charset=iso-8859-1" . "\r\n";
        $headers .= 'From: <[email protected]>' . "\r\n";

        // SEND MAIL
        mail($to, $subject, $message, $headers);
        ?>

    Thanks

    Read the article

  • How to stop ejabberd from using mnesia

    - by Eldad Mor
    I'm trying to establish a procedure for restoring my database from a crashed server to a new server. My server runs ejabberd as an XMPP server, and I configured it to use PostgreSQL instead of Mnesia - or so I thought. My procedure goes something like "dump the contents of the original DB, run the new server, restore the contents of the DBs using psql, then run the system". However, when I try running ejabberd again I get a crash:

        =CRASH REPORT==== 3-Dec-2010::22:05:00 ===
        crasher:
          pid: <0.36.0>
          registered_name: []
          exception exit: {bad_return,{{ejabberd_app,start,[normal,[]]},
                          {'EXIT',"Error reading Mnesia database"}}}
          in function application_master:init/4

    Here I was thinking that my system was running on PostgreSQL, while it seems I was still using Mnesia. I have several questions:

    - How can I make sure Mnesia is not being used?
    - How can I divert all the ejabberd activities to PostgreSQL?

    This is the modules part of my ejabberd.cfg file:

        {modules,
         [
          {mod_adhoc,    []},
          {mod_announce, [{access, announce}]}, % requires mod_adhoc
          {mod_caps,     []},
          {mod_configure,[]},                   % requires mod_adhoc
          {mod_ctlextra, []},
          {mod_disco,    []},
          {mod_irc,      []},
          {mod_last_odbc, []},
          {mod_muc, [
                     {access, muc},
                     {access_create, muc},
                     {access_persistent, muc},
                     {access_admin, muc_admin},
                     {max_users, 500}
                    ]},
          {mod_offline_odbc, []},
          {mod_privacy_odbc, []},
          {mod_private_odbc, []},
          {mod_pubsub, [ % requires mod_caps
                        {access_createnode, pubsub_createnode},
                        {plugins, ["default", "pep"]}
                       ]},
          {mod_register, [
                          {welcome_message, none},
                          {access, register}
                         ]},
          {mod_roster_odbc, []},
          {mod_stats,    []},
          {mod_time,     []},
          {mod_vcard_odbc, []},
          {mod_version,  []}
         ]}.

    What am I missing? I am assuming the crash is due to the Mnesia DB being used by ejabberd, and since it's out of sync with the PostgreSQL DB, it cannot operate correctly - but maybe I'm totally off track here, and would love some direction.

    EDIT: One problem solved. Since I'm using the Amazon cloud, I needed to hardcode the ERLANG_NODE so it won't be derived from the hostname (which changes on reboot). This got my ejabberd running, but I still wish to stop using Mnesia, and I wonder what part of ejabberd is still using it and how I can find it.

    Read the article

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background: I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF and ASP.NET/C# for the server, and SQL Server for storage. The data model requires some flexibility per user (the ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely.

    Problem: I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF, and similarly to allow the client to bind to the resulting data (DataGrid is a key UI element). Admission: I don't yet know enough about WCF to understand whether it supports ExpandoObject directly, but I suspect it will.

    Options: I started off looking at WCF RIA Services, but quickly discovered they're heavily dependent on both static type data and compile-time code generation. Neither of these appeals. The options I'm considering include:

    1. Using WCF RIA Services and passing the data over the network directly in EAV form (i.e. Dictionary), handling the binding issue purely on the client side (like this; a wire-format sketch follows below).
    2. Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and to transform to/from EAV form on the server (the spam preventer stopped me from posting a URL here; I'll try it in a comment).
    3. Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't know what the limitations of that approach might be yet.

    I'm interested in comments on these approaches, as well as alternative approaches that might solve the problem.

    Other notes: The Silverlight client will not be the only consumer of the service, making me slightly uncomfortable with option #1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).
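    To make option #1 concrete, a minimal sketch of shipping an EAV row over WCF as a property bag (DataContractSerializer handles Dictionary<string, string> natively; all names here are illustrative, not an established contract):

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class EntityDto
        {
            [DataMember] public int EntityId { get; set; }
            [DataMember] public string EntityType { get; set; }

            // EAV attributes as a property bag; values travel as strings and are
            // coerced on the client using the per-user model metadata.
            [DataMember] public Dictionary<string, string> Attributes { get; set; }
        }

        [ServiceContract]
        public interface IEntityService
        {
            [OperationContract]
            IList<EntityDto> Search(string entityType, Dictionary<string, string> criteria);
        }

    The client-side binding problem then reduces to projecting each Attributes bag into something a DataGrid can consume, e.g. an indexer-based wrapper, or generated CLR types as in option #2.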

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure! I was wondering if there is a definitive list somewhere of what is and is not supported by SQL Azure compared to SQL Server 2008. I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry, http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html, which lists the following as unsupported:

    - Common Language Runtime (CLR)
    - Database file placement
    - Database mirroring
    - Distributed queries
    - Distributed transactions
    - Filegroup management
    - Global temporary tables
    - Spatial data and indexes
    - SQL Server configuration options
    - SQL Server Service Broker
    - System tables
    - Trace flags

    This is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx. However, I've noticed from my own testing that the following also seem to have issues when migrating from SQL Server 2008 to Azure:

    - XML types (the MSDN page does mention large custom types - I guess that may include these, even if the data schema is really small?)
    - Multi-part views

    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list, or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • Old desktop programmer wants to create S+S project

    - by Craig
    I have an idea for a product that I want to be web-based. But because I live in Brasil, the internet is not always available, so there needs to be a desktop component for when the internet is down. I have been a SQL programmer and a desktop application programmer using dBase, VB, and Pascal, and I have created simple websites using HTML and website-creation tools such as FrontPage.

    From my research, I think I have the following options: PHP, Ruby on Rails, Python, or .NET for the programming side; MySQL for the DB; and Apache, or possibly IIS, for the web server. I will probably start with a local ISP provider for the cloud service, but then maybe move to something more "robust" and universal in the future, i.e. Amazon, or Azure, or something along that line.

    My question then is this: what would you recommend for something like this? I'm sure I have not listed all of the possibilities, only the ones I have researched and thought of. Thanks everyone, Craig

    Read the article

  • SVN Serve, Missing a Directory

    - by Ryan Smith
    I'm sure this is an asinine question, and I blame myself for not fully understanding how the svnserve process works. I have an SVN repo, but it needs to be moved to a server within a client's cloud. I did this a while back and ran into the issue of the SVNSERVE.exe process not getting pointed at the right directory.

    I now have the SVNSERVE.exe process running as a Windows service and pointing to the right directory. There are two other repos there that are served out fine from the same directory. I copied out the new repository just like I did with the others, but I'm getting the error "No repository found". I thought that svnserve just looked at that directory and served out the repositories that were there, but I have had a hard time finding more information about that. I thought it was a Windows permission problem, but I set the whole folder to full control for EVERYONE, so that's not it.

    I feel horrible that I didn't fully understand this problem the first time I fought it, but it's late on a Sunday night and clients are yelling. Anyone know what I'm missing? Thanks.

    EDIT: It's specific to the repository. I tested the same process with some of the other repos we have on our server, and when I copied them up, they worked just as expected. This bug is breaking me, and I wish I could provide more details, but that's all I know. I'm going to try an SVN dump instead of an XCOPY and see how that goes. I'll let you know.

    Read the article

  • How to setup Continuous Integration and Continuous Deployment for Django projects?

    - by ycseattle
    Hello, I am researching how to set up CI and continuous deployment for a small-team Django-based web application. Here are the needs:

    1. A developer checks code into a hosted SVN server (unfuddle.com).
    2. A CI server detects the new checkin, checks out the source, builds, and runs functional tests.
    3. If all tests pass, it deploys the code to the web server on Amazon EC2.

    For now, the CI server is also responsible for running the functional tests. I figured out that I can use Hudson as the CI server, Selenium to run functional tests, and Fabric to deploy the build to the remote web server in the Amazon cloud. I am new to Django development and not very familiar with open-source tools. My questions are:

    - I can find some information on integrating Hudson with Selenium, but I couldn't find much on integrating Fabric with Hudson. Is this setup viable? Do you see problems?
    - How do I integrate and deploy database changes? Most likely in the early stages we will change the database schema very often along with code changes. I used to use Visual Studio, and the database project made deployment very simple. I wonder if there is an "established, well-supported" way to do that.

    Thanks!!

    Read the article

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have:

    - Database server: physical box, 12 GB RAM, 2 x quad-core CPU (2.13 GHz Xeon E5506)
    - 2 web/app servers: cloud servers, 2 GB RAM, 2 VCPUs
    - 300 GB monthly bandwidth

    I am paying around $900/month for this. My web/app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400.

    I started looking at Amazon EC2, specifically this config:

    - Database server: "High-Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units
    - 2 web/app servers: "Large Instance" - 7.5 GB RAM, 4 EC2 compute units

    If I go with 1-year reserved instances, my upfront cost would be $4,500 and my monthly cost would be $700, which comes to $1,075/month when the upfront cost is amortized over the year. Amazon also includes a firewall for free. Here are my questions:

    - Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is disk I/O.
    - Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance on the database servers would compare?

    I am hoping that the Amazon solution will provide significantly better performance than my current, or even my improved, GoGrid setup. Having a virtual database server would also be nice in terms of availability; right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

    Read the article

  • Display pdf file inline in Rails app

    - by Martas
    Hi, I have a PDF file attachment saved in the cloud. The file is attached using attachment_fu. All I do to display it in the view is:

        <%= image_tag @model.pdf_attachment.public_filename %>

    When I load the page with this code in the browser, it does what I want: it displays the attached PDF file. But only on Mac; on Windows, browsers display a broken-image placeholder, and Chrome's Developer Tools report: "Resource interpreted as image but transferred with MIME type application/pdf."

    I also tried sending the file from the controller. In PdfAttachmentController:

        def send_pdf_attachment
          pdf_attachment = PdfAttachment.find params[:id]
          send_file pdf_attachment.public_filename,
                    :type => pdf_attachment.content_type,
                    :file_name => pdf_attachment.filename,
                    :disposition => 'inline'
        end

    In routes.rb:

        map.send_pdf_attachment '/pdf_attachments/send_pdf_attachment/:id',
                                :controller => 'pdf_attachments',
                                :action => 'send_pdf_attachment'

    And in the view:

        <%= send_pdf_attachment_path @model.pdf_attachment %>

    or

        <%= image_tag( send_pdf_attachment_path @model.pdf_attachment ) %>

    That doesn't display the file on Mac (I didn't try Windows); it displays the path: pdf_attachments/send_pdf_attachment/35

    So, my question is: what do I do to properly display a PDF file inline? Thanks, Martin

    Read the article

  • How to achieve high availability?

    - by tanyehzheng
    My boss wants a system designed with continent-wide catastrophic events in mind. He wants two servers in the US and two servers in Asia (one login server and one worker server on each continent). The requirements:

    - In the event that an earthquake breaks the connection between the two continents, each side should work alone. When the connection is restored, they should sync with each other back to normal.
    - An external cloud system is not allowed, as he has no confidence in it.
    - The system should take scalability into account, which means the addition of new servers should be easy to configure.
    - The servers should be load-balanced.
    - The connection between the servers should be very secure (encrypted and sent through SSL, although SSL takes care of the encryption).
    - The system should let one and only one user log in per account. (Beware of the latency between continents: two users sharing an account may reach both login servers at the same time.)

    Please help; I'm already at my wit's end. Thank you in advance.

    Read the article

  • Excessive httpd processes stacking up on my Rails + Apache2 + Passenger production setup

    - by LeoAlmighty
    I have a Rails + Apache2 + Postgres + Passenger application running in production mode on OS X Snow Leopard. The application serves as a data warehouse for another application in the cloud, so I'm constantly getting API calls to my OS X production build. After a recent reboot, I'm finding a ton of httpd processes stacking up, eventually requiring an Apache restart. I haven't changed any settings; everything was running fine before. Any ideas on the best way to troubleshoot this?

        $ ps -ef | grep httpd
         0  6203     1   0  0:00.20 ??  0:00.47 /usr/sbin/httpd -D FOREGROUND
        70  6222  6203   0  0:00.05 ??  0:00.11 /usr/sbin/httpd -D FOREGROUND
        70  6224  6203   0  0:00.31 ??  0:00.50 /usr/sbin/httpd -D FOREGROUND
        70  6233  6203   0  0:00.05 ??  0:00.10 /usr/sbin/httpd -D FOREGROUND
        70  6234  6203   0  0:00.43 ??  0:00.64 /usr/sbin/httpd -D FOREGROUND
        70  6243  6203   0  0:00.02 ??  0:00.03 /usr/sbin/httpd -D FOREGROUND
        70  6319  6203   0  0:00.08 ??  0:00.16 /usr/sbin/httpd -D FOREGROUND
        70  6334  6203   0  0:00.02 ??  0:00.05 /usr/sbin/httpd -D FOREGROUND
        70  6469  6203   0  0:00.04 ??  0:00.08 /usr/sbin/httpd -D FOREGROUND
        70  6487  6203   0  0:00.36 ??  0:00.48 /usr/sbin/httpd -D FOREGROUND
        70  6593  6203   0  0:00.36 ??  0:00.48 /usr/sbin/httpd -D FOREGROUND
        70  6709  6203   0  0:00.04 ??  0:00.08 /usr/sbin/httpd -D FOREGROUND
        70  6718  6203   0  0:00.04 ??  0:00.10 /usr/sbin/httpd -D FOREGROUND
        70  6834  6203   0  0:00.01 ??  0:00.03 /usr/sbin/httpd -D FOREGROUND
        70  6852  6203   0  0:00.00 ??  0:00.00 /usr/sbin/httpd -D FOREGROUND
        70  6853  6203   0  0:00.01 ??  0:00.02 /usr/sbin/httpd -D FOREGROUND

    Read the article
