Assuming that it's even possible, what would be your recommendations for making a bundle compatible between different platform releases? Especially between R3 and R4.
I'm moving my application from 3.x to 4.x as I prepare for the App Store and found that I need to have two copies of all my custom PNGs. My question is: how can I determine which image to show, and when? For example, how do I know to show Find.png vs. Find@2x.png?
Is it "safe" or "correct" to look for 4.x-specific APIs, or does the iPhone have a way to determine what platform you are on at runtime?
Thank you in advance
Hello, I am building a static website (as in, to change a page we change the HTML; there is no DB or anything). It will have a number of pages, and I don't want to copy and paste the HTML navigation and layout code around everywhere.
So what would be the best platform to use in this situation so I can have all my layout and "common" HTML markup all in one place?
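To give an idea of what I'm after, here is a rough sketch of the kind of build step I imagine, using Python with Jinja2 purely as an example (the file and directory names here are made up):

    from pathlib import Path
    from jinja2 import Environment, FileSystemLoader

    # layout.html holds the shared navigation and common markup
    env = Environment(loader=FileSystemLoader("templates"))
    layout = env.get_template("layout.html")

    pages = {"index.html": "Home", "about.html": "About"}
    Path("site").mkdir(exist_ok=True)
    for filename, title in pages.items():
        body = Path("content", filename).read_text()   # per-page HTML fragment
        html = layout.render(title=title, body=body)   # wrap it in the shared layout
        Path("site", filename).write_text(html)

Is a generator along these lines the usual answer, or is there a better platform for this?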
Does anybody have experience with the Magento Community edition and Apache OFBiz? Could you please share your impressions with me? I am trying to find a free e-commerce platform to start with. OFBiz uses Java; I don't know what language Magento uses. Thanks,
I'm writing a System::Wrapper module to abstract away from CORE::system and the qx operator. I have a serial method that attempts to connect command1's output to command2's input. I've made some progress using named pipes, but POSIX::mkfifo is not cross-platform.
Here's part of what I have so far (the run method at the bottom basically calls system):
package main;

my $obj1 = System::Wrapper->new(
    interpreter => 'perl',
    arguments   => [-pe => q{''}],
    input       => ['input.txt'],
    description => 'Concatenate input.txt to STDOUT',
);

my $obj2 = System::Wrapper->new(
    interpreter => 'perl',
    arguments   => [-pe => q{'$_ = reverse $_'}],
    description => 'Reverse lines of input',
    output      => { '>' => 'output' },
);

$obj1->serial( $obj2 );
package System::Wrapper;

use Carp qw(croak);
use File::Spec;
use Scalar::Util qw(refaddr);

#...

sub serial {
    my ($self, @commands) = @_;

    # POSIX (for mkfifo) and threads are loaded lazily at run time
    eval {
        require POSIX; POSIX->import();
        require threads;
    };

    my $tmp_dir = File::Spec->tmpdir();
    my $last    = $self;
    my @threads;

    push @commands, $self;

    for my $command (@commands) {
        croak sprintf
            "%s::serial: type of args to serial must be '%s', not '%s'",
            ref $self, ref $self, ref $command || $command
            unless ref $command eq ref $self;

        # name each fifo after the object's address so it is unique
        my $named_pipe = File::Spec->catfile( $tmp_dir, refaddr $command );

        POSIX::mkfifo( $named_pipe, 0777 )
            or croak sprintf
                "%s::serial: couldn't create named pipe %s: %s",
                ref $self, $named_pipe, $!;

        # wire the previous command's output to this command's input
        $last->output( { '>' => $named_pipe } );
        $command->input( $named_pipe );

        push @threads, threads->new( sub { $last->run } );
        $last = $command;
    }

    $_->join for @threads;
}

#...
My specific questions:
Is there an alternative to POSIX::mkfifo that is cross-platform? Win32 named pipes don't work, as you can't open them as regular files; sockets don't work either, for the same reason.
The above doesn't quite work; the two threads get spawned correctly, but nothing flows across the pipe. I suppose that might have something to do with pipe deadlocking or output buffering. What throws me off is that when I run those two commands in the actual shell, everything works as expected.
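For comparison, the effect I'm after is exactly what anonymous pipes give you. Just to illustrate the plumbing (this is not my module's API), the same chain in Python would be:

    import subprocess

    # mirror the two perl -pe invocations above, joined by an anonymous pipe
    p1 = subprocess.Popen(["perl", "-pe", ""], stdin=open("input.txt"),
                          stdout=subprocess.PIPE)
    with open("output", "w") as out:
        p2 = subprocess.Popen(["perl", "-pe", "$_ = reverse $_"],
                              stdin=p1.stdout, stdout=out)
    p1.stdout.close()  # so p2 sees EOF once p1 exits
    p2.wait()

A chain like that is portable, but I'd like to keep the fifo-based design if I can make it work across platforms.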
I'm using the following code to hide stderr on Linux/OSX for a Python library I do not control that writes to stderr by default:
f = open("/dev/null","w")
zookeeper.set_log_stream(f)
Is there an easy cross platform alternative to /dev/null? Ideally it would not consume memory since this is a long running process.
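To make it concrete, this is the shape of thing I'm hoping exists (os.devnull is my guess at the right cross-platform spelling; I believe it maps to /dev/null on POSIX and nul on Windows):

    import os

    # an OS-level sink: nothing accumulates in this process, so memory stays flat
    f = open(os.devnull, "w")
    zookeeper.set_log_stream(f)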
What are the best practices for writing a cross-platform library in C++?
My development environment is Eclipse CDT on Linux, but my library should also compile natively on Windows (with Visual C++, for example).
I need to write an extremely lightweight program (trying to get below 8 KB) that performs some simple math. The language also needs to be platform-independent. Which language do you think would work best? (Oh, and no frameworks allowed.)
Why did Sun (now owned by Oracle, I know) develop the Java Platform? How does it make business sense? It seems to me like it would be a very expensive project (also, any ideas on how much they spent/are spending to develop/maintain the platform?). Are they making money by selling support or something?
Background:
Currently, we use Rackspace cloud servers. We have no intention of stopping using them, but we would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8 GB of memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price we pay in one month to rent them on Rackspace Cloud.
I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem.
Currently, we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service, which should be enough.
Requirements:
Each server runs CentOS 6.3
Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ)
Some of the servers are capable of serving static files and Python driven REST APIs
Some of the servers host a Cassandra database cluster
One or more of the servers are Redis database servers
One or more of the servers are PostgreSQL servers
Questions:
What kind of router or switch is needed?
We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with servers hosting Redis that need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together?
Are desktop computers OK for this?
We have found that we are mostly RAM-bottlenecked. I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers.
Will we have problems with the Wi-Fi cards in the desktops, or any other unexpected hardware limitations?
What tools should be used to "image" the servers?
For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with CentOS 6.3 to image the server to a USB drive or something like that?
Or do we need to use some other software for this?
What other things are we missing that we should be concerned about?
Thanks so much!
My business has a rather unique problem. We work in China, and we want to implement a file server paradigm which does not store any files locally, but rather on a server overseas. Applications would be saved onto our local machines, but data would be loaded directly into memory from the cloud, e.g., I load a .docx into Word at the beginning of the day, save periodically to the cloud as I work on it, and turn off my computer at night, with nothing saved locally. Considering recent events, we worry about being raided by the Chinese authorities, and although all our data is encrypted, it would not be hard for the authorities to force us to give up the keys. So the goal is not to have anything compromising physically in China.
We have about 20 computers, and we need an authenticated, encrypted connection with this overseas file server. A system with Active-Directory-like permissions would be best, so that only management can read or write to certain files, workers can only access files that relate to their projects, and all access can be cut off should the need arise. The file server itself would also need to be encrypted. And for convenience, it would be nice if this system was integrated with each computer's file explorer (like SkyDrive or Dropbox do, but, again, without saving a copy locally), rather than accessed through a browser.
I can't find any solution online. Does anyone know of a service that does this? Otherwise I'll have to do it myself (which kinda sounds fun, but I don't really have the time), and I'm not sure where to start. Amazon, maybe. But the protocols that offices would use on their intranet typically aren't encrypted; we need all traffic securely tunneled out of the country. Each computer already has a VPN to a server in California, but I'm unsure whether it would be efficient to pipe file transfers through it. Let me know if anyone has any ideas.
And this is my first post; feel free to say whether this question is inappropriate or needs to be posted elsewhere.
The Rackspace cloud server tech tells me my CentOS 5.4 VPS (Xen) runs "CentOS with an Ubuntu kernel".
Could someone explain, in plain terms, what "CentOS with an Ubuntu kernel" means, and whether there are any disadvantages (performance, management) compared to running CentOS with a CentOS kernel?
Thanks
I am trying to set up a failover secondary MySQL server that is a mirror of my primary MySQL server using DRBD. The problem is that I am on a Rackspace Cloud server, and I need a second partition on both the primary and secondary servers that I will replicate with DRBD. Rackspace does not allow me to create a second partition; I am left with the default single partition. How can I mirror using DRBD?
If you were launching a new app today, with all the choices what would you choose?
Cloud Hosting (Heroku, AppFog)
VPS Hosting (just about anybody)
Dedicated Servers (The Planet, RackSpace, etc.)
I know this can be a very subjective question, but let's just go with the broad strokes here. Let's say you had an app, you don't know how it's going to do, but you want to be prepared in case it does take off; what would you go with?
I've installed Ubuntu Enterprise Cloud on a server, and I'm able to bring up an instance from an image; the instance shows that it is running. I see the IPs allocated to that instance, but for some reason I can't access it via SSH.
euca-describe-groups shows:
GROUP admin default default group
PERMISSION admin default ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0
I'm on the same network as the instance, so I'm sure it is not a networking problem (like routers, switches, etc.).
Any ideas?
Puppet requires certificates between the client (puppet) being managed and the server (puppetmaster). You can run the client manually and then go onto the server to sign the certificate, but how do you automate this process for clusters / cloud machines?
This question is quite general, but most specifically I'm interested in knowing whether a virtual machine running Ubuntu Enterprise Cloud (which is Xen-based) will be any slower than the same physical machine without any virtualization. How much (1%, 5%, 10%)?
Has anyone measured the performance difference of a web server or DB server (virtual vs. physical)?
I'm about to launch a new site that allows users to upload and stream audio and video, and I don't know anything about the server side of things. My original plan was to just use a dedicated server through HostGator, but from what I'm reading, cloud hosting or a load-balanced cluster is the best way to go for what I'm trying to do. All the articles seem to have an agenda to sell you on an affiliate web host, so how do I really need to do this?
Specifically I'm looking for techniques for scaling a web application which has no central database server, in the cloud, but general advice is great.
I have come across GlusterFS, which looks great, but I'm not yet clear how it fits into the architecture of a web application. This is also interesting to me.
Thanks for the advice and links.
I have a problem installing Apache on an Ubuntu server running in a virtual machine (one of the so-called cloud hostings).
The installation went smoothly and Apache is started, but I can't access it through http://84.51.250.58 (just to see the first "It works!" page), nor can I ping, let's say, google.com from the shell in the remote viewer.
It's a brand-new installation, so it should work; am I missing something?
A client asked if we can host our application for them on the Amazon cloud. The app has a database running on MS SQL Server which is approximately 20 GB in size.
We need to update the database almost every night and approximately 75% of all data is overwritten each time.
Any idea whether Amazon EC2 can reliably handle a load like that?
Hi,
I am implementing a tag cloud system based on this recommendation. (However, I am not using foreign keys)
I am allowing up to 25 tags. My question is, how can I handle editing of items? I have this item adding/editing page:
Title:
Description:
Tags: (example data) computer, book, web, design
If someone edits an item's details, do I need to delete all the tags from the Item2Tag table first, then add the new elements? For instance, someone changed the data to this:
Tags: (example data) computer, book, web, newspaper
Do I need to delete all the tags from the Item2Tag table, and then add these elements? It seems inefficient, but I couldn't find a more efficient way.
The other problem with this approach is that if someone edits the description but does not change the tags box, I still need to delete all the elements from the Item2Tag table and re-add the same elements.
I am not an experienced PHP coder, so could you suggest a better way to handle this? (A pure PHP/MySQL solution is preferable.)
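To make the question concrete, this is the kind of comparison I imagine might help, sketched in Python just to show the logic (I don't know how to do this cleanly in PHP/MySQL; the example tags are the ones from above):

    old_tags = {"computer", "book", "web", "design"}     # tags currently stored for the item
    new_tags = {"computer", "book", "web", "newspaper"}  # tags submitted from the edit form

    to_remove = old_tags - new_tags  # {"design"}    -> delete only these Item2Tag rows
    to_add    = new_tags - old_tags  # {"newspaper"} -> insert only these Item2Tag rows

    # if only the description changed, both sets are empty
    # and the Item2Tag table is left untouched
    print(to_remove, to_add)

Is a diff like this the usual approach, or is wholesale delete-and-reinsert actually fine in practice?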
Thanks in advance,
Hi,
I have a Google App Engine application that I want to behave differently depending on whether it is running in my local dev environment (i.e., with dev_appserver.py) or in the actual GAE cloud.
Currently I use a flag variable that I manually toggle to achieve that. But I am sure that one day I will forget to change it, and that will lead to problems. So I would like to know if there is an API or some other way to figure out where the GAE app is actually running.
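One thing I'm considering is checking the SERVER_SOFTWARE environment variable, which I understand the dev server sets to a value starting with "Development", along these lines:

    import os

    def is_dev_server():
        # dev_appserver.py reportedly sets SERVER_SOFTWARE to "Development/x.y",
        # while production uses "Google App Engine/x.y"
        return os.environ.get("SERVER_SOFTWARE", "").startswith("Development")

Is that reliable, or is there an official API for this?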
Thanks.
Hi,
I want to build a tag cloud like this one in my Flex application.
See image: http://dl.dropbox.com/u/72686/tagCloud.png
At the moment I have the tags (which are mx.controls.LinkButtons) added at the same position, with different sizes according to their values (stored in an ArrayComponent).
In my visualization, the orange tags are supposed to be listed vertically in the middle. The gray tags should be at different distances from them (according to stored numeric values).
I want to avoid overlapping and cluttering.
How do you suggest computing the x and y of the gray tags, taking care of:
the distance constraints from the orange tags
avoiding overlap between them
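To show the kind of computation I mean, here is a rough placement sketch in Python (the text metrics, the 72 angle steps, and the first-free-angle search are all just assumptions of mine, not something I have working in Flex):

    import math

    # each gray tag: (label, font size, desired distance from the center)
    tags = [("web", 14, 60), ("flex", 10, 90), ("cloud", 18, 120)]

    placed = []  # bounding boxes (x, y, w, h) already laid out

    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def place(label, size, distance, cx=300.0, cy=200.0):
        # walk around a circle of radius `distance` and take the first angle
        # where the tag's bounding box doesn't overlap anything placed so far
        w, h = size * 0.6 * len(label), size * 1.2  # crude text metrics
        for step in range(72):
            angle = step * 2 * math.pi / 72
            rect = (cx + distance * math.cos(angle) - w / 2,
                    cy + distance * math.sin(angle) - h / 2, w, h)
            if not any(overlaps(rect, p) for p in placed):
                placed.append(rect)
                return rect
        return None  # no free slot at this distance

    for label, size, distance in tags:
        print(label, place(label, size, distance))

Would something along these lines be reasonable, or is there a standard tag-cloud layout algorithm I should use instead?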
Thanks
I am attempting to build this example. However, I am getting the error:
error MSB8020: The build tools for WindowsApplicationForDrivers8.1 (Platform Toolset = 'WindowsApplicationForDrivers8.1') cannot be found. To build using the WindowsApplicationForDrivers8.1 build tools, either click the Project menu or right-click the solution, and then select "Update VC++ Projects...". Install WindowsApplicationForDrivers8.1 to build using the WindowsApplicationForDrivers8.1 build tools.
I have already installed the WDK 8.1 .exe, and when I right-click on the solution I can't find anything that says "Update VC++ Projects".
Any suggestions on how I might resolve this issue and build it? I am running Windows 7 and VS2012 Pro.