Search Results

Search found 32252 results on 1291 pages for 'software services'.


  • Tellago Technology Days: Enterprise Mobile Backend as a Service

    - by gsusx
    Last week, as part of Tellago's Technology Update, I delivered a presentation about modern enterprise mobility powered by cloud-based, mobile-backend-as-a-service models. During the presentation we covered some of the most common enterprise mBaaS patterns that can be implemented using current technologies. Below you can find the slide deck I used during the presentation. Feel free to take a look and send me some feedback....(read more)

    Read the article

  • running jar in a terminal using axis2

    - by Emilio
    I'm trying to run a Java application, distributed as a jar file, from the command line. It calls an Axis2 web service, so the jar contains an /axis2client directory with the rampart.mar security module. It works fine when I run it in NetBeans, but it throws an exception if I try to run it in a terminal with this command: java -jar myfile.jar

    The exception: org.apache.axis2.AxisFault: invalid url: //file:/home/xxx/Desktop/myfile.jar!/axis2client/ (java.net.MalformedURLException: no protocol: //file:/home/xxx/Desktop/myfile.jar)

    As you can see, it's trying to use the /axis2client directory inside the jar, as when I run it in NetBeans, but it fails with a MalformedURLException. I think it's something about the 'file:' protocol; probably '//file:/' should be 'file:///'. The problem is that I cannot change this call to the directory, because the method that loads the /axis2client directory isn't mine: it comes from another library that my project uses, which includes all the Axis2 support. So, any ideas? Thanks in advance!
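    One workaround worth sketching, assuming the library can be pointed at a plain filesystem path: extract the /axis2client repository out of the jar into a temporary directory at startup, then hand the library that path instead of the in-jar URL. RepoExtractor and its method are invented names for illustration:

        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;
        import java.util.Enumeration;
        import java.util.jar.JarEntry;
        import java.util.jar.JarFile;

        public class RepoExtractor {
            // Copies every axis2client/* entry of the given jar to a temp directory
            // and returns that directory, usable as an ordinary repository path.
            public static Path extractRepo(String jarPath) throws IOException {
                Path target = Files.createTempDirectory("axis2client");
                try (JarFile jar = new JarFile(jarPath)) {
                    Enumeration<JarEntry> entries = jar.entries();
                    while (entries.hasMoreElements()) {
                        JarEntry e = entries.nextElement();
                        if (!e.getName().startsWith("axis2client/")) continue;
                        Path out = target.resolve(e.getName().substring("axis2client/".length()));
                        if (e.isDirectory()) {
                            Files.createDirectories(out);
                        } else {
                            Files.createDirectories(out.getParent());
                            try (InputStream in = jar.getInputStream(e)) {
                                Files.copy(in, out, StandardCopyOption.REPLACE_EXISTING);
                            }
                        }
                    }
                }
                return target;
            }
        }

    Calling extractRepo early in main() with the running jar's own path (obtainable from the class's CodeSource location) gives the Axis2-based library a real directory to load from.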

    Read the article

  • Accessing resources on localhost using domain credentials

    - by jas
    I'm trying to set up Team Foundation Server 2010, SharePoint Server 2010 and Report Server 2008 R2. I apologize for how long my question/problem is, but I'm really lost on where to even look, so I'm being as descriptive as possible in the hope that I'm making sense.

    The goal: since developers can be inside or outside the firewall, there needs to be a single HTTP point of entry to TFS that works regardless of which side of the firewall you are on, and it needs to work with external access to SharePoint and Report Server. Meaning we have it set up in DNS so that buildserver.mydomain.com points to the build service box, which contains all of the services listed at the top of this post, and specific services are defined/located by port number. This is working great on every machine inside and out, except from the build server itself. All services must be able to work using external URLs.

    If I use http://buildserver.mydomain.com:4800/tfs (the external URL) from my notebook, which is behind the firewall, I'm able to log in with my domain credentials as expected. If the other developer points to the same URL from their home, which isn't on the domain, they are also able to log in using their domain credentials. However, if I am directly on buildserver and call SharePoint, TFS or Reporting Server from the box itself using the external URL (e.g. http://buildserver.mydomain.com:4800), I am prompted for a username and password. Entering my domain credentials results in another prompt to enter my credentials again. It will prompt three times regardless of which credentials are used (I have rights as a domain admin) and then, after the third prompt, directs me to a blank white page as though access was denied. There are no errors displayed on the page and nothing ends up in the event viewer.

    From buildserver, if I use just the host name (the internal URL), then I'm prompted a single time for credentials and it works; i.e. http://buildserver:4800/tfs works from the server itself. The behavior is identical for any service requiring authentication. Meaning from the box itself SharePoint Central Admin, the SharePoint web app, TFS, TFS Web Access, Report Server and Report Manager all fail using the external URL, but succeed if called using the internal URL.

    So the problem comes into play when configuring all of the services to work together. The only way to configure TFS is locally from the server, which means I must point to the internal Report Server URLs (http://buildserver:4800/reports and /reportServer respectively, instead of http://buildserver.mydomain.com:4800 like they need to be), since external URLs aren't working from the server itself. If I configure TFS to use the internal URL for Report Server, then creating team projects or working in the SharePoint site for a team project fails for anyone not inside the domain, since their machines have no idea who http://buildserver:4800/reports even is or how to resolve it.

    I have configured SharePoint with Alternate Access Mappings, as well as set up Report Server to listen for external URLs. The external URLs simply aren't working when called from the server itself. I hope this makes sense. Thanks for taking the time to read this rather verbose plea for help.
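    The symptom described (repeated credential prompts, but only when the fully qualified external name is used from the server itself) matches Windows' NTLM loopback check. Assuming that is indeed the cause, the workaround documented in Microsoft KB 896861 is either to whitelist the external host names or, more bluntly, to disable the check:

        :: Preferred: register the external name(s) the server may call itself by
        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 /v BackConnectionHostNames /t REG_MULTI_SZ /d buildserver.mydomain.com

        :: Blunter alternative: disable the loopback check entirely
        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1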

    Read the article

  • How do I restrict my kids' computing time?

    - by Takkat
    Access to our computer (not only to the internet) needs to be restricted for the accounts of my kids (7 and 8) until they are old enough to manage this by themselves. Until then we need to be able to define the following:

    - the hours of the day when computing is OK (e.g. 5-9 pm)
    - the days of the week when computing is not OK (e.g. Mondays to Fridays)
    - the amount of time allowed per day (e.g. 2 hours)

    In 11.10, all of the following tools that used to do the job no longer work:

    - Timekpr: not available for 11.10 through the PPA; the installed version from 11.04 does not work in 11.10.
    - Timeoutd: a command-line alternative, but removed from the repositories in 11.10.
    - Gnome Nanny: looks great, but repeatedly crashes and forces an X server restart, so we can't use or recommend it at the moment.

    Are there any other alternatives?
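    A sketch of one more command-line avenue, assuming the kids log in through LightDM on 11.10: the stock pam_time module can enforce login windows. It covers allowed hours and days, but not a cumulative daily budget, so it only addresses the first two requirements. The usernames kid1 and kid2 are placeholders:

        # /etc/pam.d/lightdm -- add this line so login consults time.conf
        account required pam_time.so

        # /etc/security/time.conf -- fields are services;ttys;users;times
        # allow kid1/kid2 to log in on weekdays (Wk) between 17:00 and 21:00 only
        *;*;kid1|kid2;Wk1700-2100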

    Read the article

  • Installing DotNetNuke using WebMatrix

    - by Chris Hammond
    Last week Microsoft released a new tool called WebMatrix, a tool for developing web applications and easily installing existing web applications. You can learn more about WebMatrix by visiting http://www.microsoft.com/web/webmatrix/. What does this have to do with DotNetNuke? Well, WebMatrix makes installing DotNetNuke very easy! Even easier than before, when you just used the Web Platform Installer, also from Microsoft. To be honest, using the Web Platform Installer alone unfortunately doesn't work...(read more)

    Read the article

  • Impressions on jQuery Mobile

    - by Jeff
    For the uninitiated, jQuery Mobile is a sweet little client framework that turns regular HTML into something more touch- and mobile-friendly. It results in a user interface that has bigger targets, rounded corners and simple skinning capability. When it was announced that ASP.NET MVC 4 would include support for a mobile-sensitive view engine, offering up alternate views for clients that fit the mobile profile, I was all over that. Combined with jQuery Mobile, it brought a chance to do some experimentation. I blitzed through the views in POP Forums and converted them all to mobile views. (For the curious, this first pass can be found here on CodePlex, while a more recent update that uses RC 2 of jQuery Mobile v1.1.0 is running on the demo site.)

    Initially, it was kind of a mixed bag. The jQuery Mobile demo site also acts as documentation, and it's reasonably complete. I had no problem getting a lot of basic views up quickly, splitting out portions of some pages as subpages that load in quickly. The default behavior in the older version was to slide the pages in, which looked a little weird when you were using a back button. They've since changed it so the default transition is a fade in/out. Because you're dealing with Web pages, I don't think anyone is really under the illusion that they're using a native app, so I don't know that this matters.

    I've tested extensively on iPad and Windows Phone, and to be honest, I've encountered a lot of issues. On Windows Phone, there is some kind of inconsistency that prevents proper respect for the viewport settings. The text background on text fields (for labeling) doesn't work, either. On both platforms, certain in-DOM page navigation links work only half of the time. Is this an issue of user error? Probably, but that's what's frustrating about it. Most of what you accomplish with this framework involves decorating various elements with CSS classes, and there isn't any design-time safety to speak of to make sure you're doing it right.

    I think the issues can be overcome, but there are some trade-offs to consider. The first is download size. Yes, the scripts and CSS do get cached, but that first hit will cost nearly 40k for the mobile parts. That's still a lot when you're on some crappy AT&T EDGE network, or hotel Wi-Fi. Then you have to ask yourself: do you really want your app to look like it's native to iOS? I'm not saying that's a bad thing, because consistent UI is good, but you will end up feeling a whole lot of sameness, and maybe you don't want that. I did some experimentation to try and Metro-ize the jQuery Mobile theme; it mostly works, but you get some weirdness on badges and buttons that I'm not crazy about. It probably just means you need to keep tweaking.

    At this point, I'm a little torn about whether or not I'll use it for POP Forums or one of the sites I'm working on. The benefits are pretty strong, but figuring out where I'm doing it wrong is proving a little time-consuming.

    Read the article

  • Working with Legacy code #3: Build a safety net.

    - by andrewstopford
    The first port of call in changing legacy code is a safety net; without one, your fingers will get burnt. Make your safety net a high-level functional test over the major areas of the application. Automate the test, plug it into your CI builds, and run it every night. The test should act as a final fail-safe as you work.
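    As a sketch of what such a net can look like for a web application (the URLs are placeholders, and JUnit 4 is assumed), even a crude test that walks the major pages and fails loudly is enough to start with:

        import static org.junit.Assert.assertEquals;

        import java.net.HttpURLConnection;
        import java.net.URL;
        import org.junit.Test;

        public class SafetyNetTest {
            // Returns the HTTP status for a GET of the given page.
            private int statusOf(String url) throws Exception {
                HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
                c.setRequestMethod("GET");
                return c.getResponseCode();
            }

            @Test
            public void majorAreasStillRespond() throws Exception {
                String[] pages = {
                    "http://localhost:8080/orders",    // placeholder major areas
                    "http://localhost:8080/customers",
                    "http://localhost:8080/reports"
                };
                for (String page : pages) {
                    assertEquals("broken: " + page, 200, statusOf(page));
                }
            }
        }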

    Read the article

  • How many questions is it appropriate to ask as an intern?

    - by Casey Patton
    So, I just started an internship, and I'm worried that I'm asking too many questions. I've been assigned a mentor who has been giving me projects and helping me learn the company's technologies and methodologies. However, there's so much new material to learn while doing this project that I have a lot of questions. I generally ask them over instant messages or e-mail (those are the primary modes of communication at my company). I'm trying to be careful not to ask too many questions: I don't want to come off as annoying or dumb. How many questions is it appropriate to ask? Once an hour? More? Less? Keep in mind, my mentor is also a fellow programmer who has his own responsibilities.

    Read the article

  • How can I download Youtube videos?

    - by Abhijit Navale
    First I tried youtube-dl, and every time (and every day), for every video, it gives the same error:

        youtube-dl http://www.youtube.com/watch?v=6zWwTTAc7O8
        [youtube] Setting language
        [youtube] 6zWwTTAc7O8: Downloading video info webpage
        [youtube] 6zWwTTAc7O8: Extracting video information
        ERROR: format not available for video

    Then I tried the latest version of Minitube, but it just can't open the video; it keeps trying, and is unable to play or download anything. Also, in the old days, whenever I played a video on youtube.com it was automatically saved in /tmp, but that doesn't happen these days either. What can I use for downloading YouTube videos? I am using Lucid, 64-bit.
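    Two things worth trying before switching tools (hedged, since behaviour depends on which youtube-dl version Lucid shipped): update youtube-dl itself, then ask it which formats the video actually offers and request one explicitly:

        sudo youtube-dl -U        # self-update; YouTube changes regularly break old versions
        youtube-dl -F http://www.youtube.com/watch?v=6zWwTTAc7O8    # list available format codes
        youtube-dl -f 18 http://www.youtube.com/watch?v=6zWwTTAc7O8 # e.g. format 18 (MP4 360p)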

    Read the article

  • synchronization web service methodologies or papers

    - by Grady Player
    I am building a web service (PHP + JSON) to sync with my iPhone app. The main goals are:

    - backup
    - provide a web view for printing, sorting and manipulating
    - allow a group to sync up and down

    I am aware of the logic problems with all of these items, i.e. if one person deletes something, do you persist this change to other users, collisions, etc. I am looking for any book or scholarly work, or even words of wisdom, that addresses the common issues:

    - when to detect changes of data with hashes vs. modified dates, or a combination (a small sketch of this follows below)
    - how to address consolidation of sequential IDs originating on different client nodes (this can be sidestepped in my context, but it would be interesting)
    - dealing with collisions (is there a universally safe way to do so?)
    - general best practices
    - how to structure the actual data transaction (ask for the whole list, then detect changes...)
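    On the first bullet, a minimal Java sketch of the hybrid idea (class and method names are invented for illustration): use the cheap modified-date comparison as a pre-filter, and fall back to a content hash when timestamps disagree, which guards against clock skew between client nodes:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public class ChangeDetector {
            // Stable fingerprint of a record's serialized content.
            public static String contentHash(String serializedRecord) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] digest = md.digest(serializedRecord.getBytes(StandardCharsets.UTF_8));
                StringBuilder sb = new StringBuilder();
                for (byte b : digest) sb.append(String.format("%02x", b));
                return sb.toString();
            }

            public static boolean hasChanged(long serverModified, long clientModified,
                                             String serverHash, String clientHash) {
                // Cheap check first: identical timestamps are taken as unchanged.
                if (serverModified == clientModified) return false;
                // Timestamps differ (possibly just clock skew): let the content decide.
                return !serverHash.equals(clientHash);
            }
        }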

    Read the article

  • #DAX Query Plan in SQL Server 2012 #Tabular

    - by Marco Russo (SQLBI)
    The SQL Server Profiler gives you a lot of information about the internal behavior of DAX queries sent to a BISM Tabular model. As in MDX, in DAX there is a Formula Engine (FE) and a Storage Engine (SE). The SE is usually handled by VertiPaq (unless you are using DirectQuery mode), and the VertiPaq SE Query class of events gives you a SQL-like syntax that represents the query sent to the storage engine. Another interesting class of events is the DAX Query Plan, which contains a couple...(read more)

    Read the article

  • What's the best Bittorrent client for 12.10 excluding utorrent?

    - by Brenton Horne
    The reason I've excluded uTorrent is that the uTorrent server doesn't appear to work for me; the details of those difficulties are in the question How do I install uTorrent?. Running it through Wine is an annoying workaround, because whenever I run it that way I have to manually relocate downloaded files, as they end up undiscoverable in the location uTorrent saves them to by default. I'd appreciate a decently sized compare-and-contrast answer.

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .NET web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work this library performs is to marshal calls to the web service and do a little reorganizing of the responses, to present a slightly more object-oriented interface to the underlying service. Since this library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests; for example, I don't see any reason to mock away the web service, because the work performed by the code I'm working on is very light: it's almost passing the response from the service right back to its consumer.

    Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state; the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable.

    My question is this: is it natural to use web service calls to confirm the results of operations within the tests, and to reset the state of the application within the scope of each test? Here's an example: one test might be createUserReturnsValidUserId() and might go like this:

        public function createUserReturnsValidUserId() {
            // we're assuming a global connection to the service
            $newUserId = $client->CreateUser("user1");
            assertNotNull($newUserId);
            assertNotNull($client->FindUser($newUserId));
            $client->DeleteUser($newUserId);
        }

    So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test with respect to, for example, the number of users in the system). However, this still seems pretty fragile: lots of dependencies, and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some options for decoupling these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.

    Read the article

  • Suggestions for connecting .NET WPF GUI with Java SE Server

    - by Sam Goldberg
    BACKGROUND: We are building a Java (SE) trading application which will monitor market data and send trade messages based on the market data and on user-defined configuration parameters. We plan to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead update the view once every few seconds (or whatever interval is configured by the user). The client has about six different operations it needs to perform with the server, for example:

    - CRUD with configuration parameters
    - query a subset of the data
    - receive updates of current positions from the server

    It is possible that most of the different operations (except for receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis to be sure. To connect the client with the server, we have been considering:

    - a SOAP web service
    - a RESTful service
    - building a custom TCP/IP-based API (text or XML); least preferred, but we use this approach with other applications we have

    As best as I understand it, the pros and cons of the different web service flavors are:

    SOAP. Pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just refreshing the Web Service reference to regenerate the classes. Con: more overhead in the communication layer, sending more text, etc.; we're not using a J2EE container, so maybe it doesn't work so well with J2SE.

    REST. Pro: lighter weight, less data; has good .NET and Java support. (I don't have any real experience with this, so I don't know what other benefits it has.) Con: the client will not automatically become aware of any new operations or properties added (?), so the communication layer needs to be updated by a developer if the server interface changes. Con (both approaches): the server cannot really push updates to the client at regular intervals (?); however, we won't mind if the client polls the server to get updates, and a sketch of that polling loop follows below.

    QUESTION: What are your opinions on the above options, or suggestions for other ways to connect the two parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated the better.)
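    On the polling point above, a minimal sketch of the client-side loop, written in Java here for consistency with the rest of this page (the real WPF client would do the equivalent in C#); the endpoint URL and five-second interval are placeholders:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class PositionPoller {
            public static void main(String[] args) {
                ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
                timer.scheduleAtFixedRate(() -> {
                    try {
                        // Placeholder endpoint returning the current positions.
                        URL url = new URL("http://tradingserver:8080/api/positions");
                        HttpURLConnection c = (HttpURLConnection) url.openConnection();
                        try (BufferedReader in = new BufferedReader(
                                new InputStreamReader(c.getInputStream()))) {
                            String line;
                            while ((line = in.readLine()) != null) {
                                System.out.println(line); // hand off to the view layer instead
                            }
                        }
                    } catch (Exception e) {
                        e.printStackTrace(); // keep polling across transient failures
                    }
                }, 0, 5, TimeUnit.SECONDS);
            }
        }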

    Read the article

  • Why not expose a primary key

    - by Angelo Neuschitzer
    In my education I was told that it is a flawed idea to expose actual primary keys (not only DB keys, but all primary accessors) to the user. I always thought it was a security problem (because an attacker could attempt to read stuff that isn't their own). Since I have to check whether the user is allowed access anyway, is there a different reason behind it? Also, as my users have to access the data somehow, I will need a public key for the outside world somewhere in between. Now that public key has the same problems as the primary key, doesn't it?
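    One pattern implied by that last paragraph, sketched here with invented names: keep the sequential primary key internal and expose a random identifier instead. The public id leaks neither row counts nor ordering, and enumeration becomes impractical, but the authorization check on every access is still required:

        import java.util.UUID;

        public class Order {
            private long id; // internal primary key, never leaves the server
            // Random, non-guessable identifier handed to clients instead.
            private final String publicId = UUID.randomUUID().toString();

            public String getPublicId() { return publicId; }
        }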

    Read the article

  • Enabling support of EUS and Fusion Apps in OUD

    - by Sylvain Duloutre
    Since the 11gR2 release, OUD has supported Enterprise User Security (EUS) for database authentication, and also Fusion Apps. I plan to blog about that soon. Meanwhile, the R2 OUD graphical setup does not let you configure both EUS and Fusion Apps support at the same time. However, it can be done manually with the dsconfig command line. The simplest way to proceed is to select EUS in the setup tool, then manually add support for Fusion Apps using the dsconfig commands below.

    Create an FA workflow element with eusWfe as its next element:

        dsconfig create-workflow-element \
            --set enabled:true \
            --set next-workflow-element:Eus0 \
            --type fa \
            --element-name faWfe

    Modify the workflow so that it starts from your FA workflow element instead of Eus:

        dsconfig set-workflow-prop \
            --workflow-name userRoot0 \
            --set workflow-element:faWfe

    Note: the configuration changes may differ slightly if multiple databases/suffixes are configured on OUD.

    Read the article

  • Good alternative to NetLimiter?

    - by Harsh
    There is a program called NetLimiter for Windows. While I was using Windows it was very useful for finding out the IP address of the person who was downloading from me, or the IP address of anyone on the LAN using DC++ under some nick; from that I could easily get the person's computer name using nbtstat. I was wondering if there is any tool for Ubuntu with which I can find out the IP address of the person who is downloading from me, or from whom I am downloading, on the LAN. I am on a university LAN and we are using PtokaX and DC++ for file sharing. People sometimes post offensive stuff in open chat on DC++ under some nick, and I don't know how to trace them while I am using Ubuntu.
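    Not a full NetLimiter replacement, but for the narrower "which IP am I transferring with" question, a few stock tools go a long way (package availability on your university mirror is an assumption):

        sudo apt-get install iftop nethogs
        sudo iftop                         # live per-connection bandwidth, shows peer IPs
        sudo nethogs eth0                  # per-process traffic, handy to single out DC++
        netstat -tn | grep ESTABLISHED     # plain list of current peer addresses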

    Read the article

  • What do I need to develop a PHP extension in lampp?

    - by Fernando Costa
    I'm dealing with a problem in my system: I have to deliver the system to clients, and it was built in PHP, JS, shell script and SQL. I would like to encrypt the code or obfuscate it from the eyes of others. Someone from the community told me about building my own PHP extension, which sounds like a great idea, since the logic would not sit with the main code of the system. But there is a problem with this approach: if a programmer gets into the extensions and finds it, all the hard work is gone. So I'm here to ask again about this matter. What is the best way to hide my business logic from third parties? I know there are tools like ionCube, Zend Guard and many others, but I'm looking for something I can build myself. Is a PHP extension the right way to go? Or some half-SaaS setup, with the dependencies (business logic) on a remote server?

    About the environment. OS: Linux kernel 2.6.37.1-1.2, LAMPP (Apache 2.2, MySQL 5.5, PHP 5.3.8). In PHP the extensions are generally located under /php/ext/, but in LAMPP I have no idea where that is; I just found a folder at /opt/lampp/lib/php/extensions/. Is that the right place?
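    On that last question, the most reliable move is to ask the bundled PHP binary itself where it loads extensions from (assuming the usual XAMPP-for-Linux layout under /opt/lampp):

        /opt/lampp/bin/php -i | grep extension_dir    # phpinfo() output includes the directory
        /opt/lampp/bin/php-config --extension-dir     # if php-config is bundled, prints it directly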

    Read the article

  • How would you gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    I'm relatively new to StackExchange and not sure if this is an appropriate place to ask a design question. The site gives me a hint: "The question you're asking appears subjective and is likely to be closed". Please let me know. Anyway...

    One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. The initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the user's answers match the predefined answers of reference answerers (identified by IDs); I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, for a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created this entity:

        public class HitsStatsDO implements Serializable {
            @Id transient private Long id;
            transient private Long version = (long) 0;
            transient private Long startDate;
            @Parent transient private Key parent; // fake parent which contains target id
            @Transient int targetId;
            private double avgPercent;
            private long hitCount;
        }

    But writing a HitsStatsDO for each target from each user would produce a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering evenly over 24 hours, it would give us ~1040 writes/second. Obviously that hits the concurrent-write limit of the Datastore. So I decided to collect data for one hour and then save it; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

        // Contains stats for one hour
        private class Shard {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // key is target ID

            public void saveToDatastore();
            public void updateStats(Long startDate, Map<Integer, Double> hits);
        }

    and a map holding the shard for the current hour and the previous hour (which doesn't stay around for long):

        private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // key is HitsStatsDO.startDate

    Once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcache, where the map key is the target ID. Also, in my backend's shutdown hook I dump the stats for the unfinished hour to the Datastore. There is only one major issue here: I have only ONE backend instance :) This raises the following questions, on which I'd like to hear your opinion:

    - Can I do this without using a backend instance?
    - What if one instance is not enough? How can I split data between multiple dynamic backend instances? This is hard because I don't know how many I have, since Google creates new ones as load increases. I know I can launch an exact number of resident backend instances, but how many? 2, 5, 10? What if there is no load at all for a week? Constantly running 10 backend instances is too expensive.
    - What do I do with data from clients while a backend instance is dead/restarting?

    Thank you very much in advance for your thoughts.

    Read the article

  • How do I install kivy?

    - by aspasia
    I was trying to install Kivy (by following the instructions here). I downloaded and installed all the packages, and the installation went through without giving me any errors. However, when I later enter the command below:

        sudo easy_install kivy

    it looks like it is going to work, but it ends with an error, displaying the following lines, which I don't understand:

        Detected compiler is unix
        /tmp/easy_install-BtOA_u/Kivy-1.8.0/kivy/graphics/texture.c:8:22: fatal error: pyconfig.h: No such file or directory
         #include "pyconfig.h"
                              ^
        compilation terminated.
        error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    I saw a similar question, Problem with kivy installation, but it didn't work for me, though it suggests installing libgles-mesa-dev-lts-raring, which I did as below:

        sudo apt-get install libgles-mesa-dev-lts-raring

    which then gave:

        E: Unable to locate package libgles-mesa-dev-lts-raring

    (Sorry for being so specific and perhaps obvious, but I'm in the early stages of learning my way around Linux.) That user was running Ubuntu 12.04, and most other related questions I've seen came from people on a different release from mine, which has led me to believe that this is why those suggestions didn't solve my problem. I'm using Ubuntu 13.10.
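    pyconfig.h belongs to Python's development headers, so this particular failure usually means the python-dev package is missing rather than anything Kivy-specific. A hedged suggestion:

        sudo apt-get install build-essential python-dev   # provides pyconfig.h and the compiler toolchain
        sudo easy_install kivy                            # then retry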

    Read the article

  • consume a .net webservice using jQuery

    - by Babunareshnarra
    This implementation shows one way to consume a web service using jQuery. Client-side AJAX with an HTTP POST request is significant when it comes to loading speed and responsiveness. Following is the service, which returns a string in JSON:

        [WebMethod]
        [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public string getData(string marks)
        {
            // Note: concatenating user input into SQL is open to injection;
            // a parameterized query would be safer here.
            DataTable dt = retrieveDataTable("table", @"
                SELECT * FROM TABLE WHERE MARKS='" + marks.ToString() + "' ");
            List<object> RowList = new List<object>();
            foreach (DataRow dr in dt.Rows)
            {
                Dictionary<object, object> ColList = new Dictionary<object, object>();
                foreach (DataColumn dc in dt.Columns)
                {
                    ColList.Add(dc.ColumnName,
                        (string.Empty == dr[dc].ToString()) ? null : dr[dc]);
                }
                RowList.Add(ColList);
            }
            JavaScriptSerializer js = new JavaScriptSerializer();
            string JSON = js.Serialize(RowList);
            return JSON;
        }

    Consuming the web service:

        $.ajax({
            type: "POST",
            data: '{ "marks": "' + val + '"}', // required when passing parameters
            contentType: "application/json",
            dataType: "json",
            url: "/dataservice.asmx/getData",
            success: function (response) {
                RES = JSON.parse(response.d);
                var obj = JSON.stringify(RES);
            },
            error: function (msg) {
                alert('failure');
            }
        });

    Remember to reference the jQuery library on the page.

    Read the article

  • How to achieve a loosely coupled REST API but with a defined and well understood contract?

    - by BestPractices
    I am new to REST and am struggling to understand how one would properly design a REST system that allows for loose coupling while at the same time letting a consumer of the REST API understand it. If, in my client code, I issue a GET request for a resource and get back XML, how do I know what to do with that XML? For example, if it contains <fname>John</fname><lname>Smith</lname>, how do I know that these refer to the concepts "first name" and "last name"? Is it up to the person writing the REST API to define somewhere in documentation what each of the XML fields means? What if the producer of the API wants to change the implementation to use <firstname> instead of <fname>? How do they do this and notify their consumers that the change occurred? Or do the consumers just encounter the error, look at the payload, and figure out on their own that it changed?

    I've read in REST in Practice that using a WADL tool to create a client implementation based on the WADL (and hide the fact that you're making a distributed call) is an "anti-pattern". But I was planning to do this: at least then I would have a statically typed API call and, if it changed, I would know at compile time rather than at run time. Why is generating client code based on a WADL a bad thing? And how do I know what to do with the links returned in the response to a POST to a REST API? What defines this contract and gives true meaning to what each link will do? Please help! I don't understand how to go from statically typed, or even SOAP/RPC, to REST!
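    A small sketch of the hypermedia idea behind that last question (element names and URLs invented): the response carries both the data and links advertising what the client may do next, so the contract lives in the documented media type rather than in a generated stub:

        <customer xmlns="urn:example:crm">
          <fname>John</fname>
          <lname>Smith</lname>
          <link rel="self"   href="http://example.com/customers/42"/>
          <link rel="orders" href="http://example.com/customers/42/orders"/>
        </customer>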

    Read the article

  • DotNetNuke 6 beta released

    - by Chris Hammond
    DotNetNuke 6 is coming, DotNetNuke 6 is coming! That's right, we're getting close; close enough that we had our first "beta" for DNN6 today. While we've had a couple of CTP (community technology preview) releases, the beta today has quite a few things wrapped up and addressed. There are a number of new things coming in DotNetNuke 6, and rather than try to explain them all, I'll point you to Joe Brinkman's blog post from this morning. The biggest thing is that most of the Admin and Module settings...(read more)

    Read the article

  • Are there any good music mixers available, equivalent to Windows "MP3 Tunes"?

    - by RobinJ
    In Windows my dad used to have a program called MP3 Tunes. I have tried running it with Wine, and it worked, but strange things kept happening to the program, so it's not a reliable way to play music. Basically I just want to have two players (in a single window) with these features:

    - preloading tracks in a player without immediately starting them
    - fading from one track to the other
    - a timer on each player

    These features are also desired, but not required:

    - microphone input
    - prelistening before loading a song into a player (through a separate sound card)
    - pitch/tempo control
    - just being able to browse folders in the filesystem (without things like a music library)

    Here are some screenshots of the program to clarify what I'm looking for:

    Read the article
