Search Results

Search found 1314 results on 53 pages for 'vital al kapus'.

Page 38/53 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • Creating Dependencies Only to be able to Unit Test

    - by arin
    I just created a Manager that deals with a SuperClass that is extended all over the code base and registered with some sort of SuperClassManager (SCM). Now I would like to test my Manager, which is aware of only the SuperClass. I tried to create a concrete SCM; however, it depends on a third-party library, and therefore I failed to do that in my jUnit test. Now the option is to mock all instances of this SCM. All is good until now; however, when my Manager deals with the SCM, it returns children of the SuperClass that my Manager does not know or care about. Nevertheless, the identities of these children are vital for my tests (for equality, etc.). Since I cannot use the concrete SCM, I have to mock the results of calls to the appropriate functions of the SCM; however, this means that my tests, and therefore my Manager, need to know and care about the children of the SuperClass. Checking the code base, there does not seem to be a more appropriate location for my test (one that already maintains the appropriate real dependencies). Is it worth it to introduce unnecessary dependencies for the sake of unit testing?
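
    For illustration only, a minimal sketch of the mocking described above, assuming Mockito and JUnit 4; SuperClass, SuperClassManager, Manager and the getRegistered/findAll methods are stand-ins invented for the example, not names from the original code base. The point is that the mocked member is typed only as the SuperClass, so the test can assert identity without referencing the concrete children.

        import static org.mockito.Mockito.*;
        import static org.junit.Assert.*;
        import java.util.Collections;
        import java.util.List;
        import org.junit.Test;

        public class ManagerTest {
            // Minimal stand-ins for the types described in the question.
            interface SuperClass {}
            interface SuperClassManager { List<SuperClass> getRegistered(); }
            static class Manager {
                private final SuperClassManager scm;
                Manager(SuperClassManager scm) { this.scm = scm; }
                List<SuperClass> findAll() { return scm.getRegistered(); }
            }

            @Test
            public void managerSeesOnlySuperClassInstances() {
                // Mock the SCM so the test never touches the third-party library.
                SuperClassManager scm = mock(SuperClassManager.class);
                // The member is itself a mock typed as SuperClass, so identity
                // checks work without knowing the concrete child classes.
                SuperClass member = mock(SuperClass.class);
                when(scm.getRegistered()).thenReturn(Collections.singletonList(member));

                Manager manager = new Manager(scm);
                assertSame(member, manager.findAll().get(0));
            }
        }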

    Read the article

  • First run notepad with my.cfg and only then start the service

    - by Viv Coco
    Hi all, I install along with my application: 1) a service that starts and stops my application as needed, and 2) a conf file that actually contains the user data and that will be shown to the user to modify as needed (I give the user the chance to change it by running notepad.exe with my conf file during the install). The problem is that in my code the service I install starts before the user has had the chance to modify the conf file. What I would like is: 1) first the user gets the chance to change the conf file (run notepad.exe with the conf file), and 2) only afterwards start the service.
        <Component Id="MyService.exe" Guid="GUID">
          <File Id="MyService.exe" Source="MyService.exe" Name="MyService.exe" KeyPath="yes" Checksum="yes" />
          <ServiceInstall Id='ServiceInstall' DisplayName='MyService' Name='MyService' ErrorControl='normal' Start='auto' Type='ownProcess' Vital='yes'/>
          <ServiceControl Id='ServiceControl' Name='MyService' Start='install' Stop='both' Remove='uninstall'/>
        </Component>
        <Component Id="my.conf" Guid="" NeverOverwrite="yes">
          <File Id="my.cfg" Source="my.cfg_template" Name="my.cfg" KeyPath="yes" />
        </Component>
        [...]
        <Property Id="NOTEPAD">Notepad.exe</Property>
        <CustomAction Id="LaunchConfFile" Property="NOTEPAD" ExeCommand="[INSTALLDIR]my.cfg" Return="ignore" Impersonate="no" Execute="deferred"/>
        <!--Run only on installs-->
        <InstallExecuteSequence>
          <Custom Action='LaunchConfFile' Before='InstallFinalize'>(NOT Installed) AND (NOT UPGRADINGPRODUCTCODE)</Custom>
        </InstallExecuteSequence>
    What am I doing wrong in the above code, and how could I change it in order to achieve what I need (first run notepad with my conf file and then start the service)? TIA, Viv

    Read the article

  • Unresolved external symbol - MySQL API C++

    - by Zack_074
    Hey, I've had nonstop problems with SQL. I'm trying to get some experience because I know it's a vital part of the industry. I got it working with C#, but now I'm working on connecting to a database in C++. I have the project properly linked and what not. Here's my code and the errors I'm getting.
        #include "stdafx.h"
        #include <mysql.h>
        #include <iostream>

        MYSQL mysql;
        MYSQL_RES result;

        using namespace std;

        int _tmain(int argc, _TCHAR* argv[])
        {
            mysql_init(&mysql);
            if (!mysql_real_connect(&mysql, "localhost", "root", "angel552002", "MyDatabse", 0, NULL, 0))
            {
                printf("Failed to connect");
            }
            return 0;
        }
    and the errors:
        Error 1 error LNK2001: unresolved external symbol _mysql_real_connect@32 c:\Users\Zack-074\documents\visual studio 2010\Projects\MySql\MySql\MySql.obj
        Error 2 error LNK2001: unresolved external symbol _mysql_init@4 c:\Users\Zack-074\documents\visual studio 2010\Projects\MySql\MySql\MySql.obj
        Error 3 error LNK1120: 2 unresolved externals c:\users\zack-074\documents\visual studio 2010\Projects\MySql\Debug\MySql.exe 1
    I really appreciate the help.

    Read the article

  • How to restart a WCF server from within a client?

    - by djerry
    Hey guys, I'm using WCF for server-client communication. The server will need to run as a service, so there's no GUI. The admin can change settings using the client program, and for those changes to be made on the server, it needs to restart. This is my server setup:
        NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message);
        Uri address = new Uri("net.tcp://localhost:8000");
        //_svc = new ServiceHost(typeof(MonitoringSystemService), address);
        _monSysService = new MonitoringSystemService();
        _svc = new ServiceHost(_monSysService, address);
        publishMetaData(_svc, "http://localhost:8001");
        _svc.AddServiceEndpoint(typeof(IMonitoringSystemService), binding, "Monitoring Server");
        _svc.Open();
    MonitoringSystemService is a class I'm using to handle client-server communication. It looks like this:
        [CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, MaxItemsInObjectGraph = 2147483647)]
        public class MonitoringSystemService : IMonitoringSystemService {}
    So I need the client to call a restart method on the server, but I don't know how to restart (or even stop and start) the server. I hope I'm not missing any vital information. Thanks in advance.

    Read the article

  • Objective-C Getter Memory Management

    - by Marian André
    I'm fairly new to Objective-C and am not sure how to correctly deal with memory management in the following scenario: I have a Core Data Entity with a to-many relationship for the key "children". In order to access the children as an array, sorted by the column "position", I wrote the model class this way:
        @interface AbstractItem : NSManagedObject {
            NSArray * arrangedChildren;
        }
        @property (nonatomic, retain) NSSet * children;
        @property (nonatomic, retain) NSNumber * position;
        @property (nonatomic, retain) NSArray * arrangedChildren;
        @end

        @implementation AbstractItem
        @dynamic children;
        @dynamic position;
        @synthesize arrangedChildren;

        - (NSArray*)arrangedChildren
        {
            NSArray* unarrangedChildren = [[self.children allObjects] retain];
            NSSortDescriptor* sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"position" ascending:YES];
            [arrangedChildren release];
            arrangedChildren = [unarrangedChildren sortedArrayUsingDescriptors:[NSArray arrayWithObject:sortDescriptor]];
            [sortDescriptor release];
            [unarrangedChildren release];
            return [arrangedChildren retain];
        }
        @end
    I'm not sure whether or not to retain unarrangedChildren and the returned arrangedChildren (first and last line of the arrangedChildren getter). Does the NSSet allObjects method already return a retained array? It's probably too late and I have a coffee overdose. I'd be really thankful if someone could point me in the right direction. I guess I'm missing vital parts of memory management knowledge and I will definitely look into it thoroughly.

    Read the article

  • .Net lambda expression-- where did this parameter come from?

    - by larryq
    I'm a lambda newbie, so if I'm missing vital information in my description please tell me. I'll keep the example as simple as possible. I'm going over someone else's code and they have one class inheriting from another. Here's the derived class first, along with the lambda expression I'm having trouble understanding:
        class SampleViewModel : ViewModelBase
        {
            private ICustomerStorage storage = ModelFactory<ICustomerStorage>.Create();

            public ICustomer CurrentCustomer
            {
                get { return (ICustomer)GetValue(CurrentCustomerProperty); }
                set { SetValue(CurrentCustomerProperty, value); }
            }

            private int quantitySaved;
            public int QuantitySaved
            {
                get { return quantitySaved; }
                set
                {
                    if (quantitySaved != value)
                    {
                        quantitySaved = value;
                        NotifyPropertyChanged(p => QuantitySaved); // where does 'p' come from?
                    }
                }
            }

            public static readonly DependencyProperty CurrentCustomerProperty;

            static SampleViewModel()
            {
                CurrentCustomerProperty = DependencyProperty.Register("CurrentCustomer", typeof(ICustomer),
                    typeof(SampleViewModel), new UIPropertyMetadata(ModelFactory<ICustomer>.Create()));
            }

            // more method definitions follow...
    Note the call to NotifyPropertyChanged(p => QuantitySaved) above. I don't understand where the "p" is coming from. Here's the base class:
        public abstract class ViewModelBase : DependencyObject, INotifyPropertyChanged, IXtremeMvvmViewModel
        {
            public event PropertyChangedEventHandler PropertyChanged;

            protected virtual void NotifyPropertyChanged<T>(Expression<Func<ViewModelBase, T>> property)
            {
                MvvmHelper.NotifyPropertyChanged(property, PropertyChanged);
            }
        }
    There's a lot in there that's not germane to the question I'm sure, but I wanted to err on the side of inclusiveness. The problem is, I don't understand where the 'p' parameter is coming from, and how the compiler knows to (evidently?) fill in a type value of ViewModelBase from thin air. For fun I changed the code from 'p' to 'this', since SampleViewModel inherits from ViewModelBase, but I was met with a series of compiler errors, the first one of which stated: Invalid expression term '=>'. This confused me a bit since I thought that would work. Can anyone explain what's happening here?

    Read the article

  • Moving UIScrollView in App

    - by jsetting32
    Currently, I am attempting to create the same effect as the Yahoo! Weather app, where the vital day information is at the bottom of the page on top of a UIScrollView that's contained by a UIView. I am having a hard time thinking about how this is going to happen or how I should implement this. If the user taps the top of the UIScrollView, which is located near the bottom of the loaded UIView, and starts to scroll up, the UIScrollView's frame should be moved to the TOP of the current UIView's frame. So the UIScrollView's y-value should change to the UIView's (self.view.frame.origin.y) if the user starts scrolling UP on the UIScrollView, which is located at the UIView's y-pixel ~280. Here's what the UIViewController should look like at the beginning of loading the ViewController... Then once the user slides his finger from the bottom to the top of the screen... this should happen... And when the user scrolls to the top of the UIScrollView with all the content within it... the view should go back to the start picture shown... How is this done? I was thinking several UIGestureRecognizers and instantiating the UIScrollView at the lower part of the UIView...
        _weatherView = [[UIScrollView alloc] initWithFrame:CGRectMake(self.view.frame.origin.x,
                                                                      self.view.frame.origin.y + 250,
                                                                      self.view.bounds.size.width,
                                                                      self.view.bounds.size.height - 44)];
        _weatherView.contentSize = CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height * 4);
        _weatherView.backgroundColor = [UIColor clearColor];
        [self.view addSubview:_weatherView];
    Then adding some UIGestureRecognizer delegate method... But does anyone have any ideas on the UIGestureRecognizer delegate method and how it should be implemented? I can write the pseudo-code but I am having problems finding the delegate methods :P Thank you!!! ---- Break Time.... :)

    Read the article

  • App crashes every second time a tableview row is selected in navigation controller setup

    - by Thaurin
    Disclaimer first: I'm pretty new to Objective-C and the retain model. I've been developing in a garbage-collected .NET environment for the last five years, so I've been spoiled. I'm still learning. My iPhone app is crashing with EXC_BAD_ACCESS. It happens in a navigation controller/tableview setup. When I select a row the first time, no problems. It switches in the child controller without problems. I go back and select the same row again. The program then proceeds to crash. Every other row works fine, but every second time a row is accessed, it's a crash. I've pinpointed the location where this happens. The child controller (which is a class that I reuse for every row of the same type) that's being switched in has an array of NSStrings representing the rows that will be displayed. I set it before pushing the child view controller. That's apparently where this happens. I'm having a hard time debugging this problem, still wrestling with Xcode and all. I fear there may be some vital information missing here, but maybe there is something you recognize.

    Read the article

  • Set service dependencies after install

    - by Dennis
    I have an application that runs as a Windows service. It stores various settings in a database that are looked up when the service starts. I built the service to support various types of databases (SQL Server, Oracle, MySQL, etc). Often times end users choose to configure the software to use SQL Server (they can simply modify a config file with the connection string and restart the service). The problem is that when their machine boots up, often times SQL Server is started after my service, so my service errors out on start up because it can't connect to the database. I know that I can specify dependencies for my service to help guide the Windows service manager to start the appropriate services before mine. However, I don't know what services to depend upon at install time (when my service is registered) since the user can change databases later on. So my question is: is there a way for the user to manually indicate the service dependencies based on the database that they are using? If not, what is the proper design approach that I should be taking? I've thought about trying to do something like wait 30 seconds after my service starts up before connecting to the database, but this seems really flaky for various reasons. I've also considered trying to "lazily" connect to the database; the problem is that I need a connection immediately upon start up since the database contains various pieces of vital info that my service needs when it first starts. Any ideas?

    Read the article

  • Setting up a web development/build environment

    - by Eric
    Hello all, My current project has a development web server and live web server. Developers make changes to files on the dev server and test them (by going to the dev address) and make changes as necessary. When the file or files are ready to go, they are copied to the live server. There is no version control. As you might expect, there are some problems with this model: It's hard to keep track of what other programmers have done. It's hard to keep track of what files should be copied to the live server. There is no version control. I'm in a position to make nearly any change I like, but I want it to be the right one! I have been turning this over in my head for quite a while, and I have a solution that might be okay. But I want SO's opinion. Certainly version control needs to be added. But how should it work with the existing codebase and where should the developers be testing? How can anyone know what needs to be moved to the live server? What other details need to be addressed? How would you attack this problem? Supplementary information: The website is vital, but not mission critical. A small amount of downtime is acceptable. There are very few developers. (Right now, only 4.) History: Before I started, the project used Visual Source Safe. This was a sufficiently bad experience that they quit using it and abandoned version control. The project is an ASP.NET (C#) website. This seems like a question that may have a complicated answer. Thanks for thinking about it!

    Read the article

  • Android: Adding data to Intent fails to load Activity

    - by DroidIn.net
    I have a widget that is supposed to call an Activity of the main app when the user clicks on the widget body. My setup works for a single widget instance, but for a second instance of the same widget the PendingIntent gets reused, and as a result the vital information that I'm sending as an extra gets overwritten for the 1st instance. So I figured that I should pass the widget ID as Intent data; however, as soon as I add Intent#setData I can see in the log that 2 separate Intents are appropriately fired, but the Activity fails to pick them up, so basically the Activity will not come up and nothing happens (no error or warning either). Here's how the activity is set up in the Manifest:
        <activity android:name=".SearchResultsView" android:label="@string/search_results">
            <intent-filter>
                <action android:name="bostone.android.search.RESULTS" />
                <category android:name="android.intent.category.DEFAULT" />
            </intent-filter>
        </activity>
    And here's the code that is set up for handling the click:
        Intent di = new Intent("bostone.android.search.RESULTS");
        di.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        // if the line below is commented out - the Activity will start
        di.setData(ContentUris.withAppendedId(Uri.EMPTY, widgetId));
        di.putExtra("URL", url);
        views.setOnClickPendingIntent(R.id.widgetContent, PendingIntent.getActivity(this, 0, di, 0));
    The main app and the widget are packaged as 2 separate APKs, each in its own package and Manifest.
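
    Not necessarily the fix the poster ended up with, but for illustration: one commonly suggested workaround for this kind of PendingIntent reuse is to give each widget instance its own PendingIntent by varying the request code (here, the widget ID) and passing FLAG_UPDATE_CURRENT, so setData is not needed at all. The helper below is a hypothetical sketch; only the action string and the "URL" extra are taken from the question.

        import android.app.PendingIntent;
        import android.appwidget.AppWidgetManager;
        import android.content.Context;
        import android.content.Intent;
        import android.widget.RemoteViews;

        public class WidgetClickHelper {
            // Attach a click PendingIntent that is unique per widget instance.
            public static void bindClick(Context ctx, RemoteViews views, int viewId,
                                         int widgetId, String url) {
                Intent di = new Intent("bostone.android.search.RESULTS");
                di.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                di.putExtra("URL", url); // same extra as in the question
                di.putExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, widgetId);
                // Using widgetId as the request code keeps each widget's PendingIntent
                // distinct; FLAG_UPDATE_CURRENT refreshes the extras on updates.
                PendingIntent pi = PendingIntent.getActivity(
                        ctx, widgetId, di, PendingIntent.FLAG_UPDATE_CURRENT);
                views.setOnClickPendingIntent(viewId, pi);
            }
        }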

    Read the article

  • Quantifying the amount of change in a git diff?

    - by Alex Feinman
    I use git for a slightly unusual purpose--it stores my text as I write fiction. (I know, I know...geeky.) I am trying to keep track of productivity, and want to measure the degree of difference between subsequent commits. The writer's proxy for "work" is "words written", at least during the creation stage. I can't use straight word count as it ignores editing and compression, both vital parts of writing. I think I want to track: (words added)+(words removed) which will double-count (words changed), but I'm okay with that. It'd be great to type some magic incantation and have git report this distance metric for any two revisions. However, git diffs are patches, which show entire lines even if you've only twiddled one character on the line; I don't want that, especially since my 'lines' are paragraphs. Ideally I'd even be able to specify what I mean by "word" (though \W+ would probably be acceptable). Is there a flag to git-diff to give diffs on a word-by-word basis? Alternately, is there a solution using standard command-line tools to compute the metric above?
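
    For what it's worth, git does have a built-in word-level diff mode (git diff --word-diff, with --word-diff-regex to define what counts as a word). Below is a rough sketch of the (words added)+(words removed) metric computed in Java from the porcelain output; the default revisions and the \w+ word regex are placeholders, and it assumes porcelain mode emits changed word runs on lines prefixed with '+' or '-'.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class WordDelta {
            // Count (words added) + (words removed) between two revisions by
            // parsing `git diff --word-diff=porcelain`.
            public static void main(String[] args) throws Exception {
                String from = args.length > 0 ? args[0] : "HEAD~1"; // placeholder revisions
                String to = args.length > 1 ? args[1] : "HEAD";
                Process p = new ProcessBuilder("git", "diff",
                        "--word-diff=porcelain", "--word-diff-regex=\\w+", from, to)
                        .redirectErrorStream(true).start();
                long words = 0;
                try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        // Skip file headers and hunk/newline markers.
                        if (line.startsWith("+++") || line.startsWith("---")
                                || line.startsWith("@@") || line.startsWith("~")
                                || line.startsWith("diff ") || line.startsWith("index ")) continue;
                        if (line.startsWith("+") || line.startsWith("-")) {
                            String run = line.substring(1).trim();
                            if (!run.isEmpty()) words += run.split("\\s+").length;
                        }
                    }
                }
                p.waitFor();
                System.out.println("words added + removed: " + words);
            }
        }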

    Read the article

  • class modifier issues in C# with "private" classes

    - by devoured elysium
    I had a class that had lots of methods:
        public class MyClass
        {
            public bool checkConditions()
            {
                return checkCondition1() && checkCondition2() && checkCondition3();
            }
            // ...conditions methods

            public void DoProcess()
            {
                FirstPartOfProcess();
                SecondPartOfProcess();
                ThirdPartOfProcess();
            }
            // ...process methods
        }
    I identified two "vital" work areas and decided to extract those methods into classes of their own:
        public class MyClass
        {
            private readonly MyClassConditions _conditions = new ...;
            private readonly MyClassProcessExecution = new ...;

            public bool checkConditions()
            {
                return _conditions.checkConditions();
            }

            public void DoProcess()
            {
                _process.DoProcess();
            }
        }
    In Java, I'd define MyClassConditions and MyClassProcessExecution as package protected, but I can't do that in C#. How would you go about doing this in C#? Setting both classes as inner classes of MyClass? I have 2 options there: I either define them inside MyClass, having everything in the same file, which looks confusing and ugly, or I can define MyClass as a partial class, having one file for MyClass, another for MyClassConditions and another for MyClassProcessExecution. Defining them as internal? I don't really like the internal modifier that much, as I don't find these classes add any value at all for the rest of my program/assembly, and I'd like to hide them if possible. It's not like they're gonna be useful/reusable in any other part of the program. Keep them as public? I can't see why, but I've left this option here. Any other? Name it! Thanks

    Read the article

  • iPhone: Which are the most useful techniques for faster Bluetooth?

    - by Mike Howard
    Hi. I'm adding peer-to-peer Bluetooth using GameKit to an iPhone shoot-em-up, so speed is vital. I'm sending about 40 messages a second each way, most of them with the faster GKSendDataUnreliable, all serialized with NSCoding. In testing between a 3G and 3GS, this is slowing the 3G down a lot more than I'd like. I'm wondering where I should concentrate my efforts to speed it up. How much slower is GKSendDataReliable? For the few packets that have to get there, would it be faster to send a GKSendDataUnreliable and have the peer send an acknowledgement so I can send again if I don't get the Ack within, say, 100ms? How much faster would it be to create the NSData instance using a regular C array rather than archiving with the NSCoding protocol? Is this serialization process (for about a dozen floats) just as slow as you'd expect from object creation/deallocation overhead, or is something particularly slow happening? I heard that (for example) sending four separate sets of data is much, much slower than sending one piece of data four times the size. Would I make a significant saving by sending separate packets of data that wouldn't always go together in the same packet when they happen at the same time? Are there any other Bluetooth performance secrets I've missed? Thanks for your help.

    Read the article

  • Log activity, intrusion detection, user event notification (interaction), messaging

    - by Julian Davchev
    I have three questions that I somehow find related, so I put them in the same place. I'm currently building a relatively large LAMP system, making use of messaging (ActiveMQ), memcache and other goodies. I wonder if there are best practices or nice tips and tricks on how to implement those. The system is user-aware, meaning all actions done can be bound to a particular logged-in user. 1. How do I log all actions/activities of users, so that stats/graphics might be extracted later for analysis? At best that will include all URL calls, POST data, etc., meaning tons of inserts. I am thinking that sending messages to ActiveMQ, with a cron job later dumping them into the DB and another analysing them, might be a good idea here. Since I'm using Zend Framework, I guess I can use some request plugin so I don't have to make the log() call all over the code. 2. How do I log stuff so it may be used for intrusion detection? I know most things might be done at the HTTP level using Apache mods, for example, but there are also specific cases (5 failed login attempts in a row leads to a captcha, etc.). This also would include tons of inserts. Here I guess direct usage of memcache might be the best approach, as the data doesn't seem vital enough to be permanently persisted. Not sure if I can't reuse the data from point 1. 3. The system will notify users of some events, like needing approval, something broke... whatever. Some events will need feedback (action) from the user, others are just informational. I wonder if there are common solutions for needs like this. Example: based on occurring event(s), the user will be notified (in a user inbox, for example) of what happened. There will be a link or something to lead him to the details of what happened so he can take action accordingly. Those seem trivial at first look, but the problem I see with coding it directly is that it quickly becomes hard to maintain.

    Read the article

  • How to have variables with dynamic data types in Java?

    - by Nazgulled
    Hi, I need to have a UserProfile class that is just that, a user profile. This user profile has some vital user data, of course, but it also needs to have lists of messages sent from the user's friends. I need to save these messages in a LinkedList, ArrayList, HashMap or TreeMap, but only one at a time, and not duplicate the messages for each data structure. Basically, I want something like a dynamic variable type where I could pick the data type for the messages. Is this somehow possible in Java? Or is my best approach something like this? I mean, have 2 different classes (for the user profile), one where I host the messages as Map<K,V> (and then I use HashMap and TreeMap where appropriate) and another class where I host them as List<E> (and then I use LinkedList and ArrayList where appropriate). And probably use a super class for the UserProfile so I don't have to duplicate variables and methods for fields like data, age, address, etc... Any thoughts?
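
    Purely as an illustration of the second approach described above (the class, field and message types here are invented for the example, not taken from the question): keep the shared profile fields in a small superclass and program the message containers against the List and Map interfaces, so any of LinkedList, ArrayList, HashMap or TreeMap can be plugged in without duplicating the messages.

        import java.util.*;

        // Shared profile fields live in one place, as the question suggests.
        abstract class UserProfile {
            String name;
            int age;
            String address;
        }

        // Works with any List implementation (LinkedList, ArrayList, ...).
        class ListBackedProfile extends UserProfile {
            private final List<String> messages;
            ListBackedProfile(List<String> messages) { this.messages = messages; }
            void addMessage(String m) { messages.add(m); }
        }

        // Works with any Map implementation (HashMap, TreeMap, ...), keyed here by sender.
        class MapBackedProfile extends UserProfile {
            private final Map<String, String> messages;
            MapBackedProfile(Map<String, String> messages) { this.messages = messages; }
            void addMessage(String sender, String m) { messages.put(sender, m); }
        }

        public class ProfileDemo {
            public static void main(String[] args) {
                ListBackedProfile a = new ListBackedProfile(new LinkedList<String>());
                a.addMessage("hi");
                MapBackedProfile b = new MapBackedProfile(new TreeMap<String, String>());
                b.addMessage("alice", "hello");
                System.out.println("profiles created: " + a + ", " + b);
            }
        }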

    Read the article

  • Built-in background-scheduling system in .NET?

    - by Lasse V. Karlsen
    I ask though I doubt there is any such system. Basically I need to schedule tasks to execute at some point in the future (usually no more than a few seconds or possibly minutes from now), and have some way of cancelling that request unless it is too late. I.e. code that would look like this:
        var x = Scheduler.Schedule(() => SomethingSomething(), TimeSpan.FromSeconds(5));
        ...
        x.Dispose(); // cancels the request
    Is there any such system in .NET? Is there anything in the TPL that can help me? I need to run such future-actions from various instances in a system here, and would rather avoid each such class instance having its own thread to deal with this. Also note that I don't want this (or similar, for instance through Tasks):
        new Thread(new ThreadStart(() =>
        {
            Thread.Sleep(5000);
            SomethingSomething();
        })).Start();
    There will potentially be a few such tasks to execute; they don't need to be executed in any particular order, except for close to their deadline, and it isn't vital that they have anything like a real-time performance guarantee. I just want to avoid spinning up a separate thread for each such action.
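
    The question is about .NET, but the shape of the API being asked for (schedule a deferred action against a shared pool and get back a handle that can cancel it) can be illustrated with Java's ScheduledExecutorService; the sketch below is only an analogy for that pattern, not a .NET answer.

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.ScheduledFuture;
        import java.util.concurrent.TimeUnit;

        public class SchedulerSketch {
            public static void main(String[] args) throws Exception {
                // One small shared pool serves all deferred actions; no thread per task.
                ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

                // Schedule the action ~5 seconds from now and keep a cancellable handle.
                ScheduledFuture<?> handle = scheduler.schedule(
                        () -> System.out.println("SomethingSomething()"),
                        5, TimeUnit.SECONDS);

                // ... later, cancel it unless it has already started running.
                boolean cancelled = handle.cancel(false);
                System.out.println("cancelled: " + cancelled);

                scheduler.shutdown();
            }
        }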

    Read the article

  • implementing user tracking (logging) in Rails 3

    - by seth.vargo
    Hi, I'm creating a Rails application in which logging individual user actions is vital. Every time a user clicks a URL, I want to log the action along with all parameters. Here is my current implementation:
        class CreateActivityLogs < ActiveRecord::Migration
          create_table :activity_logs do |t|
            t.references :user
            t.string :ip_address
            t.string :referring_url
            t.string :current_url
            t.text :params
            t.text :action
            t.timestamps
          end
        end

        class ActivityLog < ActiveRecord::Base
          belongs_to :user
        end
    In a controller, I'd like to be able to do something like the following:
        ...
        ActivityLog::log @user.id, params, 'did foo with bar'
        ...
    I'd like to have the ActivityLog::log method automatically get the IP address, referring URL, and current URL (I know how to do this already) and create a new record in the table. So, my questions are: How do I do this? How do I use ActivityLog without having to create an instance every time I want to log? Is this the best way? Some people have argued for a flat-file log for this kind of logging; however, I want admins to be able to see a user's activity in the backend as well, so I thought a database solution may be better?

    Read the article

  • Failure retrieving contents of directory

    - by Bondye
    Currently I have a couple of websites. My problem is that if I log in on 1 specific domain with any of my programs (Notepad++, FileZilla and NetBeans) the program stops at the content listing. I had it running correctly (I've been working on a project on this domain for more than a year now) and suddenly I broke it somehow. This only happens on 1 specific domain; all other domains (from other hosts) are working. My colleague (next to me with the same IP address) is able to log in on this domain. Notepad++ says: Failure retrieving contents of directory. FileZilla says: Failed to retrieve directory listing. NetBeans pops up: Upload files on save failed. (Because I have the setting "upload on save" enabled.) What I tried: First I thought it was my firewall; I disabled the firewall but no result. Also note that all other domains are working. Maybe a blacklist with my IP address? No, my colleague has the same IP address. Could anyone help me on this? Notepad++ log:
        [NppFTP] Everything initialized
        -> TYPE I
        Connecting
        -> Quit
        220 ProFTPD 1.3.3e Server ready.
        -> USER username
        331 Password required for domain
        -> PASS *HIDDEN*
        230 User username logged in
        -> TYPE A
        200 Type set to A
        -> MODE S
        200 Mode set to S
        -> STRU F
        200 Structure set to F
        -> CWD /domains/domain.nl/
        250 CWD command successful
        Connected
        -> CWD /domains/domain.nl/
        250 CWD command successful
        -> PASV
        227 Entering Passive Mode (194,247,31,xx,137,xx).
        -> LIST -al
        Failure retrieving contents of directory /domains/domain.nl/
    FileZilla log (translated from Dutch):
        Status: Connecting to 194.247.xx.xx:21...
        Status: Connection established, waiting for welcome message...
        Response: 220 ProFTPD 1.3.3e Server ready.
        Command: USER username
        Response: 331 Password required for username
        Command: PASS ********
        Response: 230 User username logged in
        Command: SYST
        Response: 215 UNIX Type: L8
        Command: FEAT
        Response: 211-Features:
        Response: MDTM
        Response: MFMT
        Response: LANG en-US;ja-JP;zh-TW;it-IT;fr-FR;zh-CN;ru-RU;bg-BG;ko-KR
        Response: TVFS
        Response: UTF8
        Response: AUTH TLS
        Response: MFF modify;UNIX.group;UNIX.mode;
        Response: MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.mode*;UNIX.owner*;
        Response: PBSZ
        Response: PROT
        Response: REST STREAM
        Response: SIZE
        Response: 211 End
        Command: OPTS UTF8 ON
        Response: 200 UTF8 set to on
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is the current directory
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode (194,247,31,xx,xxx,xx).
        Command: MLSD
        Error: Connection lost
        Error: Failed to retrieve directory listing

    Read the article

  • A paperless office?

    - by [email protected]
    We recently organized a digitization event to show some of Oracle's latest products in this area. We always tend to think that in Spain we are behind with these technologies and that the market is not ready to eliminate paper. In some cases that is true, but we have also been surprised by customers who are extremely advanced in the electronic handling of paper. For customers who do not yet have a corporate solution deployed, our Imaging offering strikes them as complete and integrated, because it lets them digitize paper at the point closest to where it is received and then carry out the whole internal process digitally. This is the process shown in the following image. Above all in the financial sector, customers already have large infrastructures deployed (some with very sophisticated functionality that they have custom-built over the last few years). In these cases, their interest is focused on two key capabilities of our products: distributed capture and intelligent OCR. When a centralized capture infrastructure is already in place, there are several points of improvement with which to achieve greater savings in paper handling. One of them is digitizing at the source, so that we save on the logistics of moving and storing paper (fewer courier pouches) and on how quickly processes get started (from the moment of reception). Being able to do this with nothing but a web browser is very novel for customers. Not having to install any client-side software seems to be a requirement many customers had been demanding for some time. In fact, we are running live demos with the customer's own scanner (we only need the Windows driver for that scanner). The result is striking because we show how we scan with just a web browser; the scanned document, with its metadata, is filed into the document manager; and its approval workflow is triggered. Doing this in seconds generates a lot of interest among customers looking to speed up the handling of many of their paper processes. Finally, the most novel part of the offering is intelligent OCR. Some customers already have their capture infrastructures fully operational with all these capabilities, and they are looking to go a step further with intelligent recognition of as much metadata as possible. The benefit is quick, clearly quantifiable and very high. The intelligent OCR software is based on fuzzy logic and lets us define validation thresholds fully suited to our confidence factors. That is, we configure the threshold so that when the software accepts a match we can be completely sure that the metadata has been recognized correctly; otherwise, the software triggers a manual validation. What happens if, for certain documents, 40%, 50%, 60% or even 70% or 80% of them are processed 100% automatically? The savings are immense, so is the reduction in processing time, and the integration with our capture infrastructures is very simple (just divert a few documents of a given type to Oracle Forms Recognition and evaluate the result). I encourage you to take a look at these products so that together we can make paper reduction a reality.

    Read the article

  • ADNOC talks about 50x increase in performance

    - by KLaker
    If you are still wondering how Exadata can revolutionise your business then I would recommend watching this great video, which was recorded at this year's OpenWorld. First, a little background. The Abu Dhabi National Oil Company for Distribution (ADNOC) is an integrated energy company that was founded in 1973. ADNOC Distribution markets and distributes petroleum products and services within the United Arab Emirates and internationally. As one of the largest and most innovative government-owned petroleum companies in the Arab Gulf, ADNOC Distribution is renowned and respected for the exceptional quality and reliability of its products and services. Its five corporate divisions include more than 200 filling stations (a number that is growing at 8% annually), more than 150 convenience stores, 10 vehicle inspection stations, as well as wholesale and retail sales of bulk fuel, gas, oil, diesel, and lubricants. ADNOC selected Oracle Exadata Database Machine after extensive research because it provided them with a single platform that can run mixed workloads in a single unified machine: "We chose Oracle Exadata Database Machine because it offered a fully integrated and highly engineered system that was ready to deploy. With our infrastructure running all the same technology, we can operate any type of Oracle Database without restrictions and be prepared for business growth," said Ali Abdul Aziz Al-Ali, IT division manager, ADNOC Distribution. "...we could consolidate our transaction processing and business intelligence onto one platform. Competing solutions are just not capable of doing that." - Awad Ahmed Ali El-Sidiq, Senior Database Administrator, ADNOC Distribution. In this new video Awad Ahmen Ali El Sidddig, Senior DBA at ADNOC, talks about the impact that Exadata has had on his team and the whole business. ADNOC is using our engineered systems to drive and manage all their workloads: from transaction systems to payment systems to the data warehouse to the BI environment. A true Disk-to-Dashboard revolution using Engineered Systems. This engineered approach is delivering a 50x improvement in performance, with some queries running 100x faster! The IT team has even revolutionised some of their data warehouse-related processes with the help of Exadata, and jobs that were taking over 4 hours now run in a few minutes. To watch the video click on the image below, which will take you to our Oracle YouTube page: (if the above link does not work, click here: http://www.youtube.com/watch?v=zcRpxc6u5Ic) Now that queries are running 100x faster and jobs are completing in minutes not hours, what is next for the IT team at ADNOC? Like many of our customers, ADNOC is now looking to take advantage of big data to help them better align their business operations with customer behaviour and customer insights. To help deliver this next level of insight the IT team is looking at the new features in Oracle Database 12c, such as the new in-memory feature, to deliver even more performance gains. The great news is that Awad Ahmen Ali El Sidddig was awarded DBA of the Year - EMEA within our Data Warehouse Global Leaders programme, and you can see the badge for this award pop up at the start of the video. Well done to everyone at ADNOC and thanks for spending the time with us at OOW to create this great video.

    Read the article

  • Desigual Extends Use of Oracle® ATG Web Commerce to Power Its International Online Expansion

    - by Noelia Gomez
    Desigual, the international fashion company, has extended its use of Oracle® ATG Web Commerce to support the growing international expansion of its commerce capabilities and to help deliver a more personalized shopping service to more customers around the world. Desigual first chose Oracle ATG Web Commerce in 2006 to launch its B2B platform and automate sales to its entire sales network. Then, in October 2010, Desigual launched its B2C platform using Oracle ATG Web Commerce, and it now runs online operations in nine countries and 11 different languages. To support this growing expansion of its commerce and merchandising operations into other geographies, Desigual decided to complete its existing architecture with Oracle ATG Web Commerce Merchandising and Oracle ATG Web Commerce Service Center. In addition, Desigual will implement Oracle Endeca Guided Search to let customers engage more efficiently with its shopping environment and quickly find the most relevant and sought-after products. Desigual will use the Oracle applications to give business users control over how the company delivers a more personalized, connected customer experience across the different channels, promoting personalized offers to each customer, prioritizing search results and seamlessly integrating web operations with the contact center to increase satisfaction and improve the outcome of those conversations. Since its launch in 2002, the Spanish retailer has grown rapidly and now offers its original fashion through 200 company-owned stores, 7,000 authorized retailers and 1,700 concession stores in 55 countries. Learn more about our Oracle Customer Experience solutions here.

    Read the article

  • Slide Menu with jQuery & ASP.NET

    - by Jason Ulloa
    In this post we will tackle something that can sometimes feel like a world of its own: creating menus in our web applications. Our goal will be to avoid using elements that can make the page a bit slow; for that we will use jQuery, a tool very much along the lines of Ajax, to build our menu. To create our example menu we will need three elements: 1. CSS, to apply the styles. 2. jQuery, to perform the animations. 3. Images, to assemble the menus. Our first step will be to add the references to our page, to include the CSS and the scripts.
        <link rel="stylesheet" type="text/css" href="Styles/jquery.hrzAccordion.defaults.css" />
        <link rel="stylesheet" type="text/css" href="Styles/jquery.hrzAccordion.examples.css" />
        <script type="text/javascript" src="JS/jquery-1.3.2.js"></script>
        <script type="text/javascript" src="JS/jquery.easing.1.3.js"></script>
        <script type="text/javascript" src="JS/jquery.hrzAccordion.js"></script>
        <script type="text/javascript" src="JS/jquery.hrzAccordion.examples.js"></script>
    Our second step will be to define the HTML that will contain the image elements and the text.
        <li>
          <div class="handle">
            <img src="images/title1.png" /></div>
          <img src="images/image_test.gif" align="left" />
          <h3>
            Contenido 1</h3>
          <p>
            Contenido de Ejemplo 1.<br>
            <br>
            Agregue todo el contenido aquí</p>
        </li>
    In the code above we have defined an element that will contain an image shown inside the menu once it is expanded, an HTML H3 tag that will hold the title, and a <p> element to define the paragraph of text. As you can see, it is really simple. If we want to add more elements, it is just a matter of copying the previous div and adding the new content. In the end, our menu should look something like this. Finally, I leave you the example to download.

    Read the article

  • Human Resources Best Practices: Cross Company Mentoring

    - by Fabian Gradolph
    One of the positive things about working in a large organization like Oracle is the chance to take part in far-reaching initiatives that are normally not available in many companies. Yesterday, together with American Express and Coca-Cola, the third edition of the Cross Company Mentoring program was presented, an initiative in which the three companies collaborate by providing mentors (experienced professionals) to promote the professional development of talented individuals across the three companies. The originality of the program lies in the fact that mentors contribute to the development of professionals from the other participating companies, not just their own. The opening presentation was given by Alfredo García-Valverde, president of American Express in Spain. Afterwards, Julia B. López of American Express and Rosa María Arias of Oracle (in that order in the photo) explained what the initiative consists of and took stock of the previous edition. Although this program (which complements those already running in the three companies) is available to both men and women, it is worth highlighting that a good part of its purpose is to strengthen the role of talented professional women in the companies. Generally speaking, all large organizations face a similar problem in developing female talent. Regardless of how many women are on a company's payroll, the truth is that their number drops drastically when we talk about management positions. Breaking that "glass ceiling" is a priority for companies, both for reasons of simple social justice and to make the most of all the talent potential that already exists within organizations, preventing female talent from being "lost" because the right development opportunities could not be provided. The Cross Company Mentoring initiative has well-defined objectives. First, to develop talent with an innovative method that makes it possible to learn about best practices in other companies and to draw on external talent. Additionally, as Julia López pointed out, it is a method that forces us out of the comfort zone of the practices traditionally accepted within each organization, which are rarely questioned. The second objective is for the mentee, the program's main beneficiary, to learn from the experience of professionals with long track records in order to develop their own solutions to the challenges their careers present. The program presented now, the third edition, will start next month and run until the end of the year. It is sure to be as successful as the two previous editions.

    Read the article
