PostgreSQL Jobs

93 jobs were found based on your criteria

Fixed-Price - Intermediate ($$) - Est. Budget: $100 - Posted
We are currently a seller on eBay. What I'm looking to do is simplify the process so we can get pull sheets for our puller in the warehouse and light room. Basically what happens is this:

1. Orders come in on a daily basis from 2 eBay stores.
2. We download what's called "Awaiting Shipments" (Awaiting Shipments Excel Sheet).
3. From the 2 "Awaiting Shipment" files for both stores we make 3 separate Pull Sheet files: one for all the large items (Fenders, Bumper Covers, Grilles, Radiator Supports, and a couple of others), one for the hoods, and one for the small items (Lights, Mirrors, Radiators, AC Condensers). All these items can be portioned out based on our classes, i.e. Lights is class 19, hoods is class 10, and so on.
4. The information used to populate the pull sheets comes from 4 different files: 2 text files (Location File Excel Sheet, Class File Excel Sheet), the "Awaiting Shipments Excel Sheet" file that is pulled off of eBay, and the SKU Key.

The general concept is simple, but some data needs to be incorporated and manipulated so that the files also include BIN Locations, Quantity, Type, Custom Label, Item Title, and Sales Order #, along with some columns to the side for the puller to mark either Y or N to indicate whether they pulled the item or not (Sample Pull Excel Sheet). The goal is to simplify this process as much as possible, so that anyone who comes in can generate a pull sheet without going through any of the steps. Currently we have to build the document to include all the additional information, so we're looking to simplify this so that no knowledge of Excel or any other program is needed to come in and make the pull sheets.

A couple of things to note: the BIN location file changes on a daily basis depending on which BIN the product was put back into. Looking at the "Sample Pull Excel Sheet" you'll see that line items 39, 40, 41, and 43 are highlighted. These line items are highlighted because they are part of one order, meaning a customer has purchased multiple items from our store, and those items need to be combined for shipment. Multiple-item purchases need to be highlighted. The columns also need to prioritize the Location column, the UOH column, and the Order # column on the Sample Pull Excel Sheet.

The "SKU Key Excel Sheet" shows all the products and the items that are pairs. Sometimes a customer will order a pair of headlights from one of our listings. If you view cell B80 you'll see the SKU 000.001x. The x on the end of the part number indicates a pair; in this case SKUs 0.001 + 0.002 = 000.001x, and so forth. Another example would be cell B90, which reads 000.011x: in this case 0.011 (A90) and 0.012 (A91) = 000.011x (B90). I don't think this is important, but for some lights the right side is the same as the left side: 2.059 (A194) + 2.059 (A195) = 002.059x (B194). No single sales order needs to be populated twice on any pull sheet; once the order has been populated and printed once for use, we don't need it anymore.

This is all I can think of. If there are any questions or concerns please let me know. Otherwise, please offer your suggestions and pricing for the job. The person selected for the job will be chosen based on 2 things: pricing and simplicity of the solution. Also, please don't auto-generate a sales pitch to me; those offers will be rejected. Thanks!

PS: I've defaulted to a fixed price of $100, but feel free to make an offer for more. As long as the proposition is proportional to the value it provides, I have no issues. I've also attached an Excel file with sheets showing what the information looks like.
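The workflow described above boils down to a merge-and-split over the input files. A minimal sketch of the core logic follows; all column names, class numbers, and the `pair_key` mapping shape are assumptions for illustration, since the real ones come from the attached Class File and SKU Key:

```python
from collections import Counter

# Assumed class groupings for routing rows to the 3 pull sheets
# (the posting says lights = class 19, hoods = class 10; the rest
# are placeholders that would come from the Class File).
HOOD_CLASSES = {"10"}
SMALL_CLASSES = {"19", "20", "21", "22"}  # lights, mirrors, radiators, condensers

def sheet_for(cls):
    """Route a class number to one of the 3 pull sheets."""
    if cls in HOOD_CLASSES:
        return "hoods"
    if cls in SMALL_CLASSES:
        return "small"
    return "large"

def expand_sku(sku, pair_key):
    """A SKU ending in 'x' is a pair; pair_key maps it to its two
    component SKUs (e.g. '000.001x' -> ['0.001', '0.002'])."""
    return pair_key.get(sku, [sku]) if sku.endswith("x") else [sku]

def build_pull_sheets(awaiting_rows, locations, classes, pair_key, printed_orders):
    """awaiting_rows: row dicts from the merged Awaiting Shipments files.
    Returns {'large': [...], 'hoods': [...], 'small': [...]} with BIN
    location and class merged in, multi-item orders flagged for
    highlighting, and already-printed sales orders skipped."""
    order_counts = Counter(r["order"] for r in awaiting_rows)
    sheets = {"large": [], "hoods": [], "small": []}
    for r in awaiting_rows:
        if r["order"] in printed_orders:  # never populate an order twice
            continue
        for sku in expand_sku(r["sku"], pair_key):
            sheets[sheet_for(classes.get(sku, ""))].append({
                "location": locations.get(sku, "?"),  # BIN changes daily
                "order": r["order"],
                "sku": sku,
                "title": r["title"],
                "qty": r["qty"],
                "highlight": order_counts[r["order"]] > 1,
                "pulled": "",  # puller marks Y or N by hand
            })
    return sheets
```

In practice the Excel I/O would sit around this (e.g. openpyxl or pandas), but the routing, pair expansion, highlight, and dedupe rules are the whole job.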
Skills: PostgreSQL Programming, Linear Programming, Microsoft Access Programming, Microsoft Excel
Fixed-Price - Intermediate ($$) - Est. Budget: $1,350 - Posted
Looking for an experienced developer for an ecommerce B2C marketplace, i.e. a listing of events and tickets. The DB (User and Item) is PostgreSQL; Items are already there. Need a system for: i) Users (consumers) to register, be surveyed, and be tracked during navigation of Items, plus a recommendation engine; ii) Super Users (Merchants) to register, navigate to events, and upload category, price, and quantity; iii) a dashboard for Super Users and Admin; iv) allowing Super Users and Admin to load/manually add Items and Tickets; v) payment processes. The design of the viewing, i.e. navigation/listing, is already done and quite specific.
Skills: PostgreSQL Programming
Fixed-Price - Intermediate ($$) - Est. Budget: $170 - Posted
Hi there, I need a modest hybrid (item-based) recommendation engine linking Users to Items in a database; the Items are mostly crawled, with a small portion (<5%) crowdsourced. Marketplace [this is the main focus of the project]: 1) User profiles are entered via i) registration, i.e. a preference survey (item ranking), logged in via FB/Google accounts; ii) online behaviour [this is not the main focus of the project]. 2) Items are classified with many attributes and assumed cleaned and structured; some have rankings by Users, but most are expected to lack them. Items are time-sensitive (they perish over time) but can still be used as recommendation reference points. -> Match Users with Items, i.e. predict a User's ranking on items.
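The core of an item-based engine like the one described is: compute similarity between items from the rankings that do exist, then predict a user's ranking of an unrated item as a similarity-weighted average of their other rankings. A minimal sketch (the `decay` knob for time-sensitive items is a hypothetical addition, not something the posting specifies):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two items' rating vectors, computed
    over the users who rated both (a, b: {user: rating})."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[u] * b[u] for u in common)
    den = sqrt(sum(a[u] ** 2 for u in common)) * sqrt(sum(b[u] ** 2 for u in common))
    return num / den if den else 0.0

def predict(user, item, ratings, decay=1.0):
    """Predict a user's ranking of `item` as the similarity-weighted
    average of their rankings on other items. `ratings` maps
    item -> {user: rating}. `decay` (0..1) is a hypothetical knob to
    down-weight perishing items while still using them as reference
    points. Returns None when no prediction is possible (cold start)."""
    num = den = 0.0
    for other, vec in ratings.items():
        if other == item or user not in vec:
            continue
        s = cosine(ratings[item], vec) * decay
        num += s * vec[user]
        den += abs(s)
    return num / den if den else None
```

This handles the sparse-rankings problem gracefully (items with no common raters simply contribute similarity 0); the survey rankings collected at registration would seed the `ratings` structure before any behavioural data exists.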
Skills: PostgreSQL Administration, Django, Ecommerce Platform Development, Pandas
Hourly - Intermediate ($$) - Est. Time: Less than 1 week, Less than 10 hrs/week - Posted
I have a CentOS 7 server (let's call it SSD1) running Drupal apps on top of Docker. 3 weeks after install, I noticed that Drupal was slower at the PostgreSQL level (via the Drupal Devel query log). Even importing the same Docker image to a SATA server (let's call it SATA2), which is supposed to be slower, performs better than the initial SSD server. ... In order to further isolate the problem, I installed vanilla Drupal directly on 3 bare-metal servers (SSD1, SATA2, SSD3). Initially I tested with the vanilla Drupal 7 Devel query log, but, in order to avoid false positives, I also installed vanilla Drupal 8, which comes with WebProfiler from Symfony. The benchmark sites are installed on both PostgreSQL and MariaDB, so the performance issue isn't specific to a certain database engine. ... To avoid false positives due to this layer, I also investigated the PostgreSQL slow query logs, which confirm that SSD1 is twice as slow. BTW, I have the initial logs from just after the SSD1 install, which confirm that SSD1 used to be fast.
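To compare the servers apples-to-apples below the application layer, one approach is to time a single known query repeatedly against each box and compare medians, as a sketch of what the slow-query-log comparison is doing. The harness below is generic; in practice each `run_query` callable would wrap a psycopg2 cursor or a `psql -c` call against one server (those wrappers are left as placeholders):

```python
import statistics
import time

def bench(run_query, repeats=100):
    """Time `run_query` (a zero-arg callable executing one SQL statement
    against one server) `repeats` times; return the median latency in ms.
    Median is less noisy than mean for spotting a server that is
    consistently ~2x slower, as the slow query logs here suggest."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        run_query()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(samples)

def compare(benchmarks, repeats=100):
    """benchmarks: {server_name: zero-arg callable}. Returns each server's
    median normalized to the fastest one, e.g. {'SSD1': 2.1, 'SATA2': 1.0},
    which makes a 'twice as slow' result easy to read off."""
    medians = {name: bench(fn, repeats) for name, fn in benchmarks.items()}
    fastest = min(medians.values())
    return {name: m / fastest for name, m in medians.items()}
```

Running the same query set through this on SSD1, SATA2, and SSD3, once with a freshly imported Docker image and once after a few weeks of uptime, would separate a hardware regression from something accumulating inside the containers.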
Skills: PostgreSQL Administration, CentOS, Linux System Administration, MySQL Administration