Postgresql offline limited multi-master in Postgres

Site A will be generating a set of records. Nightly they will back up their database and FTP it to Site B. Site B will not be modifying those records at all, but will be adding more records, and other tables will have FKs referencing Site A's records. So, essentially, I need to set up a system to take all the incremental changes from Site A's dump (mostly inserts and updates, but some deletes possible) and apply them at Site B. At this point, we're using Postgres 8.3, but could upgrade if valuab

PostgreSQL generate_series() with SQL function as arguments

I have a SQL function called get_forecast_history(integer,integer) that takes two arguments, a month and a year. The function returns a CUSTOM TYPE created with: CREATE TYPE fcholder AS (y integer, m integer, product varchar, actual real); The first line of the function definition is: CREATE OR REPLACE FUNCTION get_forecast_history(integer, integer) RETURNS SETOF fcholder AS $$ Calling: SELECT * FROM get_forecast_history(10, 2011); For example produces the following table (the result

How can I execute ALTER DATABASE $current_database in PostgreSQL

I'm trying to do this in a SQL script that I feed into psql: ALTER DATABASE dbname SET SEARCH_PATH TO myschema,public but I need dbname to be dynamically set to the current database rather than hard coded. Is this possible in PostgreSQL? I tried this but it doesn't work: ALTER DATABASE (select current_database()) SET SEARCH_PATH TO myschema,public;
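A minimal sketch of the usual workaround, assuming PostgreSQL 9.1 or later for DO and format(): build the statement dynamically, since ALTER DATABASE takes an identifier rather than an expression.

DO $$
BEGIN
  -- %I quotes the name returned by current_database() as an identifier
  EXECUTE format('ALTER DATABASE %I SET search_path TO myschema, public',
                 current_database());
END
$$;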

Postgresql What is a good CMS that is Postgres-compatible, open source, and either PHP or Python based?

Common features of a CMS, with admin tools to help manage / moderate the community. We have a large member base on a very basic site where members provide us contact info and info about their professional characteristics. We are about to expand and build a new community site (to migrate our member base to) where the users will be able to message each other, post to forums, blog, share private group discussions, and members will be sent invitations to earn compensation for their expertise. Profile pages, job postings,

Postgresql How to access plpgsql composite type array components

Let's say I've created a composite type in Postgresql: CREATE TYPE custom_type AS (x integer, y integer); I need to use it in a function as an array: ... DECLARE customVar custom_type[]; BEGIN .... My question is: how do I access custom_type's specific components? For example, I want to (re)assign 'x' for the third element in the custom_type array...
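Not an authoritative answer, but a sketch of what usually works (assuming the corrected type definition above): read a component with extra parentheses, and reassign by replacing the whole array element with a new ROW value.

DO $$
DECLARE
  customVar custom_type[];
  current_y integer;
BEGIN
  customVar[3] := ROW(1, 2)::custom_type;            -- set element 3 as a whole
  current_y    := (customVar[3]).y;                   -- read one component
  customVar[3] := ROW(99, current_y)::custom_type;    -- effectively reassigns x
END
$$;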

Postgresql Table indexes for Text[] array columns

I have a PostgreSQL database table with text[] (array) columns defined on it. I'm using these columns to search for a specific record in the database in this way: select obj from business where ((('street' = ANY (address_line_1) and 'a_city' = ANY (city) and 'a_state' = ANY (state)) or ('street' = ANY (address_line_1) and '1234' = ANY (zip_code))) and ('a_business_name' = ANY (business_name) or 'a_website' = ANY (website_url) or array['123'] && phone_numbers)) T
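A hedged sketch: the GIN array operator class covers containment and overlap (@>, <@, &&) but not scalar = ANY(...), so the equality tests need to be rewritten as containment for the indexes to be usable (index names here are arbitrary).

CREATE INDEX business_address_line_1_gin ON business USING gin (address_line_1);
CREATE INDEX business_city_gin           ON business USING gin (city);
CREATE INDEX business_phone_numbers_gin  ON business USING gin (phone_numbers);

-- rewrite, e.g.,  'street' = ANY (address_line_1)
-- as              address_line_1 @> ARRAY['street']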

Create a constant in Postgresql

Suppose that I have this query: select * from myTable where myTable.myCol in (1,2,3) I would like to do this instead: with allowed_values as (1,2,3) select * from myTable where myTable.myCol in allowed_values It gives me a syntax error in the first row; can you help me fix it?
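A CTE has to wrap a complete query, so one way to express the constant list (a sketch, not the only option) is a VALUES list referenced with IN (SELECT ...).

WITH allowed_values(v) AS (
  VALUES (1), (2), (3)
)
SELECT *
FROM myTable
WHERE myTable.myCol IN (SELECT v FROM allowed_values);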

Postgresql: create a table and delete it if it exists

I'm running a batch of Postgres queries from a Python script. Some queries are as follows: create table xxx [...] Usually I get the following error: psycopg2.ProgrammingError: relation "xxx" already exists I know that I can manually delete the xxx table, but I wonder if there is a way to avoid this error, something like: delete the xxx table if it exists. Thanks
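A sketch of the two usual options, assuming the script may drop and rebuild the table freely (the column list below is a placeholder).

DROP TABLE IF EXISTS xxx;
CREATE TABLE xxx (id integer);          -- placeholder columns; use the real definition

-- or, on PostgreSQL 9.1+, keep the table if it is already there:
CREATE TABLE IF NOT EXISTS xxx (id integer);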

OpenShift: How to connect to postgresql from my PC

I have an OpenShift app, and I just installed a PostgreSQL DB on the same cartridge. I have the PostgreSQL DB installed, but now I want to connect to the DB from my PC so I can start creating new tables. Using port forwarding I found my IP for the PostgreSQL DB to be 127.3.146.2:5432, and under my web account I see my Database: txxx User: admixxx Password: xxxx Then, using RazorSQL, I try to set up a new connection, but it keeps coming back as user/password incorrect. If I try and use the local IP to connect

Postgresql Postgres/full text search showing a preview of part of a document

I'm using postgres 9.3 with full text search and I'm running a query like select * from jobs where fts @@ plainto_tsquery('pg_catalog.english','search term'); I'm getting the proper results, however, I'd like to be able to get a portion of the search results that match the terms searched. The FTS column is just a to_tsvector() of the description column. What I'd like to do is show a short excerpt of the description, with the terms highlighted. Any ideas on how I'd achieve this?
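ts_headline() is meant for this; a hedged sketch against the columns named above (the highlighting options are illustrative, not required).

SELECT ts_headline('pg_catalog.english', description,
                   plainto_tsquery('pg_catalog.english', 'search term'),
                   'MaxFragments=2, MinWords=5, MaxWords=20') AS excerpt
FROM jobs
WHERE fts @@ plainto_tsquery('pg_catalog.english', 'search term');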

Postgresql installing postgres via puppet with custom data_directory

I'm trying to install postgres 9.1 on an ubuntu 12.04 machine using puppet v3.4.3 and puppetlabs/postgresql module v3.3.0. I want the data_directory to point to a large disk I've mounted. If I change the datadir property of postgresql::globals it doesn't seem to do anything. The postgres.conf file still has data_directory pointing to /var/lib/postgresql/9.1/main Then I also tried using postgresql::server::config_entry to change the data_directory param in postgres.config but that gives the foll

Postgresql Storing a timestamp as a default value in hstore

I am trying to store the current timestamp as a default value in an hstore. I tried using now() but all that is stored is "last_collected"=>"now()". Here is what I have: '"level"=>"1", "last_collected"=>now()'::hstore Is there a correct way to do this or is it even possible? Thanks!
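Everything inside the hstore literal is text, so now() is never evaluated; a sketch of building the value from expressions instead (the ALTER TABLE target below is an assumption).

SELECT hstore(ARRAY['level', '1', 'last_collected', now()::text]);

-- as a column default, the same expression form works:
-- ALTER TABLE my_table ALTER COLUMN attrs
--   SET DEFAULT hstore(ARRAY['level', '1', 'last_collected', now()::text]);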

PostgreSQL: Compute the number of hours in the day on daylight saving time

I'm trying to come up with a query that will correctly report that there are 25 hours in the day when daylight saving time ends. My table has a column of type timestamptz called hourly_timestamp. The incorrect answer I have so far looks like this: select EXTRACT(epoch FROM tomorrow-today)/3600 from( select date_trunc('day', timezone('America/New_York', hourly_timestamp) as today , date_trunc('day', timezone('America/New_York', hourly_timestamp))) +
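A sketch of one way to get 25 on the fall-back day (the table name is an assumption): truncate in local time, then convert the local midnights back to timestamptz before subtracting, so the difference measures real elapsed time.

SELECT local_day,
       EXTRACT(epoch FROM (next_midnight - midnight)) / 3600 AS hours_in_day
FROM (
  SELECT DISTINCT
         date_trunc('day', hourly_timestamp AT TIME ZONE 'America/New_York') AS local_day,
         date_trunc('day', hourly_timestamp AT TIME ZONE 'America/New_York')
             AT TIME ZONE 'America/New_York' AS midnight,
         (date_trunc('day', hourly_timestamp AT TIME ZONE 'America/New_York')
             + interval '1 day') AT TIME ZONE 'America/New_York' AS next_midnight
  FROM my_table                                -- table name assumed
) d;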

Postgresql Importing data from csv files to postgres table without creating table first

I have tens of CSV files with over a hundred columns in each. I need to upload these files to Postgres tables so that I can process them and transfer the data to relational tables. I don't want to process each file manually to extract the column names, as this might be a repetitive process. Neither the pgAdmin import tool nor the COPY command processes the first row to create the columns of the table. So what would be the best approach to handle this issue?

How can I execute a least cost routing query in postgresql, without temporary tables?

How can I execute a telecoms least cost routing query in PostgreSQL? The purpose is to generate a result set ordered by the lowest price across carriers. The table structure is below SQL Fiddle CREATE TABLE tariffs ( trf_tariff_id integer, trf_carrier_id integer, trf_prefix character varying, trf_destination character varying, trf_price numeric(15,6), trf_connect_charge numeric(15,6), trf_billing_interval integer, trf_minimum_interval integer ); For instanc
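A sketch without temporary tables: pick the longest matching prefix per carrier with DISTINCT ON, then order the carriers by price (the dialled number is a made-up example).

SELECT *
FROM (
  SELECT DISTINCT ON (trf_carrier_id)
         trf_carrier_id, trf_prefix, trf_destination,
         trf_price, trf_connect_charge
  FROM tariffs
  WHERE '35550123456' LIKE trf_prefix || '%'   -- dialled number: an assumption
  ORDER BY trf_carrier_id, length(trf_prefix) DESC
) best_match_per_carrier
ORDER BY trf_price, trf_connect_charge;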

Postgresql EXECUTE of dynamic SQL does not set FOUND in plpgsql

I execute an SQL statement using dynamic SQL in a trigger, as the trigger will run across multiple tables. The SQL will select from a table and check if there are results; if there are, do nothing; if there are no results, insert into a table. It works with the first plpgsql SELECT, but the FOUND variable is true even though the result is empty. create function test() returns trigger $Body$ execute 'select * from ' || quote_ident(TG_TABLE_NAME) || '_table1 where condition'; if not found then insert into x values(anything); end if; execut
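A bare EXECUTE does not set FOUND; a sketch of one workaround is to count the rows yourself with EXECUTE ... INTO (GET DIAGNOSTICS ... ROW_COUNT after the EXECUTE works too). The WHERE clause and table x are placeholders carried over from the question.

CREATE OR REPLACE FUNCTION test() RETURNS trigger AS $Body$
DECLARE
  hits bigint;
BEGIN
  EXECUTE format('SELECT count(*) FROM %I WHERE condition',
                 TG_TABLE_NAME || '_table1')
  INTO hits;
  IF hits = 0 THEN
    INSERT INTO x VALUES ('anything');
  END IF;
  RETURN NEW;
END;
$Body$ LANGUAGE plpgsql;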

Moving window average in postgreSQL

I have a data set in a csv file that contains dates, category and values. However, the dates might have gaps. E.g. Date | Category | Value 2016-01-01 Category A 6 2016-01-02 Category A 7 2016-01-03 Category A 4 2016-01-01 Category B 4 2016-01-01 Category C 16 2016-01-02 Category C 8 2016-01-02 Category D 5 I imported the data in a table in PostgreSQL. I need to calculate a rolling average for the past 7 days for each category (lets s
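A sketch of one approach (table and column names are assumptions): generate the missing days, cross join the categories, and average over a 7-row window, which equals 7 calendar days once the gaps are filled; days with no row contribute NULL, which avg() ignores.

SELECT c.category,
       d.day::date AS day,
       avg(v.value) OVER (PARTITION BY c.category
                          ORDER BY d.day
                          ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS rolling_avg_7d
FROM generate_series(date '2016-01-01', date '2016-01-31', interval '1 day') AS d(day)
CROSS JOIN (SELECT DISTINCT category FROM category_values) c
LEFT JOIN category_values v ON v.date = d.day::date AND v.category = c.category
ORDER BY c.category, day;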

Postgresql Error while installing PostGIS on RHEL_5?

I am trying to install PostGIS on a RHEL_5 system and failing at it miserably. The way I am trying to install it is as follows. I copy over all the artifacts of the postgresql server (the bin, lib, and share directories) into a directory called pgsql and place it in the root directory, i.e. /pgsql is the directory which contains things like pg_config and all the other libs and bins that one gets by installing postgresql using the standard installation. All the dependencies of PostG

PostgreSQL array of elements that each are a foreign key

I am attempting to create a DB for my app and one thing I'd like to find the best way of doing is creating a one-to-many relationship between my Users and Items tables. I know I can make a third table, ReviewedItems, and have the columns be a User id and an Item id, but I'd like to know if it's possible to make a column in Users, let's say reviewedItems, which is an integer array containing foreign keys to Items that the User has reviewed. If PostgreSQL can do this, please let me know! If not,
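Foreign-key constraints cannot reference individual array elements, so the conventional answer is still a junction table; a sketch with assumed key column names.

CREATE TABLE reviewed_items (
  user_id integer NOT NULL REFERENCES users (id),
  item_id integer NOT NULL REFERENCES items (id),
  PRIMARY KEY (user_id, item_id)
);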

Insert into table from select distinct query in postgresql

I have a table with 33 columns that has several duplicates, so I am trying to remove all the duplicates this way, because this SELECT DISTINCT query returns the correct number of rows. CREATE TABLE students ( school char(2),sex char(1),age int,address char(1),famsize char(3), Pstatus char(1),Medu int,Fedu int,Mjob varchar,Fjob varchar,reason varchar, guardian varchar,traveltime int,studytime int,failures char(1), schoolsup varchar,famsup varchar,paid varchar,activities varchar, nursery varchar,hi
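A sketch of the usual pattern: materialise the distinct rows into a new table, then swap it in (note that CREATE TABLE AS does not copy constraints, defaults, or indexes).

CREATE TABLE students_dedup AS
SELECT DISTINCT * FROM students;

-- once the row count has been verified:
-- DROP TABLE students;
-- ALTER TABLE students_dedup RENAME TO students;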

Postgresql Full text search when words have the same start

I'm new to full text search engines. I have a table filled with French and English words, but I'm having weird issues when I try to implement the requests. The goal is to do a search engine with auto-completion. Currently the WHERE clauses are written with ILIKE, but I heard that it's not 'scalable'. Let's say that I have a book table: CREATE TABLE t_books ( title TEXT ); INSERT INTO t_books VALUES('Admin'); INSERT INTO t_books VALUES('Administratif'); INSERT INTO t_books VALUES('Adminare');--
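For prefix-style auto-completion, a hedged sketch of the two common routes: a prefix tsquery (term:*), or the pg_trgm extension so the existing ILIKE queries can use an index.

-- route 1: full text search with a prefix matcher
CREATE INDEX t_books_title_fts ON t_books USING gin (to_tsvector('simple', title));
SELECT title
FROM t_books
WHERE to_tsvector('simple', title) @@ to_tsquery('simple', 'admin:*');

-- route 2: keep ILIKE but make it indexable with trigrams
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX t_books_title_trgm ON t_books USING gin (title gin_trgm_ops);
SELECT title FROM t_books WHERE title ILIKE 'admin%';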

Postgresql Insert Primary Key for every Foreign Key inserted

In the tables below, the films table is a subclass of the shows table. For every new row inserted (via the web interface) into table shows I want to insert a row into films. For example, INSERT INTO shows (title) VALUES ('title'); This will add a new row in shows with showid = next value in the sequence, and title = 'title'... What I want is to get the showid value (from shows) and insert it into a new row in the films table. How can I do that? CREATE SEQUENCE shows_showid_seq; CREATE TABLE "shows" ( "showid" BIGINT
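A sketch using INSERT ... RETURNING inside a data-modifying CTE (PostgreSQL 9.1+), so the generated showid is handed straight to the films insert; the films column list is an assumption since its definition is cut off above.

WITH new_show AS (
  INSERT INTO shows (title) VALUES ('title')
  RETURNING showid
)
INSERT INTO films (showid)
SELECT showid FROM new_show;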

Postgresql migration to pgsql, error in a function with select into

I am migrating from SQL Server to pgsql. I use a script created in Java to do that, but when I try to convert this procedure (SQL Server) to a function and test it in the DB (pgsql), the console shows an error in the #paso1 part. This is the code in SQL Server: /****** Object: StoredProcedure [dbo].[paBalanceClasificado] Script Date: 30/11/2017 16:38:42 ******/ SET ANSI_NULLS OFF GO SET QUOTED_IDENTIFIER OFF GO CREATE PROCEDURE [dbo].[paBalanceClasificado] @empresa int, @fecha1 smalldatetime, @fech
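Without the full function body this is only a guess, but a frequent stumbling block in such ports is that T-SQL's SELECT ... INTO #paso1 creates a temp table, while in PL/pgSQL SELECT ... INTO assigns to variables; a sketch of the explicit equivalent.

-- replace the placeholder query with whatever originally filled #paso1
CREATE TEMP TABLE paso1 AS
SELECT 1 AS placeholder_column;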

How to safely configure postgresql data folder with docker?

version: '3' services: db: image: postgres volumes: - ./postgres-data:/var/lib/postgresql/data web: build: . command: python3 manage.py runserver 0.0.0.0:8000 volumes: - .:/code ports: - "8000:8000" depends_on: - db Is this configuration safe for working with PostgreSQL using Docker? Do I need any more configuration to make it safer (e.g. .dockerignore)? Is there a risk that volumes are two-way bindings and that this may cause data loss?

difference between these two postgreSQL start commands?

Setting up psql for a second time, and I came across a guide that told me to use this line to start the server: pg_ctl -D /usr/local/var/postgres start. Before, I was taught to use this line: postgres -D /usr/local/var/postgres I was wondering what the difference is between the two and whether there are advantages to one over the other?

Postgresql: Create FUNCTION that loads data from url

Postgres Version: 10 I would like to create a function that takes a URL as parameter, executes the curl command and returns the loaded data. Something like this: CREATE OR REPLACE FUNCTION get_content_from_url(url text) RETURNS text LANGUAGE plpgsql AS $$ BEGIN RETURN PROGRAM 'curl ' + url; END $$; Is this possible? How do I accomplish this? UPDATE: I also tried the following plsh function: CREATE FUNCTION load_file_content (text) RETURNS text AS ' #!/bin/bash curl $1 ' LANGUAGE plsh; B

Joining multiple nested queries in Postgresql

I want to join information from 4 tables. It's super complicated, and I'm not a guru with subqueries. Would appreciate some help, if anyone can understand this. I have a product table, and I want to look up the dealer (in a dealer info table) and join the results. Then I need to join the results on product.owner to a table called accounts (on account.name). I think I worked it out as this: d as (SELECT device_id,dealer_id,owner_id from products) i as (LEFT JOIN dealer ON public
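The fragments above are close to valid CTE syntax; a hedged sketch of how the chain might be spelled out (the join keys and dealer columns are assumptions).

WITH d AS (
  SELECT device_id, dealer_id, owner_id
  FROM products
),
i AS (
  SELECT d.*, dealer.name AS dealer_name
  FROM d
  LEFT JOIN dealer ON dealer.id = d.dealer_id
)
SELECT i.*, accounts.*
FROM i
LEFT JOIN accounts ON accounts.name = i.owner_id;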

Postgresql Window function without ORDER BY

There is a window function without ORDER BY in OVER () clause. Is there a guarantee that the rows will be processed in the order specified by the ORDER BY expression in SELECT itself? For example: SELECT tt.* , row_number() OVER (PARTITION BY tt."group") AS npp --without ORDER BY FROM ( SELECT SUBSTRING(random() :: text, 3, 1) AS "group" , random() :: text AS "data" FROM generate_series(1, 100) t(ser) ORDER BY "group", "data" ) tt ORDER BY tt."group", npp;
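In general there is no such guarantee: without ORDER BY inside OVER(), the numbering order is unspecified even if the subquery is sorted. A sketch of the safe version, stating the ordering in the window definition itself.

SELECT tt.*,
       row_number() OVER (PARTITION BY tt."group" ORDER BY tt."data") AS npp
FROM (
  SELECT SUBSTRING(random()::text, 3, 1) AS "group",
         random()::text                  AS "data"
  FROM generate_series(1, 100) t(ser)
) tt
ORDER BY tt."group", npp;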

Docker Postgresql ERROR HINT: The server must be started by the user that owns the data directory postgresql rhel

I have a docker compose file where I am launching PostgreSQL with a shared volume, but I am continuously getting the below ERROR. 2018-10-11 14:57:01.757 GMT [81] LOG: skipping missing configuration file "/postgresql/data/postgresql.auto.conf" | 2018-10-11 14:57:01.768 GMT [81] FATAL: data directory "/postgresql/data" has wrong ownership | 2018-10-11 14:57:01.768 GMT [81] HINT: The server must be started by the user that owns the data directory. My docker compose file is as below: addb: imag

Postgresql postgres function takes parameter bytea while calling instead of varchar

ERROR: function dharani.fn_generate_ror_1b_citizen(bytea, character varying) does not exist at character 15 HINT: No function matches the given name and argument types. You might need to add explicit type casts. STATEMENT: select * from dharani.fn_generate_ror_1b_citizen($1,$2) ERROR: function dharani.fn_generate_pahani_citizen(bytea, bytea, character varying) does not exist at character 15 HINT: No function matches the given name and argument types. You might need to add explicit type cas

Postgresql ERROR: operator does not exist: timestamp without time zone > integer)

I am using Rails 5.2 and passing a date parameter. To simplify my example, look at the SQL query below; it is similar but simplified, as the real one is too complex to be used with Active Record methods, so I need to run the raw SQL. sql = 'Select * FROM mytable WHERE created_at > #{@start_date_time}' 1) How do I sanitize the parameters as I pass them into a string or the execute command? 2) What format should my date be in? I tried '2018-01-01 00:00:00.000' and '2018-01-01' and both error. CODE

Applying Entity Framework Core's Database Update to PostgreSQL server on a docker container with SSH

I'm a bit new to SQL and Docker. I've recently created a container for PostgreSQL on my Linux server that can be accessed by SSH. I am trying to manage it using the Entity Framework on .NET Core 2.2. I'm trying to go by Npgsql's official documentation, but there isn't any provision for connection via SSH. The example they've provided for the connection string is: optionsBuilder.UseNpgsql("Host=my_host;Database=my_db;Username=my_user;Password=my_pw") Where: my_host is set to the docker cont

PostgreSQL Marketing Report

I'm writing out a query that takes ad marketing data from Google Ads, Microsoft, and Taboola and merges it into one table. The table should have 3 rows, one for each ad company, with 4 columns: traffic source (ad company), money spent, sales, and cost per conversion. Right now I'm just dealing with the first 2 till I get those right. The whole table's data should be grouped within a given month. Right now the results I'm getting are multiple rows from each traffic source, some of th
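Hard to say more without the schemas, but the shape is usually one aggregate per source stacked with UNION ALL; every table and column name below is a placeholder.

SELECT 'google' AS traffic_source, sum(spend) AS money_spent, sum(sales) AS sales
FROM google_ads
WHERE date_trunc('month', spend_date) = date '2021-01-01'
UNION ALL
SELECT 'microsoft', sum(spend), sum(sales)
FROM microsoft_ads
WHERE date_trunc('month', spend_date) = date '2021-01-01'
UNION ALL
SELECT 'taboola', sum(spend), sum(sales)
FROM taboola_ads
WHERE date_trunc('month', spend_date) = date '2021-01-01';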

Postgresql select from oneToMany with condition

I'm trying to select the item from the items table with its offer price when it has an offer, and otherwise with the original price. For this I have 2 tables, items and priceList, and this is an example: | itemId | itemName | |--------|---------------| | 1 | bikex1 | | 2 | bikex2 | | 3 | bikex3 | | 4 | bikex4 | | priceIDId | itemID | itemPrice | priceStatus | |-----------|----------|-----------|-------------| | 1 | 1 |100 | offer |
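A sketch with DISTINCT ON: join the price list and keep one row per item, preferring the offer row when it exists (identifiers assume the tables were created without quoted, case-sensitive names).

SELECT DISTINCT ON (i.itemId)
       i.itemId, i.itemName, p.itemPrice, p.priceStatus
FROM items i
JOIN priceList p ON p.itemID = i.itemId
ORDER BY i.itemId, (p.priceStatus = 'offer') DESC;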

Postgresql How to Generate Scripts For All Triggers and tables in Database Using pgAdmin4

I'd like to generate an SQL script that contains the SQL to create all of the triggers and tables that exist in our database. I have already tried the method where you right-click the table, select script -> create Scripts. While this does create a SQL script for the table, it does not create a script for the trigger functions of that table. What should I do? I also tried: SELECT tgrelid::regclass , tgname , pg_get_triggerdef(oid) , pg_get_functiondef(tgfoid) FROM pg_trigger
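The catalog query above is close; a hedged completion that skips internal constraint triggers and emits DDL for both the trigger and its function.

SELECT tgrelid::regclass             AS table_name,
       tgname                        AS trigger_name,
       pg_get_triggerdef(oid) || ';' AS trigger_ddl,
       pg_get_functiondef(tgfoid)    AS function_ddl
FROM pg_trigger
WHERE NOT tgisinternal;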

Postgresql How to update postgres JSONB with variable params in golang?

I have a table in cockroachdb/postgres as below: column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden +-------------+-----------+-------------+----------------+--------------------+-----------+-----------+ id | STRING | false | NULL | | {primary} | false student | JSONB | true | NULL | | {} | false (2 rows) id |
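On the SQL side, a hedged sketch of the two usual shapes (the table name is an assumption); the Go layer only needs to bind the JSON fragment and the id as ordinary parameters.

-- merge a whole fragment, e.g. $1 = '{"name": "alice"}'
UPDATE students_tbl SET student = student || $1::jsonb WHERE id = $2;

-- or set a single path
UPDATE students_tbl SET student = jsonb_set(student, '{address,city}', to_jsonb($1::text)) WHERE id = $2;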

Postgresql NiFi: How to move CSV content and its metadata to a single table in a Postgres database using NiFi

I have CSV files and I want to move the content of the files along with their metadata (file name, source (to be hard-coded), control number (part of the file name, to be extracted from the file name itself)) using NiFi. So here is the sample file name and layout - File name - 12345_user_data.csv (control_number_user_data.csv) source - Newyork CSV File Content/columns -  Fields - abc1, abc2, abc3, abc4  values - 1,2,3,4 Postgres Database table layout Table name - User_Education fields name  - contro

Postgresql - restore to savepoint across transaction boundaries

Is there a way to roll back to a "committed savepoint"? AFAIK, the actual savepoints supported by postgresql are subtransactions and lose their meaning when the enclosing transaction commits or aborts. Are there "savepoints" across transaction boundaries? Basically, what I want is to execute these three transactions in order: Transaction ~ A BEGIN TRANSACTION; COMMIT SAVEPOINT 'before_a'; DO SOMETHING; COMMIT TRANSACTION; Transaction ~ B BEGIN TRANSACTION; DO SOMETHING_ELSE; COMMI

Postgresql Springboot Pessimistic Locking

I am trying to achieve row-level locking in the Postgres DB. Below is the sample code Repository : @Repository public interface DeploymentRepository extends JpaRepository<Deployment,Long> { @Lock(LockModeType.PESSIMISTIC_WRITE) List<Deployment> findByStatus(Status status); } Scheduler function calling lockable method: @Scheduled(initialDelay = 1000, fixedRate = 10000) public void checkStatus(){ try { for (Deployment deployment : deploymentRepository.findBySt

Optional properties in a composite type (Postgresql)

I've created a composite type which corresponds to my TypeScript interface for one of my schemas; however, I can't seem to find any documentation on how to do this. Here's an example: CREATE TYPE PROMOTES AS (level INT, roles TEXT[], strip_roles BOOLEAN); and I wish to mark strip_roles as an optional property, with a default value of true. Is this even possible? And if yes, how would I go about it?
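Attributes of a composite type cannot carry DEFAULT clauses, so a common workaround (a sketch, not the only option) is to allow NULL and apply the default wherever the value is read.

-- NULL in strip_roles is treated as "use the default"
SELECT COALESCE((ROW(3, ARRAY['mod'], NULL)::promotes).strip_roles, true) AS strip_roles;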

Why is my psql (PostgreSQL) not inserting the email and password?

I've created a table using the following code and I'm encrypting the password using bf. CREATE EXTENSION pgcrypto; CREATE TABLE auth ( id SERIAL PRIMARY KEY, name TEXT NOT NULL, dob DATE NOT NULL, email TEXT NOT NULL UNIQUE, password TEXT NOT NULL ); After this, if I try to INSERT the data using the following: INSERT INTO auth (name, dob, email, password) VALUES ( 'Divyansh' '1995-09-21' 'divyanshkumar@gmail.com', crypt('password', gen_salt('bf')) ); I got the error "INSER
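Hard to be certain without the full error text, but the VALUES list above is missing the commas between the first three literals; a sketch of the corrected insert.

INSERT INTO auth (name, dob, email, password)
VALUES ('Divyansh',
        '1995-09-21',
        'divyanshkumar@gmail.com',
        crypt('password', gen_salt('bf')));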

Parallelisation of sum in PostgreSQL

I have a simple table (Test) which contains 10 million records, and I am considering the following simple query: explain (costs off) select ( (select count(value) from Test) +(select count(value) from Test) +(select count(value) from Test) +(select count(value) from Test)); I expected its subqueries to be executed in parallel, but the execution time grows linearly with the number of subqueries, even though their total number is less than the number of CPUs on my server
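PostgreSQL does not run independent scalar subqueries in the select list concurrently with each other; each subquery can still use a parallel aggregate on its own if the settings allow it. A sketch of what to check (9.6+ settings).

SET max_parallel_workers_per_gather = 4;
EXPLAIN (costs off)
SELECT count(value) FROM Test;   -- look for Gather / Partial Aggregate nodes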

Postgresql Proc SQL to Postgres UTF-8 database: non-printable character shown as normal question mark, can't remove

I need advice for a problem I'm facing: I use SAS 9.4 (desktop version) to connect to a Postgres database with the Unicode Postgres ODBC driver. I'm using a proc sql statement to retrieve the data and create a SAS data file. There is one issue: one entry has the following value in the database in pgAdmin: "CAR " But when I look at the SAS data file that proc sql created, it looks like this: "CAR ?" Just a normal question mark. The compress function with _FIELD = compress(_FIE

Postgresql Docker-Compose cannot find config env file

I've created an image from my Go project that has a config env file. Here's my script for the Dockerfile FROM alpine AS base RUN apk add --no-cache curl wget FROM golang:1.15 AS go-builder WORKDIR /go/app COPY . /go/app RUN GO111MODULE=on CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /go/app/main ./main.go FROM base COPY --from=go-builder /go/app/main /main CMD ["/main"] I also created a docker-compose file to connect with postgresql like this: version: "3.7" services: postgr
